Data sonification for citizen science

Posted May 23, 2016 by Amber Griffiths

We're working on two sonification projects at FoAM Kernow - Red King with Ben Ashby and Sonic Kayaks with Kaffe Matthews - so I've been doing a bit of research into the field to better familiarise myself with it...

Audification seems to be data sonification at its purest – simply transforming data directly into sound. This approach has been around for a while – familiar examples include Geiger counters, which transform ionising radiation levels into audible clicks, and pulse oximeters, which emit higher pitches for higher blood oxygen concentrations (the little clip that goes on your finger at the hospital, feeding into a machine that beeps). Transdisciplinary researcher Till Boverman pointed out to me that this works in a medical setting because our ear can't physically shut out sound – “sounds are always (unconsciously) processed and then forwarded to the consciousness (or not), depending on its unfamiliarity”.
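To make the mapping concrete, here's a minimal audification sketch in Python (standard library only). The readings series is invented for illustration – any numeric time series would do – and each data point is played back directly as one audio sample, which is about as pure as the transformation gets.

```python
import math
import struct
import wave

# Invented stand-in for a real dataset: a slow wave with a faster wobble.
readings = [math.sin(i / 50.0) + 0.1 * math.sin(i / 3.0) for i in range(44100)]

# Audification: normalise the data to [-1, 1] and treat each point
# as one raw audio sample, with no further interpretation.
lo, hi = min(readings), max(readings)
samples = [2.0 * (r - lo) / (hi - lo) - 1.0 for r in readings]

with wave.open("audification.wav", "wb") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit audio
    wav.setframerate(44100)  # so 44100 data points become one second of sound
    wav.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```

One second of sound per 44,100 readings is the appeal here – a dataset far too long to eyeball can be skimmed by ear in moments.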

Sonification seems to be a somewhat more flexible term – people who spend their lives thinking about data sonification have come up with extensive definitions. My favourite is along the lines of 'systematic transformations that are reproducible, and have a precise connection with the underlying data'.

Audification and sonification can be used to sense, explore, and understand data, and are essentially just the auditory equivalent of visualisation. Detecting changes in rhythmic patterns, pitch, volume, tempo, timbre etc. can give insights into changes in scientific model outputs or empirical data.
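As a sketch of how that differs from audification, here's a simple parameter-mapping sonification in the same style (Python standard library, made-up data): rather than playing the values directly, each one is mapped to the pitch of a short tone, so rises and falls in the data become rises and falls in pitch.

```python
import math
import struct
import wave

RATE = 44100

def tone(freq, dur=0.2, amp=0.4):
    """One sine-wave note at the given pitch, with a linear fade-out."""
    n = int(RATE * dur)
    return [amp * (1 - i / n) * math.sin(2 * math.pi * freq * i / RATE)
            for i in range(n)]

# Made-up data series; any list of numbers would do.
data = [3.1, 3.4, 2.9, 5.8, 6.0, 5.7, 3.2, 3.0]

# Parameter mapping: scale each value into a pitch range (220-880 Hz here),
# so the shape of the data becomes a melodic contour.
lo, hi = min(data), max(data)
notes = []
for value in data:
    freq = 220 + (value - lo) / (hi - lo) * (880 - 220)
    notes.extend(tone(freq))

with wave.open("sonification.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(RATE)
    wav.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in notes))
```

The pitch range, note length and mapping are all arbitrary choices here – which is exactly the flexibility (and the danger) of sonification compared with audification.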

The possibilities are interesting, but the examples that spring to mind are a little underwhelming. Most follow the approximate formula – find-some-data-and-play-a-tune-with-it – and that's the end of that. A fab example of the limitations of data sonification comes from Sarah Angliss, who used handbells to express the first 48 digits of the transcendental number e, versus the metric tonnes of salmon sold each day on the London Stock Exchange. If you can tell the difference you are a better person than me.

As Sarah says in her blog, “Arguably, music from data can be an entertaining way to get some more arcane research projects into the public consciousness”. To do something a little more interesting with sonification, I think we have to start with the question...

Is sonification actually better than visualisation for anything?

Looking at sound art papers I kept coming across statements that human hearing is better than our other senses at recognising temporal differences – but citations were ominously absent. I found a few papers (e.g. this one) comparing visual with auditory sensing, but with a scientific hat on these tend to look fatally flawed, with extremely small and highly biased sample sizes – study subjects are typically relatively young, often from media or music departments at universities, and WEIRD (Western, Educated, Industrialised, Rich, Democratic). The studies also typically lack controls or sophisticated statistical analyses to account for co-variables.

I did a little digging into the cognitive science/neuroscience literature and emailed around a few researchers. It looks like, from a strictly neuronal point of view, we're quicker at hearing than seeing – so maybe sound is better for temporal perception. It takes about 70 milliseconds for visual signals to get from the eye to the primary visual cortex, but only something like 20 milliseconds for auditory stimuli to get from the ear to the auditory cortex (health warning – these numbers appear in neuroscience textbooks but I haven't been able to find any original citation). A recent paper suggests these senses might be more intertwined than those textbooks imply – but it does look like our spatial memory mostly uses visual information, while our temporal memory mostly uses auditory information.

I also received this from neuroscience researcher Jean-Paul Noel: “In terms of detecting small discrepancies (thresholds) for temporal or spatial gaps, temporal gaps are smaller for auditory stimuli and spatial gaps are smaller for visual stimuli”. There's enough there for me to be reasonably convinced that sonification would be a decent approach for exploring data – and that rhythmical patterns are probably the most important aspect of a data sonification.

There is an 'it's more complicated than that' caveat though – if the signals aren't reliable enough, the senses help each other out; for example, if visual input is rubbish, sound becomes more important. This suggests that any sonification for data analysis purposes had better be pretty clear, or we'll start looking for other sensory inputs.

Some spare thoughts

There doesn't seem to be a whole lot of point unless the audification/sonification provides something that just looking at the data can't – for most people it's going to be quicker and easier to graph their data than to turn it into sound, and if the sound conveys the same information, equally well perceived, why bother sonifying it? The higher speed of auditory than visual sensing could be useful though. Till Boverman wrote to me to say 'audification can be useful for skimming through large datasets or comparing similar large datasets' – and the biology seems to support that statement. But this example shows nicely one reason why we need to be a little careful of the 'auditory sensing is faster' argument – it takes 3 minutes 53 seconds to watch that video, whereas it would take a second or two to visually perceive the same data on a graph. It isn't exactly a large dataset though, so I think Till's statement still holds.

If we use sonification for detecting changes in large datasets, and use rhythmical change as our method of detection, we might be on to something. Another bit of advice from Till is useful here: “being trained to listen to rhythmical patterns in music makes us sensitive to rhythmical distortions; in general, everything that irritates our (culturally trained) sonic pattern recognition system (trained e.g. by listening to music of various styles) will pop out of the stream of information rushing by”. Having a background stream of sound ticking over while you get on with other things (like the oximeter in hospitals) seems like the perfect use for sonification – but the sound would have to be carefully thought out so it wasn't just irritating. The other nice thing about oximeters is that they're live data audification – generating the sound live seems a good way to make it more interesting, relevant and useful.
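A rough sketch of that idea, with everything invented for illustration (the 'sensor' stream, the normal range, the anomaly positions): steady readings keep a regular low tick ticking over in the background, while out-of-range readings break the pattern with a sharper, higher blip. It's written offline to a WAV file here for simplicity, but the same mapping could just as well run on live data.

```python
import math
import random
import struct
import wave

RATE = 44100

def blip(freq, dur=0.03):
    """A short percussive tick at the given pitch."""
    n = int(RATE * dur)
    return [0.5 * (1 - i / n) * math.sin(2 * math.pi * freq * i / RATE)
            for i in range(n)]

gap = [0.0] * int(RATE * 0.25)  # quarter-second rest, keeping the rhythm steady

# Invented 'sensor' stream: mostly steady around 10, with anomalies injected.
readings = [random.gauss(10, 0.3) for _ in range(40)]
for i in (12, 13, 27):
    readings[i] += 4.0

samples = []
for r in readings:
    # In-range readings keep the regular low tick; out-of-range readings
    # break the rhythm's character with a much higher blip that pops out.
    samples.extend(blip(1760 if abs(r - 10) > 2 else 440))
    samples.extend(gap)

with wave.open("monitor.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(RATE)
    wav.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```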

One other approach I left out is facilitating people hearing things they otherwise wouldn't – although it arguably is data audification or sonification, this tends to be considered 'field recording' instead. There are some great examples out there, like The Sound of the Earth from Lotte Geeven. For our Sonic Kayak project we're going to be doing a bit of this, with some real-time data sonification over the top.