Aeon
A fountain on a timer drops a shower of water droplets every few minutes in Darling Harbour, Sydney, Australia. Photo by Trent Parke/Magnum

Sonifying the world

When Chris Chafe translates data into music, listeners sway to the beat of seizing brains, economic swings and smog

by Carren Jao

We might never know when the first set of thuds, thumps and taps were strung together to make music, or when people sang the first songs, but it is incontrovertible that our lives are steeped in rhythms and beats. We tap our feet. We bob our heads. We sing in the shower. Never mind that we might not even be able to carry a tune. We join in because it feels good, because music touches the deepest part of the self.

The British neurologist Oliver Sacks calls this mankind’s musicophilia. So innate is the attraction that many non-European languages don’t even have a word that translates as ‘music’. Instead, as the African ethnomusicology expert Ruth Stone at Indiana University explains, such cultures wrap singing, drama, dancing and instrumental performance into a ‘tightly bound complex of the arts’.

Even if musicians sometimes have trouble defining music, we know it is made up of sound: vibrating objects (such as the vibrating string of a guitar) push molecules outward, creating pressure waves that radiate from the source. Sound turned into music plays the human brain: it helps to ease anxiety, lowering cortisol levels more effectively than anti‑anxiety drugs. It fires the nucleus accumbens, a structure in the primitive limbic system, triggering dopamine and the same burst of pleasure as addictive drugs. And music builds social and cultural bonds – the lullabies of childhood, love songs, the rousing hymns of battle all work to nurture intimacy and cohesion in cultures around the world.

Unlike sex or hunger, music doesn’t seem absolutely necessary to everyday survival – yet our musical self was forged deep in human history, in the crucible of evolution by the adaptive pressure of the natural world. That’s an insight that has inspired Chris Chafe, Director of Stanford University’s Center for Computer Research in Music and Acoustics (or CCRMA, stylishly pronounced karma).

In his intensive, data-driven endeavour, Chafe takes the unnoticed rhythms of the natural world and ‘sonifies’ them, turning them into music – all the better to see how nature resonates with the music inside us. By pulling music out of the strangest places – from tomato plants, economic stats, even dirty air – he enables listeners to perceive phenomena viscerally, adding a new dimension of understanding to otherwise barely noticeable aspects of the world.

When Chafe first ‘played’ data extracted from the United States gross domestic product (GDP) alongside sounds derived from carbon dioxide levels, chills went up his spine. ‘It turned out the graphs were so similar that, at one point, the data was so tightly bound, I could hear a third [illusory] frequency,’ Chafe told me. Had he simply looked at the graphs for GDP and carbon dioxide, Chafe might have noted their similarity, but he wouldn’t have felt the way in which economic progress was so tightly bound to pollution levels.

‘There’s music in just about everything,’ said Chafe, a sprightly 62-year-old, whose eyes light up with intelligence. His voice has a bit of a surfer lilt thanks to his upbringing in Walnut Creek, California, where KPFA’s radio waves turned him on to a whole spectrum of music – not just classical, but world music, jazz and experimental. ‘It was ear-opening for a kid in suburbia to tune in and hear this other stuff called music also,’ he said.

We spoke one morning in his cosy, cluttered office set on an idyllic knoll on the Stanford campus, where years of work with fellow musicians have gifted his vocabulary with phrases such as ‘it’s a gas’ and ‘off on a jag’, along with the more conventional musical jargon of keys and scales. Wires snaked around the room, connecting a Linux desktop computer to a Zeta electric cello, which Chafe plays, and a 12-inch soundbar speaker. All in all, the message was clear: there is the music we think of – songs on the radio, classical harmonies in concert halls. Then there is ambient sound – the wind as it rushes through leaves. And there’s even the sound of our imagination – recalled voices, audio tracks replayed in our memories. All this can be music, as opposed to the more derogatory term ‘noise’.

Smog: using real-time data on levels of carbon dioxide, noise, temperature, humidity, light and volatile organic compounds from cities such as Jeddah and Dubai, this riff reflects the sound of smog

In 1992, the US composer John Cage said: ‘There is no noise, only sound.’ Cage is best known for his 1952 composition 4’33”, during which listeners are treated to four minutes and 33 seconds of… nothing. A performer gets up in front of an audience in silence. Without a focus, listeners’ ears open to the ambient sounds instead.

To understand Chafe’s musical style, see the world as Neo did in The Matrix movies, through streams of 1s and 0s

Like Cage, Chafe challenges our concept of music. It doesn’t encompass only earworms from the likes of Bruno Mars or Taylor Swift. It could also harbour more experimental definitions such as Peter Brötzmann’s 1968 free-jazz octet ‘Machine Gun’ (yes, that’s what it sounds like) or Steve Roden’s 2001 piece ‘A Quiet Flexible Background for a Harmonious Life’, which is more like the hum of my refrigerator. This experimental genre is where Chafe’s music lies.

To understand Chafe’s musical style, see the world as Neo did in The Matrix movies, through streams of 1s and 0s. Phenomena that lend themselves to the treatment: the contagion of microbes; the major causes of death in the 20th century; the most profitable Hollywood films of the past five years. ‘There has been this conversion of almost everything into some form of data,’ Chafe explained. Even sound.

Take a keyboard. It is essentially a set of piano keys attached to a loudspeaker box. When you push the middle C key, it is pre-programmed to transmit a message to the loudspeaker box to play the sound associated with the number 60. If you had programming skills, you could re-associate the number 60 with any sound – real or imagined – from the plink of a woodblock to the bleep of a computer. In the most basic sense, this is what Chafe does. With ChucK, a free, open-source, downloadable programming language for music, he transforms eye-glazing data: clarinets to represent carbon dioxide levels, for instance, overlaid with GDP data rendered in violins.
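The re-mapping of numbers to notes can be sketched in a few lines. This is a hypothetical illustration of the idea in Python rather than ChucK, Chafe’s actual tool: a series of data readings is scaled onto the keyboard’s note numbers, with middle C at 60.

```python
# Hypothetical sketch: scale a data series onto MIDI-style note numbers,
# so that, say, carbon dioxide readings become pitches a keyboard can play.

def data_to_midi_notes(values, low_note=48, high_note=84):
    """Linearly map each value onto a note-number range (middle C is 60)."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero for constant data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

co2_ppm = [380, 385, 390, 400, 395]   # made-up CO2 readings
print(data_to_midi_notes(co2_ppm))    # → [48, 57, 66, 84, 75]
```

Rising readings climb the scale, falling readings descend it, which is why a listener can ‘hear’ the shape of a graph without ever seeing it.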

Chafe’s first public piece of the data-driven kind, ‘Ping’, was created in 2001 with his longtime collaborator, the digital artist Greg Niemeyer of the University of California, Berkeley. This exhibit for the San Francisco Museum of Modern Art included eight aluminium loudspeaker towers arranged something like Stonehenge with a big steering wheel in the middle for visitors to move the speakers – and the sound: a combination of plucked guitars and banged aluminium, a bit like drums during a Chinese New Year parade. This was the heartbeat of a technological world, what we had been missing since the inception of the World Wide Web.

Another composition, ‘Oxygen Flute’ (2002), made photosynthesis sense-able. In this installation, Niemeyer built a sealed chamber out of welded metal and translucent silicone rubber. A metal walkway led visitors into the humid box where Niemeyer’s team planted bamboo stalks and hid gas sensors throughout.

As the bamboo stalks grew, they took in carbon dioxide and released oxygen, which the sensors read and transmitted to a nearby computer. The computer recorded the amount of gas in the air, measured in parts per million, and triggered a program that modulated the pitch and length of notes played by a simulated flute. In the background, the popcorn-like sound of thousands of snapping shrimp, recorded at Hopkins Marine Station in Monterey Bay, filled in the silent spaces.
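A mapping of that kind can be sketched as follows. This is a Python sketch of the general technique, not the installation’s actual code; the ppm range, pitch range and duration rule are illustrative assumptions.

```python
# Hypothetical sketch of a gas-to-flute mapping: a CO2 reading in parts per
# million sets both the pitch and the length of one simulated flute note.

def ppm_to_note(ppm, ppm_min=300.0, ppm_max=1000.0):
    """Map a CO2 reading (ppm) to a (frequency_hz, duration_s) note."""
    clamped = min(max(ppm, ppm_min), ppm_max)
    t = (clamped - ppm_min) / (ppm_max - ppm_min)  # normalise to 0.0..1.0
    freq = 220.0 * 2 ** (t * 2)  # sweep two octaves upward from A3
    duration = 1.5 - t           # more CO2 -> shorter, busier notes
    return freq, duration

print(ppm_to_note(650))  # → (440.0, 1.0)
```

As visitors breathed out and the bamboo breathed in, a stream of such readings would make the flute line wander up and down in pitch and pace, which is what let the chamber’s occupants hear the gas cycle shifting around them.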

Greenhouse effect: in this piece, called ‘Oxygen Flute’, data from 58 live bamboo stalks, tomato seedlings and sauna-like heat were fed into a computer to allow listeners to feel the Earth’s greenhouse effect

Because of the chamber’s tight quarters, human breathing rates could also affect the music, allowing visitors to interact with their breath, influencing the composition and, hopefully, walking away with an increased consciousness of their participation in the gas cycles sustaining our planet.

Chafe and Niemeyer accomplished a similar feat in ‘Tomato Music’ (2007-11) with five vats of ripening tomatoes. For this, Niemeyer enclosed growing tomato plants in clear vats for 10 days. Each vat was outfitted with a sensor that measured ethylene, carbon dioxide, light, temperature and air movement. A computer translated these data into the computer-simulated sound of an ancient Greek hydraulic organ.

The result is a tapestry of echoes, shivers and blips, in a blanket of knocks and taps that jitter according to the rhythm of the world

In a third collaboration, ‘The Black Cloud’ (2008), Niemeyer’s team planted air sensors in Swiss consulates around the world (both Chafe and Niemeyer are Swiss-born). These cheerful-looking, red 3”x 5” sensors took in readings for light, temperature, humidity, noise, carbon dioxide and volatile organic compounds in locations as far as Kathmandu and Tokyo. Using this stream of data, Chafe composed an ever-changing landscape of algorithm-driven music: a combination of musical instruments sampled from around the world including a Chinese oboe, an African string instrument and a slide guitar. ‘Some dozen sensors were reporting from around the globe and I sonified their readings in real time. I wanted something pan-global as part of the sound texture, hence the choice of instruments.’

Artist that he is, Chafe didn’t associate one instrument to one variable. Rather, each data point could be assigned two or more instruments. ‘It’s like being a painter in a studio,’ he said. ‘At the beginning, you have a simple notion of what to put on a canvas, then it starts to develop as the work dictates.’ The result is a tapestry of echoes, shivers and blips, amid a blanket of knocks and taps that jitter according to the rhythm of the world.

Chafe’s composition was revelatory. ‘Do you remember the SARS epidemic?’ Chafe asked me. ‘There was a point at the height of the epidemic that Mexico City had a quarantine. They closed the schools, federal offices, and told everyone to stay home if they could. We caught a 5 per cent drop in carbon dioxide in Mexico City in that time.’ That reactive measure to the epidemic would have been a blip in the world’s collective radar, buried under thousands of other news headlines – but, sonified, it was a telling moment when the world slowed down in the face of tragedy.

The work is part of a burgeoning new musical movement embraced inside CCRMA, where steel pans, the bamboo angklung, Chinese gongs, and drums are as prevalent as synthesisers, laptops and EEG machines.

Computers became instruments as early as 1951, when Australia’s first programmable digital computer produced the first musical tone generated by a machine. A few years later, the US computer pioneer Max Matthews, an engineer at Bell Labs, was writing programs for sound generation and introducing a new form of computer-generated music to the world.

But to composers such as Chafe, it is not just music for music’s sake – it is also a portal into the hidden recesses of the natural world. ‘Music has always had what I call the extra-musical – things that aren’t music themselves,’ says Chafe. ‘Like love songs. We’re making a song, but there is a story outside of music that’s driving it.’

His latest foray into data-driven music isn’t a fictionalised emotional rollercoaster ride but the narrative unfolding of a brain seizure in real time. Created in collaboration with the neuroscientist Josef Parvizi of the Stanford School of Medicine, this composition emerged as a way to convert EEG readings from patients experiencing brain seizures into actual audio. The aim is to save the lives of comatose patients, whose seizures are often invisible without elaborate brain-wave tests that must be evaluated by specialists off-site, a delay that can translate into irreversible damage, because every minute lost counts against the patient.

the song of a brain in seizure is like a disgruntled, desperate fly in search of a way out of a sealed jar

If they succeed, medics will no longer embark on that slow process; they will have access to a simple device, placed near the patient’s head, that converts brain waves into an easy-to-interpret auditory signal. Instead of waiting hours, trained staff could render a diagnosis in minutes.

This is possible because the song of a brain in seizure is easy to distinguish: like a disgruntled, desperate fly in search of a way out of a sealed jar. ‘Seizures are rhythmic and very loud compared to other states,’ said Chafe, who set this biological music to the disconcerting sound of the human voice, in the hopes of building greater empathy in the listener.

Seizing brain: music generated by the desperate sound of the seizing brain can save the lives of the afflicted by quickly alerting staff to what’s happening

The music of the seizing brain is so distinct, in fact, that as many as 90 per cent of trained interpreters are able to pick it out. Chafe and Parvizi are now going through the rigorous research process that would pass US Food and Drug Administration standards in the hope of introducing a commercial medical device.

Whether artistic or scientific, sonifying our world has a way of wrenching our guts, producing visceral reactions that are frequently missing from the merely visual. A painting we can easily walk away from, but a song – pleasant or not – is inescapable. ‘Carl Sagan had a real nice insight about this,’ Chafe told me. ‘The effect of using your ears is the easiest way to achieve, for him, teleportation.’

Sound is a way to connect with another being. Just think of the last time you sang and danced with someone else. Suddenly, that lonely existence transformed into a heady feeling of communality. Musical sound also allows listeners to empathise with data, an impossible feat were it not for music’s emotive qualities. Suddenly, abstract seizing brain waves in graphic form elicit natural, jittery responses from listeners. What was once inscrutable has become graspable.

Our inescapable auditory sense opens us to a larger world. Animated by rhythm, tempo and timbre, music in all its incarnations wrenches us free from the confines of our physical selves, connects us to others and re-situates us as part of a bigger, more mystifying world.