Lisa Muratori is a professor of physical therapy who works with patients suffering from neurological conditions, like Parkinson’s, that might impair their strides. “Gait is important,” she notes—if you’re walking too slowly or unevenly, you’re more liable to have accidents.

One tricky part of her practice is helping a patient figure out when their gait is drifting away from a stable pattern. Muratori’s solution: Put sensors in their shoes, which creates a terrific stream of data. The numbers show precisely when that walk goes wonky. But how should she show the patients that data? If you’re trying not to fall while wandering down the sidewalk, it’s crazy to peer at a screen.

So Muratori shifted senses, from the eyes to the ears, training patients to listen to their data. She collaborated with Margaret Schedel, a professor of music at Stony Brook University, to design software that picks up when a person’s stride goes off-kilter and alerts them by distorting the sound of an audiobook or music or whatever is playing in their earbuds. That way patients can instantly—and almost subconsciously—perceive errors and correct them. It’s an example of an intriguing new evolution in our big-data world: sonification, expressing data through sound.
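To make the idea concrete, here is a minimal sketch of how that kind of feedback loop could work. It is not Muratori and Schedel's actual software; the sensor readings, baseline, and thresholds are invented for illustration. Stride timing from the shoe sensors is compared against the wearer's own baseline, and the further a stride drifts, the more "distortion" the audio player would be asked to apply to whatever is in the earbuds.

```python
# A minimal sketch (not the actual Muratori/Schedel system): map stride-timing
# deviation from a patient's baseline to a 0-1 "distortion" amount that an
# audio engine could apply to whatever is playing.
from statistics import mean, stdev

def calibrate(baseline_intervals):
    """Learn the patient's normal stride timing (seconds between footfalls)."""
    return mean(baseline_intervals), stdev(baseline_intervals)

def distortion_amount(interval, baseline_mean, baseline_sd, threshold=2.0, ceiling=6.0):
    """Return 0.0 for a normal stride, rising toward 1.0 as the gait drifts.

    A stride within `threshold` standard deviations of baseline is treated as
    fine; beyond that, distortion grows linearly and saturates at `ceiling`.
    """
    z = abs(interval - baseline_mean) / baseline_sd
    if z <= threshold:
        return 0.0
    return min((z - threshold) / (ceiling - threshold), 1.0)

# Hypothetical data: a calibration walk, then a walk where the gait drifts.
baseline = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00]
m, sd = calibrate(baseline)

live_strides = [1.01, 1.00, 1.08, 1.15, 1.22, 0.99]
for interval in live_strides:
    amt = distortion_amount(interval, m, sd)
    # In a real system, `amt` would drive a filter or pitch warp on the audio.
    print(f"stride {interval:.2f}s -> distortion {amt:.2f}")
```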


Normally, of course, we think of data as visual, something we transform into charts and graphs when we want to see trend lines. But the ear is exquisitely sensitive and has abilities the eye doesn’t. While the eye is superior at perceiving sizes and ratios, the ear is better at detecting patterns that occur over time. It’s great for sensing fluctuations, even the most subtle ones.

For example, music professor Mark Ballora and meteorologist Jenni Evans, both at Penn State, recently turned hurricane data into a series of whooping sounds. In sonic form, they could highlight when a hurricane was moving into a lower-pressure mode and thus intensifying. Meanwhile, Wanda Díaz Merced, an astronomer at the South African Astronomical Observatory, discovered she could study the mechanics of supernova explosions by listening to gamma-ray bursts. “It was such an epiphany,” she tells me. “I could hear things you couldn’t as easily see in the data.”
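Again, a rough sketch of the underlying mapping, not Ballora and Evans' actual method: a storm's central-pressure readings (invented here) are scaled to pitch and rendered as a tone, so that a deepening, intensifying hurricane is heard as a rising sound. The sketch uses only the Python standard library.

```python
# A rough sketch of the general parameter-mapping idea (not Ballora and Evans'
# actual code): map central-pressure readings to pitch and render them as
# audio, so a lower-pressure, intensifying storm is heard as a rising tone.
import math
import struct
import wave

RATE = 44100  # samples per second

def pressure_to_freq(p_mbar, lo=880.0, hi=2880.0, p_min=900.0, p_max=1010.0):
    """Lower central pressure (stronger storm) -> higher pitch."""
    t = (p_max - p_mbar) / (p_max - p_min)          # 0 at weak, 1 at intense
    return lo + max(0.0, min(1.0, t)) * (hi - lo)

def render(pressures, seconds_per_reading=0.25, path="storm.wav"):
    frames = bytearray()
    phase = 0.0
    for p in pressures:
        freq = pressure_to_freq(p)
        for _ in range(int(RATE * seconds_per_reading)):
            phase += 2 * math.pi * freq / RATE
            sample = int(32767 * 0.4 * math.sin(phase))
            frames += struct.pack("<h", sample)   # 16-bit mono sample
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(bytes(frames))

# Hypothetical pressure track (millibars): the storm intensifies, then weakens.
render([1005, 998, 990, 975, 960, 952, 965, 985])
```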


So certainly sonification can be useful in science and medicine. But I think it could also be a boon in our everyday lives. We’re already walking around in our own sonic world, with smartphone-connected headphones plugged into our ears. And app notifications—the ding of the incoming text—are little more than simple forms of data turned into sound. Now imagine if those audio alerts were more sophisticated: What if they connoted something about the content of the text? That way, you could know whether to pull out your phone immediately or just read the message later. Or imagine if your phone chirped out a particular sequence or melodic pattern that informed you of the quality—the emotional timbre, as it were—of the email piling up in your inbox. (Routine stuff? A sudden burst of urgent activity from your team?) You could develop a sophisticated, but more ambient, sense of what was going on.

None of us need a cacophony of sonic alerts, of course, and there are limits to our auditory attention. But done elegantly, sonification could help create a world where you’re still as informed as you want to be, but hopefully less frayed by nervous glances at your screens. This could make our lives a bit safer too: Research at Georgia Tech’s sonification lab found that if car computer systems expressed more data audibly, we’d be less distracted while driving. Like Muratori’s patients, we could all benefit from having our ears a little closer to the ground.