Neuroscientists collect huge amounts of data, ranging from brain activity measurements to behavioral observations. Finding patterns in those data can be difficult even for computers, yet humans are remarkably good at one related task: matching sounds to moving images. Knowing that, neuroscientists have come up with a new way to translate some of their data into music videos, making it easier to explore the data and spot new information.
The research group of Elizabeth Hillman at Columbia University started by turning brain imaging data into sounds using a program called PyAnthem (inspired by Anthem, the fictional software in Douglas Adams’s “Dirk Gently’s Holistic Detective Agency”).
Neuroimaging data are usually shown visually, with different colors or color intensities on a map of the brain. Hillman’s group turned that information into sound instead. “Each region was a note, and when it was active, that note would get louder,” Hillman said in an interview for the scientific journal PLOS ONE.
David Thibodeaux, a graduate student in Hillman’s group, then drew on his own musical experience to refine the sounds the program generated and make the output more musical.
“He also came up with the idea to represent different variables as different musical instruments,” added Hillman.
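To give a rough feel for that mapping, here is a minimal sonification sketch. It is not PyAnthem’s actual interface, and the activity traces, frame rate, and pitches are made-up placeholders: each “region” is assigned a fixed pitch, its activity controls how loud that pitch is over time, and different waveforms or synthesized instruments could stand in for different variables.

```python
import numpy as np
from scipy.io import wavfile

# Illustrative sonification sketch (NOT PyAnthem's API): each brain region is
# a fixed pitch, and its activity trace drives the loudness of that pitch.

fs = 44100            # audio sample rate (Hz)
frame_rate = 10       # assumed imaging frame rate (Hz)
n_frames, n_regions = 200, 4          # 20 s of made-up data from 4 regions
t_frames = np.arange(n_frames) / frame_rate

# Synthetic activity traces in the range 0..1, stand-ins for real recordings.
activity = np.stack(
    [0.5 * (1 + np.sin(2 * np.pi * 0.05 * (r + 1) * t_frames)) for r in range(n_regions)],
    axis=1,
)

pitches = [261.63, 329.63, 392.00, 523.25]   # C4, E4, G4, C5

# Upsample each trace to audio rate so loudness changes smoothly, then sum the
# amplitude-modulated tones. Different waveforms (or instrument samples) could
# be substituted per variable, echoing the instrument idea described above.
t_audio = np.arange(int(n_frames / frame_rate * fs)) / fs
audio = np.zeros_like(t_audio)
for region, freq in enumerate(pitches):
    envelope = np.interp(t_audio, t_frames, activity[:, region])
    audio += envelope * np.sin(2 * np.pi * freq * t_audio)

audio /= np.abs(audio).max()                 # normalize before writing
wavfile.write("sonified_activity.wav", fs, (audio * 32767).astype(np.int16))
```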
These sound-based data can be useful when you need to explore multiple datasets at once. For example, a study might pair a video recording of mice performing a task with simultaneous measurements of the animals’ brain activity. If you displayed them side by side as two videos, it would be possible – but difficult – to follow exactly what is happening in both at once. By turning one dataset into sound, the two become easier to pair: one set of data as video, the other as sound.
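That pairing step can be as simple as laying the generated audio track over the behavior video. The sketch below shows one way to do this; it is not part of the published pipeline, the file names are placeholders, and it assumes ffmpeg is installed.

```python
import subprocess

# Overlay the sonified activity (the WAV from the sketch above) onto a
# behavior recording so both streams can be reviewed together.
subprocess.run(
    [
        "ffmpeg",
        "-i", "behavior.mp4",            # placeholder behavior video
        "-i", "sonified_activity.wav",   # audio generated from the brain data
        "-c:v", "copy",                  # keep the video stream as-is
        "-c:a", "aac",                   # encode the new audio track
        "-shortest",                     # stop when the shorter stream ends
        "paired_output.mp4",
    ],
    check=True,
)
```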
“Our auditory system is really incredible – not just in the way it encodes sounds, but our ability to process and interpret all that we hear,” said Hillman. “Our brains are also amazing in their ability to integrate information from our eyes and ears.” She gave the example of how we immediately notice when the audio and video of a film are not properly synced.
That ability to match sound and sight while watching a video can be very useful for exploring complex sets of data. It’s something we can do easily, but something a computer would struggle with, because the computer doesn’t yet know which information is relevant.
The video below shows an example of how data from measurements in a mouse brain were turned into sound using Hillman and Thibodeaux’s software. It comes from their recent research paper, which includes other images and videos as well.
Here, the sound closely matches the visual data it was generated from, but you could imagine playing the same sound over a video of different measurements taken at the same time, to see how the two line up.
These musical videos don’t replace detailed data analysis, but they can make it easier to spot new patterns or new information that would be difficult to see in calculations or graphs.