September 30, 2023

Reconstructing rock music from brain activity

Music activates a widespread network in our brain. Researchers have now used the brain activity of test subjects with electrodes implanted in their brains to reconstruct the piece of music they were listening to during the recording. Artificial intelligence translated the brainwaves into a recognizable version of Pink Floyd's "Another Brick in the Wall". In the long term, the technology could help build more robust brain-computer interfaces for people who have lost the ability to speak. Instead of monotone sentences, the melody of intended speech could then also be reconstructed.

Music and language are closely related: the melody of speech carries important information about what we mean and what emotions we feel. Previous speech computers for people who can no longer speak due to paralysis issue sentences in a monotonous, robot-like voice. The so-called prosody, that is, rhythm, stress, and intonation, is missing. Until now, it was difficult enough to read intended words from brain activity. Melodic elements have been absent entirely, especially since their processing and generation in the brain is only beginning to be understood.

Signals from the surface of the brain

A team led by Ludovic Bellier at the University of California, Berkeley, has now succeeded in reconstructing a recognizable piece of music solely from the brain activity of music listeners. They used a dataset of 29 test subjects who had electrodes implanted in their brains because of epilepsy. These electrodes allowed the researchers to record signals directly on the surface of the brain, which is much more accurate than recordings on the scalp.


For the experiments, the subjects listened to a roughly three-minute snippet of Pink Floyd's rock song "Another Brick in the Wall" while their brain activity was recorded. The recordings were made in 2012 and 2013. With the technology available at the time, only the genre of the music could be inferred from brain activity. Bellier and his team have now analyzed the data again using modern methods of speech recognition, supported by artificial intelligence.
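The core of such decoding is a regression model that predicts the song's audio spectrogram from neural features. A minimal sketch of that idea in NumPy, using closed-form ridge regression on synthetic stand-in data (all dimensions, names, and the noise level are illustrative, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: n_times x n_electrodes neural features and an
# n_times x n_freqs audio spectrogram linked by a hidden linear map.
n_times, n_electrodes, n_freqs = 500, 64, 32
true_w = rng.normal(size=(n_electrodes, n_freqs))
neural = rng.normal(size=(n_times, n_electrodes))
spectrogram = neural @ true_w + 0.1 * rng.normal(size=(n_times, n_freqs))

# Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
X, Y = neural, spectrogram
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_electrodes), X.T @ Y)
reconstruction = X @ W

# Decoding quality: correlation between predicted and actual spectrogram
r = np.corrcoef(reconstruction.ravel(), Y.ravel())[0, 1]
print(f"reconstruction correlation r = {r:.3f}")
```

From a predicted spectrogram, an audible waveform can then be resynthesized with standard audio inversion methods.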

Spectrogram of the original audio (above) and of the reconstructed version of the Pink Floyd song (below). © Bellier et al., 2023 / PLOS Biology, CC BY 4.0

Reconstructed iconic piece of music

And indeed: "We were able to reconstruct a recognizable song directly from the neural recordings," the team reports. Melody and rhythm were on target, and the lyrics were slurred but understandable. To see which brain regions are particularly important for decoding, the researchers excluded signals from individual groups of the more than 2,500 electrodes in further analysis steps. In this way, they discovered that three brain regions in particular respond specifically to music: the superior temporal gyrus, the inferior frontal gyrus, and the sensorimotor cortex. "In the superior temporal gyrus, we discovered a previously unknown sub-region that specifically responds to musical rhythm," the team says.

They also identified structures that are particularly active when singing begins or when a musical instrument sets in again. While the left hemisphere tends to dominate in language processing, the results show that responses to music occur mainly in the right hemisphere.
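The ablation logic described above can be sketched as follows: fit a decoder with all electrodes, then refit with each anatomical group zeroed out and measure the drop in reconstruction quality. Everything here is a toy illustration on synthetic data (group names and sizes are placeholders, and by construction only the "STG" group carries signal), not the study's actual electrode layout:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: only the "STG" group carries signal, mimicking a region
# that is critical for decoding.
n_times, n_freqs, n_electrodes = 400, 16, 60
groups = {"STG": range(0, 20), "IFG": range(20, 40), "SMC": range(40, 60)}
neural = rng.normal(size=(n_times, n_electrodes))
w = np.zeros((n_electrodes, n_freqs))
w[list(groups["STG"])] = rng.normal(size=(20, n_freqs))
target = neural @ w + 0.1 * rng.normal(size=(n_times, n_freqs))

def fit_score(X, Y, alpha=1.0):
    """Ridge fit, then correlation between prediction and target."""
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    return np.corrcoef((X @ W).ravel(), Y.ravel())[0, 1]

baseline = fit_score(neural, target)
drops = {}
for name, idx in groups.items():
    ablated = neural.copy()
    ablated[:, list(idx)] = 0.0          # silence this electrode group
    drops[name] = baseline - fit_score(ablated, target)

# The group whose removal hurts decoding most is the most important one.
most_important = max(drops, key=drops.get)
print(most_important, {k: round(v, 3) for k, v in drops.items()})
```

Ablating the informative group collapses the reconstruction, while removing uninformative groups barely changes it, which is how region-specific contributions can be ranked.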

Musical interfaces between the brain and the computer?

“As brain-computer interfaces advance, the new findings present an opportunity to add musicality to future brain implants for people with neurological or developmental disorders that impair speech,” says Robert Knight, a colleague of Bellier. “With that, you could decode not only the linguistic content, but also part of the prosodic content of speech, part of the affect.” While other research teams working on brain-computer interfaces for speech recognition focus mostly on motor areas involved in coordinating the tongue, lips, and larynx, the current study focuses on auditory regions.


“Decoding from the auditory cortex, which is closer to the acoustics of the sounds than the motor cortex, is very promising,” adds Bellier. “This gives more color to what is being decoded.” However, it is unclear whether the decoding would have worked at all without electrodes implanted in the brain. “Current non-invasive technologies are not precise enough,” says Bellier. “Let’s hope, for patients’ sake, that in the future we will be able to read activity in deeper brain regions with good signal quality using electrodes attached to the outside of the skull. But we are still far from that.”

Source: Ludovic Bellier (University of California, Berkeley) et al., PLoS Biology, doi: 10.1371/journal.pbio.3002176