Artificial intelligence has managed to reconstruct music from brain recordings made while volunteers listened to pieces from 10 musical genres. The reconstructed music resembles the original stimuli, but the lyrics are unintelligible because the technology does not yet take them into account.
Can you create music just by thinking about it? Well, that’s exactly what researchers from Google and Osaka University in Japan have achieved by using artificial intelligence (AI) to reconstruct music from human brain activity. The results of this research have been published on arXiv.
The researchers used music samples from 10 genres, including rock, classical, metal, hip-hop, pop and jazz, and played them to five subjects while observing their brain activity.
They recorded functional magnetic resonance imaging (fMRI) readings while the subjects listened. Unlike a structural MRI scan, fMRI records metabolic activity (blood flow in the brain) over time.
The readings were then used to train a deep neural network, which identified neural activity associated with different features of the music, such as genre, mood and instrumentation.
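To make the idea concrete, here is a minimal sketch of such a mapping, not the authors' code: a small network that regresses fMRI voxel responses onto a vector of music features. The voxel count, embedding size and the `MusicFeatureNet` name are all hypothetical.

```python
# Minimal sketch: map fMRI voxel responses to music-feature embeddings.
# Shapes, names and training details are illustrative, not the paper's.
import torch
import torch.nn as nn

N_VOXELS = 20000   # hypothetical: voxels in an auditory-cortex mask
EMB_DIM = 128      # hypothetical: size of the music-feature embedding

class MusicFeatureNet(nn.Module):
    """Small regression network from brain activity to music features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_VOXELS, 512), nn.ReLU(),
            nn.Linear(512, EMB_DIM),
        )

    def forward(self, fmri):
        return self.net(fmri)

model = MusicFeatureNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on a dummy batch: fMRI snapshots paired with the
# feature embeddings of the music clips the subject was hearing.
fmri_batch = torch.randn(8, N_VOXELS)
target_embeddings = torch.randn(8, EMB_DIM)
loss = loss_fn(model(fmri_batch), target_embeddings)
loss.backward()
optimizer.step()
```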
Intermediate stage
An intermediate step in the study was MusicLM, a model designed by Google that generates music from text descriptions. Like the fMRI-based analysis, it captures factors such as instrumentation, rhythm and emotion.
An example of a text prompt for MusicLM is: “Meditative, calm and relaxing song, featuring flute and guitar. The music is slow, focused on creating a feeling of peace and calm.”
The researchers linked MusicLM's music representations with the fMRI readings, allowing their AI model to reconstruct the music the subjects had listened to. Instead of a text prompt, brain activity provided the conditioning for the musical output.
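Conceptually, the generation step then conditions the music model on the brain-derived embedding instead of on a text prompt. The sketch below illustrates this swap; `embed_text` and `generate_music` are hypothetical stand-ins, since MusicLM's actual interface is not public.

```python
# Conceptual sketch of the conditioning swap. embed_text() and
# generate_music() are hypothetical placeholders, not MusicLM's API.
import torch

EMB_DIM = 128  # hypothetical embedding size, matching the sketch above

def embed_text(prompt: str) -> torch.Tensor:
    """Placeholder text encoder: prompt -> music-feature embedding."""
    torch.manual_seed(hash(prompt) % 2**31)
    return torch.randn(EMB_DIM)

def generate_music(conditioning: torch.Tensor) -> torch.Tensor:
    """Placeholder generator: embedding -> raw audio samples."""
    assert conditioning.shape[-1] == EMB_DIM
    return torch.randn(16000)  # one second of dummy audio at 16 kHz

# Normal text-to-music path, as MusicLM is usually prompted:
audio = generate_music(embed_text("calm, relaxing flute and guitar piece"))

# Brain-to-music path: the same generator, but conditioned on the
# embedding predicted from fMRI (e.g. by the network sketched earlier):
#   brain_embedding = model(fmri_snapshot)
#   audio = generate_music(brain_embedding)
```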
Syncopated music
The researchers called their AI model Brain2Music. “Our evaluation indicates that the reconstructed music resembles the original musical stimulus,” explains Timo Denk, one of the authors of the research, quoted by TechXplore.
He added, “The music generated resembles musical stimuli experienced by human subjects with respect to semantic properties such as style, instrumentation, and mood.”
In addition, they identified brain regions that reflect information from the textual description of the music.
The examples provided by the team feature strikingly similar pieces of music generated by Brain2Music from the subjects’ brain activity.
Lyrics pending
One of the songs selected was “Oops!…I Did It Again”, Britney Spears’ first top-10 hit of 2000.
Many of the song’s musical elements, such as the sound of the instruments and the rhythm, closely matched the reconstruction, although the lyrics were unintelligible. The researchers pointed out that the technology focuses on instrumentation and style, not lyrics.
“This study is the first to provide a quantitative explanation from a biological perspective,” Denk said. But he acknowledged that, despite advances in text-to-music models, “their internal processes are still not well understood.”
The researchers conclude that AI is not yet ready to peer into our brains and compose polished melodies, but that day may not be far off.
Potential applications
The potential applications of this research are diverse. It could allow people to create personalized music based on their preferences and emotional state.
It could also help people with neurological disabilities or disorders express themselves musically without the need for physical instruments or interfaces.
Furthermore, it may open new avenues for the study and understanding of the human brain and its relationship with music.
Reference
Timo I. Denk et al. Brain2Music: Reconstructing Music from Human Brain Activity. arXiv:2307.11078 (q-bio.NC). DOI: https://doi.org/10.48550/arXiv.2307.11078