
A stunning discovery – AI interprets signals similarly to how human brains process speech
New research from the University of California, Berkeley, has found that artificial intelligence (AI) systems process signals in a way that is strikingly similar to how the brain interprets speech. Scientists believe this finding could help explain the ‘black box’ of how AI systems function.
How the research was conducted
During the study, scientists from the Berkeley Speech and Computation Lab placed electrodes on participants’ heads and measured brain waves as they listened to the single syllable ‘bah.’ They then compared this brain activity to the signals produced by an AI system that was trained to learn English.
A graph comparing AI and brain waves side-by-side shows the striking similarity
Gasper Begus, Assistant Professor of Linguistics at UC Berkeley and lead author of the study published in Scientific Reports, stated that “the shapes are remarkably similar.” He added that “that tells you similar things get encoded, that processing is similar.” A graph comparing the two signals side-by-side shows the striking similarity.
Begus explained that “there are no tweaks to the data. This is raw.” Despite recent advancements in AI technology, scientists have had a limited understanding of how these tools operate between input and output. With this new research in hand, scientists can begin to better understand the internal workings of AI systems. These tools are predicted to revolutionize how millions of people work in the future.
The importance of understanding how ChatGPT really works
In the field of AI, posing a question to ChatGPT and evaluating its answer has become a standard measure of an AI system’s intelligence and biases. However, the process between those two steps has remained something of a “black box,” as detailed in this article from the Harvard Data Science Review, published by MIT Press: https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/8. As these systems continue to be integrated into daily life, from healthcare to education, understanding how they arrive at the information they provide, and how they learn, becomes increasingly crucial.
Scientists, such as Begus and his co-authors, Alan Zhou of Johns Hopkins University and T. Christina Zhao of the University of Washington, are working to unravel this “black box.” Begus utilized his background in linguistics to aid in this task.
How the human brain processes speech compared to AI
When we hear spoken words, the sound enters our ears and is transformed into electrical signals. These signals then travel through the brainstem and outer parts of the brain. In an electrode experiment, researchers were able to follow the path of these signals in response to 3,000 repetitions of a single sound and found that the brain waves for speech closely mirrored the actual sounds of language.
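To make the averaging step concrete, the sketch below shows how a stimulus-locked brain wave can be recovered from many repetitions of the same sound: activity unrelated to the stimulus cancels out when the epochs are averaged. The trial count, array shapes, and simulated signal are illustrative stand-ins, not data from the study.

```python
# Minimal sketch of epoch averaging, assuming plain NumPy arrays of EEG epochs.
import numpy as np

n_trials, n_samples = 3000, 512           # 3,000 repetitions of a single sound
rng = np.random.default_rng(0)

# Stand-in for recorded epochs: a weak stimulus-locked wave buried in noise.
t = np.linspace(0, 0.5, n_samples)        # 500 ms per epoch
true_response = 0.5 * np.sin(2 * np.pi * 8 * t) * np.exp(-5 * t)
epochs = true_response + rng.normal(scale=2.0, size=(n_trials, n_samples))

# Averaging across trials recovers the stimulus-locked brain wave.
evoked = epochs.mean(axis=0)
print(evoked.shape)                       # (512,)
```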
To further this research, the team transmitted the same recording of the sound “bah” through an unsupervised neural network, or AI system, that could interpret sound. Utilizing a technique developed in the Berkeley Speech and Computation Lab, they measured the coinciding waves and documented them as they occurred.
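The paper’s network architecture is not described here, so the sketch below uses a small, untrained convolutional audio encoder purely to illustrate the probing step: a waveform goes in, and an intermediate layer’s activations are read out as the model’s internal “wave.” The layer sizes, the forward hook, and the random input are all assumptions for illustration, not the study’s actual model.

```python
# Hedged sketch of reading out an audio model's internal signal (PyTorch assumed).
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=31, stride=2, padding=15), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=31, stride=2, padding=15), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=31, stride=2, padding=15),
)

activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Capture the output of an intermediate layer as the model's internal "wave".
encoder[2].register_forward_hook(save_activation("layer2"))

waveform = torch.randn(1, 1, 16000)   # stand-in for a 1-second "bah" recording
_ = encoder(waveform)
print(activations["layer2"].shape)    # torch.Size([1, 32, 4000])
```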
Previous research required additional steps to compare brain and machine waves. Studying the waves in their raw form will help researchers better understand how these systems learn and increasingly mirror human cognition, according to Begus.
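As a rough illustration of what a “raw” comparison can look like, the sketch below resamples two one-dimensional signals to a common length and correlates them directly, with no further transformation. The stand-in signals and the choice of Pearson correlation are assumptions; the comparison used in the paper may differ.

```python
# Sketch of a direct comparison between a brain wave and a model-derived wave.
import numpy as np
from scipy.signal import resample
from scipy.stats import pearsonr

brain_wave = np.sin(np.linspace(0, 4 * np.pi, 512))    # stand-in evoked response
model_wave = np.sin(np.linspace(0, 4 * np.pi, 4000))   # stand-in model signal

model_wave_rs = resample(model_wave, len(brain_wave))  # match lengths
r, p = pearsonr(brain_wave, model_wave_rs)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```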
“As a scientist, I’m particularly interested in the interpretability of these models,” Begus said. “They are incredibly powerful and widely used, but less effort has been devoted to understanding them.”
Developing a deeper understanding of AI
Begus believes that what happens between input and output does not have to remain a mystery. Understanding how these signals compare to human brain activity is an important benchmark in the race to build increasingly powerful systems. It can also improve our understanding of how errors and bias are incorporated into learning processes.
Begus and his colleagues are collaborating with other researchers who use brain imaging techniques to measure how these signals compare. They are also studying how other languages, such as Mandarin, are decoded differently in the brain and what that might indicate about knowledge.
Many models are trained on visual cues, such as colors or text, which have thousands of variations at the granular level. Language, on the other hand, offers a more tractable path to understanding, according to Begus: English, for example, has only a few dozen sounds.
“If you want to understand these models, you have to start with simple things. And speech is way easier to understand. I’m hopeful that speech is what will help us understand how these models are learning.”
In cognitive science, one of the primary goals is to build mathematical models that resemble humans as closely as possible. The newly discovered similarities in brain and AI waves are a benchmark for how close researchers are to meeting that goal.
Image Credits
In-Article Image Credit: AI waves closely resemble brain waves, via University of California, Berkeley (usage type: News Release Media)
Featured Image Credit: AI waves closely resemble brain waves, via University of California, Berkeley (usage type: News Release Media)