For years, the brain has been thought of as a biological computer that processes information through traditional circuits, whereby data zips straight from one cell to another. While that model is still accurate, a new study led by Salk Professor Thomas Albright and Staff Scientist Sergei Gepshtein shows that there’s also a second, very different way the brain parses information: through the interactions of waves of neural activity. The findings, published in Science Advances on April 22, 2022, give researchers a fuller picture of how the brain handles incoming information.
“We now have a new understanding of how the computational machinery of the brain is working,” says Albright, the Conrad T. Prebys Chair in Vision Research and director of Salk’s Vision Center Laboratory. “The model helps explain how the brain’s underlying state can change, affecting people’s attention, focus, or ability to process information.”
Researchers have long known that waves of electrical activity exist in the brain, during both sleep and wakefulness. But prevailing theories of how the brain processes information—particularly sensory information, like the sight of a light or the sound of a bell—have revolved around information being detected by specialized brain cells and then shuttled from one neuron to the next like a relay.
This traditional model of the brain, however, couldn’t explain how a single sensory cell can react so differently to the same thing under different conditions. A cell, for instance, might become activated in response to a quick flash of light when an animal is particularly alert, but will remain inactive in response to the same light if the animal’s attention is focused on something else.
Gepshtein likens the new understanding to wave-particle duality in physics and chemistry—the idea that light and matter have properties of both particles and waves. In some situations, light behaves as if it is a particle (also known as a photon). In other situations, it behaves as if it is a wave. Particles are confined to a specific location, and waves are distributed across many locations. Both views of light are needed to explain its complex behavior.
“The traditional view of brain function describes brain activity as an interaction of neurons. Since every neuron is confined to a specific location, this view is akin to the description of light as a particle,” says Gepshtein, director of Salk’s Collaboratory for Adaptive Sensory Technologies. “We’ve found that in some situations, brain activity is better described as interaction of waves, which is similar to the description of light as a wave. Both views are needed for understanding the brain.”
Some sensory cell properties observed in the past were not easy to explain given the “particle” approach to the brain. In the new study, the team observed the activity of 139 neurons in an animal model to better understand how the cells coordinated their response to visual information. In collaboration with physicist Sergey Savel’ev of Loughborough University, they created a mathematical framework to interpret the activity of neurons and to predict new phenomena.
The best way to explain how the neurons were behaving, they discovered, was through the interaction of microscopic waves of activity rather than the interaction of individual neurons. Rather than a flash of light activating only specialized sensory cells, the researchers showed that it creates distributed patterns: waves of activity across many neighboring cells, with alternating peaks and troughs of activation—like ocean waves.
When these waves are generated simultaneously in different places in the brain, they inevitably run into one another. If two peaks of activity meet, they produce even higher activity, while if a trough of low activity meets a peak, it can cancel the peak out. This process is called wave interference.
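The superposition behind this idea can be sketched in a few lines of code. The snippet below is a toy illustration, not the study’s actual model: two hypothetical cosine waves of activity, offset in space, are summed to show where crests reinforce each other and where a crest meeting a trough cancels out.

```python
# Toy sketch of wave interference (illustrative only; not the study's model).
import numpy as np

x = np.linspace(0, 10, 1000)                    # positions along a patch of cortex (arbitrary units)
wave_a = np.cos(2 * np.pi * x / 2.0)            # activity wave spreading from one source
wave_b = np.cos(2 * np.pi * (x - 0.7) / 2.0)    # a second wave, offset in space

combined = wave_a + wave_b                      # superposition: net activity at each location

print("max combined activity:", combined.max())  # where peaks align (constructive interference)
print("min combined activity:", combined.min())  # where a peak meets a trough (destructive interference)
```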
“When you’re out in the world, there are many, many inputs and so all these different waves are generated,” says Albright. “The net response of the brain to the world around you has to do with how all these waves interact.”
To test their mathematical model of how neural waves occur in the brain, the team designed an accompanying visual experiment. Two people were asked to detect a thin, faint line (the “probe”) displayed on a screen and flanked by other light patterns. How well the people performed this task, the researchers found, depended on where the probe was. The ability to detect the probe was elevated at some locations and depressed at others, forming a spatial wave predicted by the model.
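As a rough illustration of that behavioral prediction, the toy sketch below assumes detection probability is a baseline level modulated by a sinusoidal interference pattern across probe positions. Every number and function here is invented for illustration and is not taken from the study.

```python
# Hypothetical sketch: detection of a faint probe rising and falling with its position,
# as if modulated by an interference pattern set up by the flanking patterns.
import numpy as np

positions = np.linspace(0, 8, 81)                        # probe locations between flankers (arbitrary units)
baseline = 0.6                                           # assumed baseline detection probability
modulation = 0.2 * np.cos(2 * np.pi * positions / 2.5)   # assumed spatial interference pattern
detection_prob = np.clip(baseline + modulation, 0, 1)    # keep probabilities in [0, 1]

best = positions[np.argmax(detection_prob)]
worst = positions[np.argmin(detection_prob)]
print(f"probe easiest to see near x={best:.1f}, hardest near x={worst:.1f}")
```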
“Your ability to see this probe at every location will depend on how neural waves superimpose at that location,” says Gepshtein, who is also a member of Salk’s Center for the Neurobiology of Vision. “And we’ve now proposed how the brain mediates that.”
The discovery of how neural waves interact reaches far beyond explaining this optical illusion. The researchers hypothesize that the same kinds of waves are being generated—and interacting with one another—in every part of the brain’s cortex, not just the part responsible for analyzing visual information. That means waves generated by the brain itself, prompted by subtle environmental cues or internal moods, can change the waves generated by sensory inputs.
This may explain how the brain’s response to something can shift from day to day, the researchers say.