Original from quantamagazine.org, author: Kevin Hartnett. Chinese version first published in the WeChat public account Neural Reality (ID: neureality). Translation: amequli; editor: Little Sunflower. Title image: Stephan Schmitz.

Human vision is a great puzzle: a lifelike picture of the world appears before our eyes, yet the brain’s visual system receives only very limited information from the world itself. Much of what we “see” comes from our own imagination.

“A lot of the things you think you see with your own eyes actually come from your imagination,” said New York University mathematician Lai-Sang Young. “In fact, you didn’t see them.”

But the brain must be an expert at constructing this visual world, otherwise we would constantly stumble. Unfortunately, relying on anatomy alone to reveal how the brain creates visual images is as difficult as staring at a car engine and trying to deduce the laws of thermodynamics.

A new line of research shows that mathematics may be the key to this problem. Over the past few years, Lai-Sang Young has taken part in an unprecedented collaboration with her New York University colleagues, neuroscientist Robert Shapley and mathematician Logan Chariker. They are building a mathematical model that integrates the results of decades of biological experiments to explain how the brain reproduces the appearance of the world from limited visual information.

New York University mathematician Lai-Sang Young (left) and neuroscientist Robert Shapley (right)

Source: Courtesy of NYU Photo Bureau; Paul Lee, Center for Neural Science, NYU

“In my opinion, the role of a theorist is to grasp the whole from piecemeal knowledge,” Young said. “Experiments by themselves do not explain how a mechanism works.”

Young and her collaborators are building up their model gradually, incorporating basic elements of vision one at a time. They have successfully explained how neurons in the visual cortex interact to detect the edges of objects and changes in contrast. Currently, they are exploring how the brain senses the direction in which objects move.

This research is the first of its kind. Earlier attempts to model human vision made all sorts of wishful assumptions about the visual cortex. The model of Young, Shapley, and Chariker, by contrast, does not shy away from the harsh, unintuitive biological realities of the visual cortex, and tries to understand how vision forms on that basis.

“Their model is a big breakthrough in that it is really based on the true physiology of the brain. They want the model to be correct and biologically plausible,” commented Alessandra Angelucci, a neuroscientist at the University of Utah.

Layer Stacking

We actually know very little about vision.

The eye is like a lens: it accepts light from the outside and projects a proportionally scaled image of the field of view onto the retina at the back of the eye. The retina, in turn, connects to the visual cortex at the back of the brain.

However, the connection between the retina and the visual cortex is surprisingly sparse. For a region of the visual field about a quarter the size of the full moon, the brain provides only about 10 nerve cells to carry information from the retina to the visual cortex. These cells make up part of the lateral geniculate nucleus (LGN), the only pathway through which visual information travels from the outside world into the brain.

LGN cells are not only sparse but also limited in what they can do. When they detect a change between light and dark in the field of view, LGN cells send a pulse to the visual cortex, and that’s it. The retina is bombarded with information from the bright world, yet the brain must make its judgments from the weak signals of these few LGN cells. Reconstructing the visual world from so little information is like reconstructing a novel the length of “Moby-Dick” from a scattering of words.

Davidope

“You might think the brain captures everything we see intact,” Young pointed out. “But what actually captures the visual information is the retina, not the brain, and the retina passes only piecemeal information on to the visual cortex.”

This is where the visual cortex shows its talents. Although the visual cortex connects to the retina through only a small number of neurons, the cortex itself contains an enormous number of nerve cells. For every 10 LGN neurons carrying signals from the retina, there are about 4,000 neurons in the initial “input layer” of the visual cortex, and many more in its other regions. This disproportion suggests that the brain heavily processes the small amount of visual information it receives.

Shapley said: “The visual cortex seems to have a mind of its own.”

For researchers such as Young, Shapley, and Chariker, explaining this “mind of its own” is the challenge.

Vision Loop

The biology underlying vision is thought-provoking. Like a magic trick, it makes you wonder how such a powerful function can be achieved from such limited clues.

Before Young, Shapley, and Chariker, others had tried to explain vision with mathematical models, but they all presupposed conditions favorable to themselves, namely that the retina transmits more information to the visual cortex than it actually does. That makes the cortex’s response to a stimulus much easier to explain.

Shapley pointed out: “People are often not clearly aware of the biological assumptions implicit in a computational model.”

Mathematicians have long been good at modeling changing phenomena, from the motion of billiard balls to the evolution of spacetime. These are all “dynamical systems”: systems that evolve over time according to fixed rules. The interaction of firing neurons in the brain is also a dynamical system, but one whose effects are extremely subtle and hard to capture in a precise set of rules.

Each electrical pulse an LGN cell sends to the cortex is only about a tenth of a volt and lasts about 1 millisecond, yet it can trigger a cascade of interactions among neurons. The rules these interactions follow, Young pointed out, are “incomparably more complex” than those of the physical systems we are more familiar with.

Stephan Schmitz

Each neuron receives signals from hundreds of other neurons simultaneously. Some of these signals push it toward firing; others inhibit it. As a neuron receives pulses from these excitatory and inhibitory neurons, the voltage across its cell membrane fluctuates. Only when that voltage, the “membrane potential,” exceeds a certain threshold does the neuron fire, and there is no way to predict when that will happen.

“If you watch the membrane potential of a single neuron, you will see that it fluctuates wildly,” Young said. “There is no way to tell when it will fire.”
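The threshold behavior described above is often sketched with a textbook “leaky integrate-and-fire” neuron. The sketch below is a drastic simplification, not the authors’ published model, and every parameter value is invented for illustration:

```python
import random

def simulate_lif(n_steps=1000, dt=1.0, tau=20.0, threshold=1.0, seed=0):
    """Leaky integrate-and-fire neuron driven by random excitatory and
    inhibitory pulses; returns the times (in ms) at which it fires."""
    rng = random.Random(seed)
    v = 0.0                 # membrane potential, arbitrary units
    spike_times = []
    for step in range(n_steps):
        v -= (v / tau) * dt           # potential leaks back toward rest
        if rng.random() < 0.3:
            v += 0.25                 # random excitatory pulse pushes v up
        if rng.random() < 0.1:
            v -= 0.10                 # random inhibitory pulse pulls v down
        if v >= threshold:            # only a threshold crossing fires
            spike_times.append(step * dt)
            v = 0.0                   # reset after the spike
    return spike_times

spikes = simulate_lif()
print(f"{len(spikes)} spikes in 1000 ms")
```

Rerunning with different seeds keeps the overall firing rate similar, but the individual spike times shift unpredictably, which is the point of the quote above.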

The actual situation is even more bewildering. Remember those hundreds of neurons connected to a single neuron? Each of them receives signals from hundreds of other neurons in turn. It is these layered, interlocking feedback loops that form the visual cortex.

Shapley pointed out: “This structure has too many moving parts, which is the problem.”

Early models of the visual cortex often overlooked this feature. They assumed that information flows in a single direction: from the front of the eye to the retina, then to the cortex, and finally out as vision, as tidy as parts moving down an assembly line. These “feedforward” models are easier to build, but they ignore a plain implication of anatomy: the cortex’s “feedback” loops must matter.

James Round

“It’s very difficult to model a feedback loop, because the information keeps flowing back and causing further changes,” Young said. “Feedback loops are ubiquitous in the brain, but very few people incorporate them into their models.”

In their first paper on the subject, published in 2016, Young, Shapley, and Chariker began to take these feedback loops seriously. In their model, feedback produces something like a butterfly effect: small changes in the signal from the LGN are amplified through successive rounds of feedback, a process known as “recurrent excitation,” leading to large changes in the visual representation the model produces.
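A toy recurrent circuit can illustrate how feedback amplifies small input differences. This is a deliberately minimal sketch, not the published model; the two-population structure and all weights are invented for illustration:

```python
def run_network(input_a, input_b, w_exc=0.8, w_inh=0.6, steps=50):
    """Two populations, each exciting itself and inhibiting the other.
    Feedback turns a tiny difference in feedforward input into a large
    difference in activity (a toy winner-take-all circuit)."""
    relu = lambda x: max(x, 0.0)       # firing rates cannot go negative
    ra = rb = 0.0
    for _ in range(steps):
        # Each population sums its feedforward input, recurrent
        # self-excitation, and inhibition from its rival.
        ra, rb = (relu(input_a + w_exc * ra - w_inh * rb),
                  relu(input_b + w_exc * rb - w_inh * ra))
    return ra, rb

# A 5% difference in weak feedforward drive ends as a qualitative
# difference in activity: one population dominates, the other is silenced.
ra, rb = run_network(1.00, 0.95)
print(ra, rb)
```

Without the recurrent terms the two outputs would differ by the same 5% as the inputs; the feedback is what blows the small difference up.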

Young, Shapley, and Chariker also showed that, with feedback doing most of the work, their model can reproduce the orientation of an object’s edge, whether vertical, horizontal, or anywhere in between, from only small changes in the weak LGN input.

“That means a few neurons connected to other neurons are enough to identify every edge orientation in the visual field,” Angelucci said.

Admittedly, edge detection is only a small part of vision, and the 2016 paper was just a prelude. The next challenge is to incorporate more elements of vision into the model without breaking its account of edge detection.

Vision Tide

In the laboratory, researchers typically present primates with simple visual stimuli: black-and-white patterns varying in contrast and orientation. By implanting electrodes in the animals’ visual cortex, they can track the nerve impulses produced in response to these stimuli. A satisfactory model should reproduce the same impulses when presented with the same stimuli.

“We learn from the behavior of primates how they respond to certain images. From that information we reverse-engineer the working mechanism of the brain,” Young pointed out.

In 2018 the three researchers published a second paper, showing that, without any modification, the same model can also reproduce a repeating overall pattern of cortical spiking activity known as the gamma rhythm (large swarms of fireflies produce similar patterns when they flash together).

A third paper, now under review, explains how the visual cortex perceives changes in contrast. The explanation rests on an important mechanism: excitatory neurons reinforce one another’s activity, like the mood at a party warming as more people join. For the visual cortex, this mechanism is essential to building complete images from sparse input.

James Round

At present, Young, Shapley, and Chariker are working on adding directional sensitivity to the model, to explain how the visual cortex reproduces the direction in which objects move across the field of view. After that, they will try to understand how the visual cortex recognizes temporal patterns in visual stimuli: we can detect the flashing of a traffic light, for example, yet we do not notice the frame-by-frame flicker of a movie. That, too, is something the scientists want to explain.

Even so, their model is still very simple. It accounts for the activity of only one of the six layers of the visual cortex, the layer that sketches the rough outline of a visual impression; the other five layers, which carry out more sophisticated visual processing, lie beyond it. Nor does the model explain how the visual cortex distinguishes color, which travels a different and far more difficult neural pathway.

“There is still a long way to go, but their effort should not be underestimated,” Angelucci commented. “This is bound to be difficult and time-consuming work.”

Although their model is far from lifting the veil on vision completely, it is a step in the right direction: it is the first model to attempt to decipher the code of vision while respecting the biology.

Cornell University neuroscientist Jonathan Victor said: “In the past, models have tended to pay only lip service to biological principles. Showing that a model can be truly faithful to the biology is a welcome development.”
