This article is from the WeChat official account: neural reality (ID: neureality). Author: Daphne Leprince-Ringuet. Translator: Benny Cui.

Using only a single turbocharged graphics card (GPU), a research team has successfully simulated part of a monkey’s brain. Such a simulation usually requires a powerful and expensive supercomputer; the scientists now claim that a desktop computer is enough.

The experiment was carried out by researchers at the University of Sussex, who simulated millions of neurons and billions of neural connections using nothing more than an ordinary computer fitted with a recent graphics processing unit (GPU).

Although graphics processors have long been used to accelerate AI models, this is the first time a model of this scale has been simulated on an ordinary gaming machine, the same kind of hardware many gamers have in their bedrooms. The researchers developed a new method that efficiently simulates a model of the macaque visual cortex containing millions of neurons and billions of synapses, something that previously could only be done on supercomputers.

(Image: Coen Pohl)

Most simulations of this kind rely on the vast memory that supercomputer systems provide, but the Sussex scientists developed a more efficient technique, “procedural connectivity”, which dramatically reduces the amount of data that must be stored during a simulation. The research was published in Nature Computational Science.

Brain simulations of this kind typically rely on spiking neural networks, a special class of AI system in which neurons communicate with one another through sequences of discrete pulses, mimicking the activity of a biological brain.
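To make this concrete, below is a minimal sketch of a single spiking neuron using the common leaky integrate-and-fire model. This is a generic illustration, not the Sussex team’s code, and every parameter value here is arbitrary.

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron: a generic illustration of
# how spiking networks represent activity as discrete pulses. Not the Sussex
# team's model; all parameter values are arbitrary.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks back toward rest and integrates input.
        v += dt / tau * (v_rest - v) + i_in
        if v >= v_threshold:          # crossing the threshold emits a spike
            spike_times.append(step * dt)
            v = v_reset               # the potential resets after each spike
    return spike_times

# A constant drive of 0.8 pushes the neuron over threshold periodically.
print(simulate_lif(np.full(200, 0.8)))
```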

To accurately predict how the pulses travelling through such a network affect its neurons, a simulator normally generates and stores, before the model runs, a record of which neurons are connected by synapses and how strong each synapse is. But because neurons only fire intermittently, keeping this enormous amount of data in memory is very inefficient.
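A rough back-of-the-envelope calculation shows why. The scale used below (4 million neurons with 6,000 synapses each, for 24 billion synapses in total) is an illustrative assumption consistent with “millions of neurons and billions of connections”, as is the 4 bytes per synapse:

```python
# Memory cost of storing the connectivity explicitly (illustrative figures).
neurons = 4_000_000          # "millions of neurons"
fan_out = 6_000              # synapses per neuron -> 24 billion in total
bytes_per_synapse = 4        # one float32 weight, ignoring index overhead

total_gb = neurons * fan_out * bytes_per_synapse / 1e9
print(f"{total_gb:.0f} GB")  # 96 GB, far more than a desktop's RAM
```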

“Procedural connectivity” instead generates the connection information only at the moment it is needed, without pausing the running program and without fetching anything from memory. The computer therefore does not need to spend any memory at all on storing the connectivity data.
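Here is a minimal sketch of the idea in Python, not the authors’ GPU implementation; the names and parameters are illustrative. Seeding a random generator with the presynaptic neuron’s id guarantees the same targets are regenerated every time that neuron fires, so the connectivity never has to be stored:

```python
import numpy as np

N_POST = 1000     # neurons in the target population (illustrative)
FAN_OUT = 100     # synapses per presynaptic neuron (illustrative)
WEIGHT = 0.05     # fixed synaptic weight, for simplicity

def deliver_spike(pre_id, post_voltages):
    # Seeding with the presynaptic neuron's id makes the draw deterministic:
    # the same targets are re-derived on every spike, so nothing is stored.
    rng = np.random.default_rng(seed=pre_id)
    targets = rng.choice(N_POST, size=FAN_OUT, replace=False)
    post_voltages[targets] += WEIGHT

voltages = np.zeros(N_POST)
deliver_spike(pre_id=42, post_voltages=voltages)
deliver_spike(pre_id=42, post_voltages=voltages)  # hits the same 100 targets
print(np.count_nonzero(voltages))                 # -> 100
```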

James Knight and Thomas Nowotny of the University of Sussex.

“Typically, these experiments require you to generate the connectivity information up front and keep it in memory, but our method tries to avoid that step,” said James Knight, co-author of the study and a computer science researcher at the University of Sussex. “With our approach, every time a neuron fires a pulse, the detailed information about its connections is regenerated.”

He continued: “We can use the GPU’s computing power to recalculate the connectivity without interrupting the program, performing the calculation at every pulse.”

With the computing power of a GPU, a spiking neural network can keep running while regenerating connectivity data each time a nerve pulse is triggered.

(Image: Josh Patterson)

The method builds on an idea proposed by the American researcher Eugene Izhikevich in 2006. Computers at the time were too slow for the approach to be widely applicable, but today’s GPUs are about 2,000 times faster than those of 15 years ago, and according to Knight they are now a perfect fit for spiking neural networks.

In fact, the results are not merely on par with supercomputers; they are faster. The model took only 8.4 minutes to simulate one second of biological resting-state activity, roughly 35% less time than earlier supercomputer simulations such as one run on an IBM Blue Gene/Q in 2018.
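For reference, if 8.4 minutes is 35% less than the supercomputer’s time, the implied baseline is about 12.9 minutes per simulated second. This back-calculation is ours, not a figure quoted in the article:

```python
# If 8.4 minutes is 35% less than the supercomputer's time T,
# then T = 8.4 / (1 - 0.35).
print(8.4 / (1 - 0.35))  # ~12.9 minutes per simulated second
```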

Knight explained that this is because the IBM machine is a network of a thousand compute nodes filling an entire room. “No matter how sophisticated the system is, there is always latency between the nodes, so the more you scale a model up, the slower it gets. Our model can be orders of magnitude faster.”
