This article comes from the WeChat Official Account Qubit (ID: QbitAI). Authors: Jin Lei, Xiao Xiao. Title image: the electronic-billboard wave.

Do you still remember the giant wave on SM Entertainment’s electronic billboard that went viral a while ago?

Making such special effects by hand can cost… well, after all, it is said that “every drop of water is money contributed by the fans.”

But now, researchers from DeepMind and Stanford have developed a graph network simulator, the GNS framework: the AI only needs to “look” at the fluid in a scene to simulate it.

Whether it is a fluid, a rigid solid, or a deformable material, GNS can simulate it vividly. The researchers also stated:

The GNS framework is by far the most accurate general-purpose learned physics simulator.

Moreover, this research was recently covered by the top journal Science.

This is reminiscent of Taichi, developed by Tsinghua Yao Class graduate Hu Yuanming, which not only greatly lowers the barrier to CG special effects but also produces very realistic results.

Hu Yuanming’s Taichi also plays a role in the DeepMind and Stanford work.

The researchers used Taichi to generate challenging 2D and 3D scenes as one of the baselines.

How good are the results? Science commented on social media: “Hollywood may invest in this simulator.”


This is the picture you have in mind

Through “experience”, we humans can quickly conjure up a dynamic picture of a scene as soon as it is mentioned.

So, does the picture the AI “imagines” match the one in your head?

First, the 3D effect of water falling into a glass container.

The physical effects are exactly the same as we imagined.

The baseline on the left is SPH (smoothed particle hydrodynamics), a particle-based fluid simulation method proposed in 1992.

On the right is the result the AI predicts by “looking”: the GNS method proposed by the researchers.

Look at the detailed differences between the two in slow motion.

It is not hard to see that details such as splashes are finer-grained in the GNS result and closer to what we would expect.

Of course, GNS can handle not only liquids but also materials in other states.

For example, granular sand.

There are also sticky objects.

The baseline in the two clips above is MPM (material point method), proposed in 1995 and suited to interacting deformable materials.

Similarly, in details such as the particles scattered along the wall of the glass container, GNS’s predictions are closer to what happens in the real physical world.

So, how is such a realistic effect achieved?

Simulating fluids with a graph network simulator


Traditional methods for computing special effects

Previously, simulating real objects required heavy computation. The MPM mentioned above is one such method.

The material point method (MPM) discretizes a piece of material into many particles, computes spatial derivatives, and solves the momentum equation.
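For reference, the momentum equation that MPM discretizes over its particles and background grid takes the standard continuum-mechanics form (textbook notation, not a formula quoted from the paper):

```latex
% Conservation of momentum for a continuum with density rho, velocity v,
% Cauchy stress sigma, and body force (e.g. gravity) g:
\rho \frac{D\mathbf{v}}{Dt} = \nabla \cdot \boldsymbol{\sigma} + \rho \mathbf{g}
```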

MLS-MPM, an improvement by Hu Yuanming and others, greatly speeds up simulation and is roughly twice as fast as the original MPM.

There is also a method called PBD (position-based dynamics), which can simulate the dynamics of a block floating on water.

Besides these two, there is the classic SPH method, used to compute 3D water effects.

Compared with these heavily computed simulations, could a trained neural network simulate how objects behave in a real scene and produce effects very similar to what these methods generate?

Netizens were struck by the idea. After all, when the human brain “simulates” how fluids or objects will move, it does not grind through massive mechanical calculations; it uses a neural network.

With this in mind, DeepMind trains GNS on data generated by these methods, so that it can reproduce the behavior of objects in real scenes.


A graph network predicts how objects behave

The basic principle of GNS is to discretize an object X of fixed volume into many particles and pass it through a simulator sθ that transforms it into the state it takes after being disturbed.

As the figure below shows, the simulator sθ feeds the current state of the fluid into a dynamics model dθ and uses the predicted next frame to update the object’s deformation.

As long as the simulator updates quickly enough, what we see is the object being hit and deformed inside the glass box.

△ The image on the right shows the effect generated by the simulator
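Conceptually, the rollout described above is just a loop that keeps feeding the current particle state back into the learned dynamics model. A minimal numpy sketch of that idea (the function names and the gravity-only “model” are placeholders, not the paper’s actual API):

```python
import numpy as np

def integrate(positions, velocities, accelerations, dt=0.0025):
    """Semi-implicit Euler update: one plausible choice of integrator."""
    velocities = velocities + dt * accelerations
    positions = positions + dt * velocities
    return positions, velocities

def rollout(positions, velocities, dynamics_model, num_steps):
    """s_theta as a loop: repeatedly ask the learned model d_theta for
    per-particle accelerations and integrate them into the next frame."""
    frames = [positions]
    for _ in range(num_steps):
        accelerations = dynamics_model(positions, velocities)
        positions, velocities = integrate(positions, velocities, accelerations)
        frames.append(positions)
    return frames

# Toy usage: a "model" that only applies gravity to 100 particles.
if __name__ == "__main__":
    pos = np.random.rand(100, 3)
    vel = np.zeros((100, 3))
    gravity_only = lambda p, v: np.tile([0.0, 0.0, -9.8], (p.shape[0], 1))
    frames = rollout(pos, vel, gravity_only, num_steps=50)
```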

The key question is: how is the dynamics model dθ implemented?

The team adopted a “three-step” approach, dividing the model into three parts: encoder, processor, and decoder.

When an object passes through the encoder, the particles originally scattered throughout the object are organized into a latent graph.
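One plausible way to structure the scattered particles into a graph is to connect every pair of particles that lie within some connectivity radius. The sketch below uses scipy’s k-d tree to do this; it illustrates the idea and is not the authors’ code:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_graph(positions, radius):
    """Connect every pair of particles closer than `radius` and attach a
    simple edge feature (relative displacement plus distance)."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(radius, output_type="ndarray")  # (E, 2), i < j
    senders = np.concatenate([pairs[:, 0], pairs[:, 1]])     # make edges directed
    receivers = np.concatenate([pairs[:, 1], pairs[:, 0]])
    rel = positions[receivers] - positions[senders]
    dist = np.linalg.norm(rel, axis=1, keepdims=True)
    edge_features = np.concatenate([rel, dist], axis=1)
    return senders, receivers, edge_features
```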

In the processor, the relationships between particles in the graph keep evolving: the message passing learned by the graph network is iterated over the graph M times.

Finally, the decoder extracts the dynamics information Y from the graph produced by the last iteration.

Feeding Y back into the object X updates its particles frame by frame, and the resulting sequence is the simulated liquid.
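Put together, the processor and decoder boil down to M rounds of message passing over that graph followed by a per-particle readout. A toy numpy version, with plain linear layers and tanh standing in for the learned MLPs (illustrative only, not the paper’s implementation):

```python
import numpy as np

def message_passing_step(node_feats, senders, receivers, W_msg, W_upd):
    """One GN-style step: build messages on edges, sum them at the
    receiving nodes, then update every node feature."""
    messages = np.tanh(node_feats[senders] @ W_msg)          # (E, D)
    aggregated = np.zeros_like(node_feats)                    # (N, D)
    np.add.at(aggregated, receivers, messages)                # scatter-add per node
    return np.tanh(np.concatenate([node_feats, aggregated], axis=1) @ W_upd)

def process_and_decode(node_feats, senders, receivers, M, W_msg, W_upd, W_dec):
    """Processor: M message-passing iterations over the graph.
    Decoder: a linear readout mapping each final node embedding to the
    predicted per-particle dynamics Y (e.g. an acceleration vector)."""
    h = node_feats
    for _ in range(M):
        h = message_passing_step(h, senders, receivers, W_msg, W_upd)
    return h @ W_dec   # shape (num_particles, 3)
```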

No matter what shape the object takes, GNS’s predictions stay very close to the ground truth.


The key innovations

Compared with earlier neural networks that simulate liquids, GNS’s biggest improvement is that it turns the object type into a feature of the input vector.

Different object types (sand, water, goop, and so on) only need to be distinguished by different features for the model to capture their behavior.
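In code, making the object type “a feature of the input vector” can be as simple as appending a one-hot material vector to each particle’s node features. The labels below are hypothetical:

```python
import numpy as np

MATERIAL_IDS = {"water": 0, "sand": 1, "goop": 2}   # hypothetical material labels

def add_material_feature(node_feats, material_names):
    """Append a one-hot material vector to every particle's features, so a
    single model can tell sand, water, and goop apart from the input alone."""
    ids = np.array([MATERIAL_IDS[m] for m in material_names])
    one_hot = np.eye(len(MATERIAL_IDS))[ids]
    return np.concatenate([node_feats, one_hot], axis=1)
```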

In contrast, an earlier neural-network liquid simulator, DLP, is far more complicated than GNS.

To simulate different kinds of fluids, DLP has to keep storing the relative displacements between particles, and even modify the model for each fluid type, which makes the computation too expensive.

On top of that, GNS’s simulation quality is even better than that of DLP-based simulators.


More impressive detail

The following compares GNS with CConv, an enhanced simulator based on DLP principles.

GNS also performs very well against CConv across different object types. The figure below shows what each produces when simulating a block floating on water.

The block generated by GNS matches the ground truth and floats freely on the water; the block generated by CConv, by contrast, is simply deformed by the impact of the water (battered by life, as it were).

Measured by mean squared error (MSE) against the ground truth, GNS outperforms CConv across the various material types.
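The metric behind that comparison is simply the mean squared error between predicted and ground-truth particle positions, averaged over particles and frames; a minimal version:

```python
import numpy as np

def rollout_mse(pred_frames, true_frames):
    """Mean squared error between two particle trajectories, each of shape
    (num_frames, num_particles, 3)."""
    pred = np.asarray(pred_frames)
    true = np.asarray(true_frames)
    return float(np.mean((pred - true) ** 2))
```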

In addition, the figure below shows GNS’s mean squared error under the one-step and rollout evaluation settings, across design choices such as the number of message-passing iterations, whether GN parameters are shared, the connectivity radius, the amount of training noise, and relative versus absolute encoders.

As can be seen, the rollout setting (the lower half) performs much better than the one-step setting across the board.

Not only that, the red markers indicate the settings finally adopted in the GNS model; as can be seen, these choices keep the mean squared error at a minimum.
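For context, the ablation knobs listed above could be collected into a configuration like the one below; the values are placeholders for illustration, not the settings reported in the paper:

```python
# Illustrative ablation knobs; every value here is a placeholder.
gns_config = {
    "num_message_passing_steps": 10,   # M iterations in the processor
    "share_gn_parameters": False,      # reuse one GN block vs. a stack of blocks
    "connectivity_radius": 0.015,      # neighbourhood radius when building the graph
    "training_noise_std": 3e-4,        # noise added to inputs during training
    "encoder": "relative",             # relative vs. absolute position encoding
}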


Four co-first authors on one paper

This research is mainly a collaboration between DeepMind and Stanford University.

The paper has four co-first authors.

△Alvaro Sanchez-Gonzalez

Alvaro Sanchez-Gonzalez majored in physics as an undergraduate and in computer science for his master’s degree. Based on this background,