The movie “Patriots Day”, adapted from real events, has a scene like this: after the Boston Marathon bombing, the police comb through hours of surveillance footage, looking for anyone behaving differently from the crowd.

In the footage, at the moment of the explosion, one person turns his head in the opposite direction from everyone else, and the police lock onto him as a suspect.

▲ Technology in the movies is always impressively advanced. Image from: Mission: Impossible 4

In the movies, the police only need to shout “Zoom in, zoom in!” at the operator, and the screen instantly shows the suspect’s face in crisp detail, along with his background, family situation, and current address…

Reality is far less cooperative. The camera that happened to record the suspect may have been installed years ago; shout “Zoom in, zoom in!” at that footage and all you get is a patch of evenly colored pixel blocks. Identifying a suspect from surveillance alone is still extremely difficult.

▲ Image from: Patriots Day

If the hardware can’t keep up, what about the software? Researchers have recently shown how AI might help here, recovering the basic features of a face without obvious distortion.

With AI, researchers can reconstruct blurry, low-resolution face images into sharper, higher-resolution versions that are closer to the real face. The work comes from an area of artificial intelligence research called face super-resolution, which focuses on reconstructing realistic faces from distorted or low-resolution images.
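To make the setting concrete, here is a minimal sketch (using Pillow) of the degradation the field works against: a face is downsampled by 8× to a 16×16 crop, and naive zooming only spreads those few pixels over a bigger canvas. The file names and sizes are illustrative, not taken from the paper.

```python
# Minimal sketch of the face super-resolution setting (assumed file names).
from PIL import Image

# A reasonably sharp 128x128 face crop.
hr = Image.open("face.jpg").convert("RGB").resize((128, 128), Image.BICUBIC)

# Simulate a low-quality surveillance crop: downsample 8x to 16x16 pixels.
lr = hr.resize((16, 16), Image.BICUBIC)

# "Zoom in, zoom in!" without a learned model: naive upsampling only spreads
# the same 256 pixels over a larger canvas, giving flat colored blocks.
naive_zoom = lr.resize((128, 128), Image.NEAREST)
naive_zoom.save("naive_zoom.png")

# A face super-resolution network is trained to map `lr` back toward `hr`,
# producing plausible facial detail instead of just interpolating.
```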

At a recent machine learning conference, researchers at the Korea Advanced Institute of Science and Technology (KAIST) published a paper titled “Progressive Face Super-Resolution via Attention to Facial Landmark.” In it, they propose a new face super-resolution method that generates face images at 8× the input resolution while preserving facial details.

▲ Comparison of the pixelated image, the restored image, and the real image

To train the AI, the researchers adopted a progressive training scheme: the network is divided into successive stages, each stage raising the output resolution, which keeps training stable. They also proposed a facial attention loss that restores facial attributes more faithfully by weighting pixel-wise differences with facial-landmark heatmap values. In addition, training uses a state-of-the-art face alignment network to extract the heatmaps used for face super-resolution, which reduces training time.
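A rough PyTorch sketch of that facial attention idea is below: the per-pixel difference between the super-resolved face and the ground truth is weighted by landmark heatmaps, so errors around the eyes, nose, and mouth cost more. The tensor names and the exact weighting are assumptions for illustration, not the paper’s code.

```python
import torch

def facial_attention_loss(sr, hr, heatmaps):
    """
    sr, hr:    (B, 3, H, W) super-resolved and ground-truth face images
    heatmaps:  (B, K, H, W) landmark heatmaps from a pretrained face
               alignment network (K landmarks), values roughly in [0, 1]
    """
    # Collapse the K landmark channels into one spatial attention map.
    attention = heatmaps.sum(dim=1, keepdim=True)   # (B, 1, H, W)

    # Pixel-wise L1 difference, amplified where the attention map is high.
    pixel_diff = torch.abs(sr - hr)                 # (B, 3, H, W)
    return (attention * pixel_diff).mean()

# In progressive training, each stage roughly doubles the output resolution
# (e.g. 16x16 -> 32x32 -> 64x64 -> 128x128), and a loss like this is added
# to the usual reconstruction terms so facial regions are weighted heavily.
```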

Experimental results show that the method outperforms state-of-the-art approaches in both qualitative and quantitative measurements as well as perceptual quality. With the help of AI, identifying a person from an initially pixelated image becomes much easier.
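For a sense of what a quantitative comparison like this involves, here is a minimal sketch using two standard image-quality metrics, PSNR and SSIM, from scikit-image. The file names are placeholders; the paper’s exact metrics and test set are not reproduced here.

```python
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = io.imread("ground_truth_face.png")   # real high-resolution face
sr = io.imread("restored_face.png")       # AI-restored face, same size

print("PSNR:", peak_signal_noise_ratio(hr, sr, data_range=255))
print("SSIM:", structural_similarity(hr, sr, channel_axis=-1, data_range=255))
```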

Of course, this is still AI, and it produces plenty of ridiculous results.

Twitter user @jonathanfly pixelated some everyday emoji into blocks exactly 16 × 16 pixels in size and let the AI take on the challenge. The restorations came out a bit “horrifying”: the cute pixel-style emoji were given realistic human noses and faces, which looks funny enough to circulate as a meme image in its own right.

When the input is a pixelated photo of a real person, though, the restored picture is still reasonably close to the original.

If the contrast of the pixelated image is adjusted, a slight error in face alignment can leave the restored face with a shrunken nose.

Feed it a pixelated pizza, and a slice of sausage turns into a pair of sexy red lips.

No matter how strange the image, the AI can grow human facial features on it.

After all, the AI reconstructs images based on what we have taught it, so it is not surprising that it produces these skewed faces. It is not perfect yet, but that is understandable.

The blog I Forced A Bot also tried scaling the pixelated image up from 16×16 to 128×128 and then back down to 16×16. With this preprocessing, the results were usually closer to a real face: because the image becomes more blurred, it leaves the AI more creative room in its restoration.
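A small sketch of that preprocessing trick, as I understand it from the description above (the exact procedure is an assumption): upsampling the 16×16 pixel-block image to 128×128 and back down with a smooth filter softens the hard pixel edges before the image is handed to the super-resolution model.

```python
from PIL import Image

pixelated = Image.open("meme_16x16.png").convert("RGB")   # hard pixel blocks

softened = (
    pixelated
    .resize((128, 128), Image.BICUBIC)   # blow up: edges get interpolated
    .resize((16, 16), Image.BICUBIC)     # shrink back: blocks become a blur
)

softened.save("meme_16x16_soft.png")     # feed this to the face SR model
```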

I Forced A Bot also found a detail not mentioned in the paper: some generated faces carry indistinct black marks, which the blogger jokingly calls “Harry Potter scars.”

By this point, people have used the AI to make all kinds of meme images, and it seems to have been thoroughly “played to death.”

Still, we have to admit that as long as you are not using it to restore memes, AI focused on reconstructing blurry images has very real value. If the image of a suspect captured in a case is too blurry, AI may well become the final link that reveals what the suspect looks like.

For now, however, the technology is not ready to be used in criminal investigations; we can only wait for it to be refined further before it can serve humanity.

The header image is from Pixel Art Maker.