Beyond the technical arms race, there are still many important paths worth exploring.

Editor’s note: This article is from the WeChat public account “Tencent Institute” (ID: Cyberlawrc), author S Jun.

Why do we love and hate deepfake technology?

What exactly is deepfake? Even if you haven’t followed the buzzwords that keep appearing in the AI world, you will be familiar with “AI face swapping”: the spoofed Obama and Gal Gadot videos abroad, Zhu Yin’s face swapped onto Yang Mi’s in China, and the short-lived ZAO app. This mind-bending technique was proposed and open-sourced in 2017 by the Reddit user “deepfakes”, and it immediately blew up on the forum, spawning video-synthesis tools such as FakeApp and a stream of face-swapped fakes.

This forgery technique is built on generative adversarial networks (GANs): it replaces the face in an original video with another person’s face image, and the adversarial, game-style optimization at the heart of the GAN algorithm ultimately yields forged video with extremely high fidelity.
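A minimal sketch of that adversarial game, as a hypothetical numpy toy rather than an actual deepfake pipeline: the discriminator is scored on telling real images (label 1) from forgeries (label 0), while the generator is scored on how thoroughly it fools the discriminator. At the theoretical equilibrium the discriminator can only guess, outputting 0.5 for everything.

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss (binary cross-entropy): wants d_real -> 1, d_fake -> 0."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Generator loss (non-saturating form): wants the discriminator fooled, d_fake -> 1."""
    return -np.mean(np.log(d_fake))

# A strong discriminator: confident and correct -> low D loss, high G loss.
strong = (d_loss(np.array([0.95]), np.array([0.05])),
          g_loss(np.array([0.05])))

# At the GAN equilibrium the discriminator can only guess, outputting 0.5
# for every input; the losses then settle at fixed values and neither side
# can improve further.
half = np.array([0.5])
eq_d, eq_g = d_loss(half, half), g_loss(half)
```

At that guessing point the two losses settle at 2·log 2 and log 2 respectively, which is the formal sense in which neither network can completely defeat the other.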


On the one hand, deepfake has enormous application potential in the film and television industry. On the other hand, spoof and pornographic videos that prey on human nature have brought disputes over portrait rights, copyright, and ethics. What are the threats when deepfake is abused? As forgery flourishes, an anti-forgery camp has gradually emerged, and fighting AI with AI has become an “arms race.” Will we win it?

What does Deepfake abuse mean to us?

■ A recent study found 14,678 deepfake videos online, 96% of which are pornographic; most graft the faces of famous actresses onto the bodies of porn performers. (Deeptrace Lab) Actress Scarlett Johansson, one of the main targets, said: “This doesn’t affect me that much, because people know the person in those porn videos isn’t me… but it may be very different for people who could lose their jobs over it.” (Heart of the Machine)

For ordinary people, and for women without public profiles in particular, deepfake makes fabricating pornographic videos easy. Fake porn made for revenge or other motives can expose women to severe reputational risk while leaving them little means to defend themselves.

■ Technical innovation has also let the “fraud industry” keep changing its face. With deepfake-style synthetic portraits, synthetic speech, and even synthetic handwriting, scams become more covert and harder to detect and defend against.

In March of this year, criminals successfully imitated the voice of an executive at a firm’s German parent company, deceived a number of colleagues and partners, and defrauded 220,000 euros (about 1.73 million yuan) in a single day. (Deeptech) In June, a spy used AI to generate a non-existent profile photo and identity and, on the workplace social platform LinkedIn, deceived many contacts, including political experts and government insiders. (New Witness)

■ Beyond the security risks that have already erupted, deepfake’s potential effects extend to how the public acquires information and to social trust itself.

“If information consumers don’t know what to believe and cannot distinguish fact from fiction, they will either believe everything or believe nothing. And if they believe nothing, the result is long-term apathy, which is harmful to the United States.” (Foreign Policy Research Institute fellow Clint Watts / Xinzhiyuan)

Is fighting AI with AI a good solution?

■ As Professor Li Wei of the Chinese Society of Science and Technology puts it, the real problem with deepfake is that “the boundary between ‘real’ and ‘fake’ in the traditional sense will be broken.” If technology can be used to forge, can stronger technology be used to detect the forgeries? This idea of pitting AI against AI has become the focus of many institutions over the past two years.

Professor Siwei Lyu of the State University of New York at Albany and his students found that AI-generated fake faces rarely, or never, blink, because the models are trained on photos of faces with open eyes. DARPA, the US Department of Defense research agency, has developed the first “anti-face-swap” AI forensic detection tools. (New Witness) Hao Li’s team takes a different route, tracking each person’s unique facial expression patterns. These markers (micro-expressions), called “soft biometrics,” are still too subtle for AI to imitate. (Heart of the Machine)
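The blinking cue can be made concrete with the “eye aspect ratio” (EAR), a standard facial-landmark heuristic from the blink-detection literature; this is a sketch of that general idea, not necessarily the exact method Lyu’s group used. Six landmarks outline each eye, and the ratio of eye height to eye width collapses toward zero whenever the eye closes, so a fake face that never blinks shows a suspiciously flat EAR signal over time.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR over six eye landmarks p1..p6 ordered around the eye:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Higher when the eye is open,
    dropping sharply toward 0 during a blink."""
    p = np.asarray(eye, dtype=float)
    vert = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horiz = np.linalg.norm(p[0] - p[3])
    return vert / (2.0 * horiz)

def count_blinks(ear_series, threshold=0.2):
    """Count downward crossings of the EAR threshold (one crossing = one blink)."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks, below = blinks + 1, True
        elif ear >= threshold:
            below = False
    return blinks

# Toy landmark sets with hypothetical pixel coordinates.
open_eye   = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]
```

In practice the landmarks would come from a face-landmark detector run on each video frame; a subject who shows zero blinks over many seconds of footage is a red flag under this heuristic.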

■ However, both Lyu and Li believe such techniques may not stay useful for long. “Manually adding blinks in the post-processing of a fake video is not a huge challenge.” As detection improves, the quality of fake video will improve in step; developing these algorithms “at least helps prevent and delay the process of creating fake videos.” (Siwei Lyu / NetEase)

The very principle of a generative adversarial network is to make two neural networks learn from each other, so in the long run the two sides remain locked in confrontation, and neither can ever completely defeat the other. (ifanr)

■ Even the most effective detection technology today cannot catch every piece of forged content. Delip Rao, vice president of research at the AI Foundation, said: “The recently announced deepfake detection algorithm is reported to be 97% accurate. But given the scale of internet platforms, the remaining 3% is still damaging. Suppose Facebook has to process 350 million images a day; even a 3% error rate means an enormous number of misidentified images slipping through.” (Delip Rao / Heart of the Machine)
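Rao’s back-of-the-envelope figure is easy to verify with his own assumed numbers:

```python
daily_images = 350_000_000   # Rao's assumed daily image volume for Facebook
error_rate = 0.03            # the 3% a 97%-accurate detector gets wrong

misclassified_per_day = int(daily_images * error_rate)
# 10.5 million images mishandled per day -- the "remaining 3%" Rao warns about.
```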

■ Another problem is that the scale of “anti-forgery research” is wildly disproportionate to that of “forgery research.” In 2018, only 25 papers on recognizing synthetic images were published worldwide, against 902 papers on GANs, a ratio of roughly 1 to 36. (QbitAI) In response, giants such as Facebook and Google have begun to change tack, using prize competitions, public datasets, and similar measures in the hope of crowdsourcing a way to close the gap. In September, Facebook announced a partnership with several companies and universities to launch the Deepfake Detection Challenge. (cnBeta)

Can this scaled-up effort help detection technology leap ahead? We will have to wait and see.

Beyond pitting AI against AI, what other countermeasures are there?

■ When the consequences of a technology’s spread are unclear, releasing results cautiously has become the choice of some companies. For example, when OpenAI unveiled its unsupervised language model GPT-2 some time ago, it broke with the industry habit of open-sourcing everything: only a scaled-down version was released, and the dataset, training code, and full model weights were withheld, precisely to avoid “the technology being maliciously exploited.” (Naojiti)

■ Hwang believes the most workable solution is to balance automated detection tools, which can scan millions of videos, with manual review, which can concentrate on the hardest cases. Journalists, fact-checkers, and researchers, for example, can gather corroborating evidence about a video’s content, which is especially useful against carefully polished deepfakes. (Forwarding Network)

■ Virginia and California have both made legislative attempts aimed at deepfake technology. In May of this year, the second-review draft of the personality rights section of China’s Civil Code proposed that no organization or individual may infringe on others’ portrait rights by uglifying or defacing their images, or by forging them with information technology. “If officially adopted, this means that even without a profit motive or subjective malice, swapping someone’s face with AI without their consent may equally constitute infringement.” (Xue Jun, Vice Dean of Peking University Law School / Xinhuanet)

■ “In my opinion, the most important thing is that the public must be aware of how capable modern technology is at generating and editing video. That will let them think more critically about the video content they consume every day, especially when there is no proof of its source.” (Stanford University Visiting Assistant Professor Michael Zollhöfer / New Witness)

Editor’s Summary

As “deepfakes,” the creator of the technique, put it: any technology can be exploited by people with evil motives. Within months of its birth, deepfake had triggered challenges ranging from a flood of fake pornographic videos to more covert fraud and even crises of identification and social trust. If we want to embrace the marvelous side of this technology, the side that let Paul Walker be “resurrected” in Furious 7, we should take a more active part in efforts to prevent its abuse.

At present, relying on technical checks and balances, building good AI algorithms to detect fabricated content, remains the most feasible response. That path offers no 100% success rate, and it faces a protracted “arms race” as forgery techniques keep updating. The contest has also shifted: from scattered organizations large and small to giant companies using prize competitions, dataset building, and similar means to draw broader attention and participation.

But outside the technical game, there are still many important paths worth exploring. For example: how can human review be combined intelligently with automated detection so that modest effort yields outsized results? Should such sensitive technologies be released under limited open-source terms until their consequences are understood? And how can supporting regulation avoid hindering the healthy development of the technology?

Back to the original question: the emergence of deepfake will change how the public defines truth. The fight against technology abuse is therefore not a matter for a small circle of industry and regulators alone. Only when the issue gains wide attention will people build stronger immunity to fraud, consume information more critically, and give society its firmest foundation against the risk of manufactured “illusion.”