This article is from the WeChat public account Tencent Research Institute (ID: cyberlawrc). Author: S Jun.

What exactly is deepfake? Even if you don't follow the buzzwords that have recently flooded the AI world, you are no stranger to "AI face swapping": from the spoofed Obama and Gal Gadot videos abroad, to the domestic clip that put Athena Chu's face on Yang Mi, to the short-lived craze of the ZAO app. This mind-bending technique was proposed and open-sourced in 2017 by a Reddit user named "deepfakes," and it quickly went viral on the forum, spawning video synthesis tools such as FakeApp and a stream of fake videos.

This forgery technique is built on generative adversarial networks (GANs): it replaces the face in an original video with another person's. Thanks to the adversarial "game" optimization at the heart of the GAN algorithm, the resulting forged video can reach extremely high fidelity.
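The adversarial "game" can be illustrated with a deliberately tiny toy model (a sketch for intuition only, not any real deepfake pipeline): a one-parameter generator tries to match the mean of real data, while a logistic-regression discriminator tries to tell generated samples from real ones. All parameter choices below are illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_toy_gan(real_mean=4.0, steps=3000, lr=0.1, batch=16, seed=1):
    """1-D GAN sketch: the generator outputs mu + noise, the discriminator
    is D(x) = sigmoid(w*x + b). Both sides take gradient-ascent steps on
    the standard GAN objectives (non-saturating loss for the generator)."""
    rng = random.Random(seed)
    w, b = 0.1, 0.0   # discriminator parameters
    mu = 0.0          # generator parameter (mean of its output)
    for _ in range(steps):
        gw = gb = gmu = 0.0
        for _ in range(batch):
            real = real_mean + rng.gauss(0, 1)
            fake = mu + rng.gauss(0, 1)
            d_real = sigmoid(w * real + b)
            d_fake = sigmoid(w * fake + b)
            # Discriminator ascends log D(real) + log(1 - D(fake)).
            gw += (1 - d_real) * real - d_fake * fake
            gb += (1 - d_real) - d_fake
            # Generator ascends log D(fake): push fakes toward "real".
            gmu += (1 - d_fake) * w
        w += lr * gw / batch
        b += lr * gb / batch
        mu += lr * gmu / batch
    return mu
```

After training, the generator's mean has drifted toward the real data's mean, at which point the discriminator can no longer reliably separate the two distributions — the same dynamic, scaled up to deep networks and images, is what makes deepfake output so hard to distinguish from real footage.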

On the one hand, deepfake technology has enormous potential in the film and entertainment industry. On the other hand, spoofs and pornographic videos that prey on human weakness have brought disputes over portrait rights, copyright, and ethics. What threats does deepfake abuse pose? As forgery flourishes, an anti-forgery camp is gradually taking shape. Fighting AI with AI has become an "arms race." Can we win it?

I. What does Deepfake abuse mean to us?

A recent study found 14,678 deepfake videos online, 96% of which are pornographic; most transplant the faces of famous actresses onto the bodies of porn performers. (Machine Heart)

For ordinary people, and especially for women without public profiles, deepfake makes fabricating pornographic videos easy. Such videos, made for revenge or other purposes, expose women to severe reputational risk and leave them few means of defense.

Technological innovation is also transforming the "fraud industry." Deepfake-based synthetic portraits, synthetic speech, and even synthetic handwriting make fraud more covert and harder to detect and defend against.

In March of this year, criminals imitated the voice of the chief executive of a company's German parent firm, deceived several of his colleagues and partners, and defrauded them of 220,000 euros (about 1.73 million yuan) in a single day. (Deeptech) In June, spies used an AI-generated portrait of a non-existent person to build a fake profile on the professional networking platform LinkedIn, deceiving many contacts, including political experts and government insiders. (Xinzhiyuan)

Beyond these acute security risks, the potential effects of deepfake extend to how the public acquires information and to social trust itself.

"If information consumers don't know what to believe, if they can't distinguish fact from fiction, then they will either believe everything or believe nothing. If they believe nothing, that leads to long-term apathy, and that is harmful to the United States." (Clint Watts, researcher at the Foreign Policy Research Institute / Xinzhiyuan)

II. Is fighting AI with AI a good solution?

As Professor Li Wei of the China Science and Technology Law Society put it, the real problem with deepfake is that "the boundary between 'real' and 'fake' in the traditional sense will be broken." Since technology can be used to forge, can more powerful technology be used to detect the forgeries? This idea of pitting AI against AI has become the focus of many institutions over the past two years.

Siwei Lyu, a professor at the State University of New York, and his students found that AI-generated fake faces rarely or never blink, because the models are trained on photos of open eyes. DARPA, the US Department of Defense's research agency, has developed the first forensic tools targeting fake faces. (Xinzhiyuan) Hao Li's team takes a different approach, tracking each person's unique facial expressions. These markers (micro-expressions), known as "soft biometrics," are too subtle for AI to mimic — for now. (Machine Heart)
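Lyu's actual detector is a trained neural network, but the blink cue itself can be sketched with the classic eye-aspect-ratio (EAR) heuristic from facial-landmark analysis — a simplified stand-in, assuming some upstream detector (e.g. the common 68-point landmark scheme) already supplies six (x, y) points per eye per frame:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6 around one eye, ordered as in the
    common 68-point annotation (p1/p4 are the corners, p2, p3 the upper
    lid, p6, p5 the lower lid). EAR drops sharply when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    return vertical / (2.0 * dist(eye[0], eye[3]))

def blink_count(ear_series, closed_threshold=0.2):
    """Count dips of the per-frame EAR below a closed-eye threshold.
    A long video whose EAR never dips (i.e. the subject never blinks)
    is suspicious by Lyu's observation about AI-generated faces."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_threshold:
            closed = False
    return blinks
```

The threshold value and the landmark ordering here are conventional defaults, not figures from the research the article cites; a real system would learn these cues rather than hard-code them.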

But neither Lyu nor Li believes such techniques will stay useful for long. "Manually adding blinks in the post-processing of a fake video is not a huge challenge." As detection improves, the quality of fake video will improve in step. Developing these algorithms will "at least help prevent and delay the process of creating fake videos." (Siwei Lyu / NetEase) The very principle of generative adversarial networks is to let two neural networks learn from each other through a game; in the long run the two remain locked in confrontation, and neither can completely defeat the other. (ifanr)

Even today's most effective detection technologies struggle to catch every forgery. Delip Rao, vice president of research at the AI Foundation, said: "The recently announced deepfake detection algorithm is claimed to be 97% accurate. But given the scale of internet platforms, the remaining 3% is still damaging. Suppose Facebook has to process 350 million images every day; even a 3% error rate means a huge number of misidentified images." (Delip Rao / Machine Heart)
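A back-of-envelope check makes Rao's point concrete. The figures below are the ones cited in the article, not measured data:

```python
# At platform scale, a small relative error is a huge absolute number.
daily_images = 350_000_000  # images Facebook is said to process per day
error_rate = 0.03           # the 3% a "97% accurate" detector gets wrong

errors_per_day = round(daily_images * error_rate)
print(errors_per_day)  # 10500000 misclassified images every single day
```

That is over ten million wrong calls per day — some real images wrongly blocked, some fakes wrongly released — which is why a detector alone cannot be the whole answer.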

Another problem is that "anti-forgery research" and "forgery research" are wildly disproportionate in scale. "In 2018, across the whole world, there were only 25 papers on recognizing synthetic images. By contrast, there were 902 on GANs. That works out to about 1 to 36." (Quantity) In response, giants such as Facebook and Google have begun adjusting their approach, using prize contests, building datasets, and so on, hoping to rally a crowd to close the gap. In September, Facebook announced a partnership with several companies and universities to launch the Deepfake Detection Challenge. (cnBeta)

Can efforts at this scale help detection technology leap ahead? We will have to wait and see.

III. Beyond AI countermeasures, what other strategies are there?

Given the consequences of letting such technology proliferate, releasing research results cautiously has become the choice of some companies. For example, when OpenAI launched its unsupervised language model GPT-2 some time ago, it broke with industry practice and did not fully open-source it: only a reduced version was released, while the dataset, training code, and model weights were withheld, in order to prevent "this technology being maliciously exploited." (Brain Pole)

Hwang believes that because AI detection will always let some forgeries slip through the net, a possible solution lies in combining automated detection tools (which can scan millions of videos) with manual review (which can focus on the harder cases). Journalists, fact-checkers, and researchers, for example, can collect supporting evidence about a video's content — especially useful against carefully polished deepfake works. (Forwarding)

Virginia and California have made attempts at the legislative level to address deepfake technology. In China, the second review draft of the personality-rights section of the Civil Code, released in May of this year, proposed that no organization or individual may infringe on others' portrait rights by uglifying or defacing their image, or by forging it through information technology. "If it is officially passed, this means that even without a profit motive or subjective malice, AI face-swapping without the subject's consent may constitute infringement." (Xue Jun, Vice Dean of Peking University Law School / Xinhuanet)

"In my opinion, the most important thing is for the public to become aware of the capabilities of modern technology for video generation and editing. This will make them think more critically about the video content they consume every day, especially when there is no proof of origin." (Michael Zollhofer, Visiting Assistant Professor, Stanford University / Xinzhiyuan)

Editor's Summary

As "deepfakes," the creator of this technology, said: any technology can be exploited by those with evil motives. Soon after deepfake was born, it triggered challenges ranging from the proliferation of pornographic videos to more covert fraud, and even crises of identity verification and social trust. If we want to embrace this marvelous technology — the kind that allowed Paul Walker to be "resurrected" in Furious 7 — we should take a more active part in efforts to prevent its abuse.

At present, relying on checks and balances between technologies — developing good AI algorithms to detect fake content — remains the most feasible solution, even though this path cannot deliver a 100% success rate and faces a prolonged "arms race" as forgery techniques keep evolving. The competition has also shifted: from scattered organizations large and small, to giant companies using prize contests, dataset building, and the like to encourage broader attention and participation.

But beyond the technical game, many important paths are still worth exploring. For example: how can manual review be combined with intelligent detection so that a small effort yields outsized leverage? Should such sensitive technologies be open-sourced only with restrictions until they can be kept in check? How can supporting regulatory policy avoid hindering the positive development of the technology?

Back to the original question: the emergence of deepfake will affect how the public defines truth. The fight against technology abuse is therefore not just a matter for a small circle in industry and regulation. Only when the issue receives wider attention — when people build stronger immunity to fraud and consume information more critically — will society have the most solid foundation against the risk of "illusion."
