This article comes from the public account Quanmeipai (ID: quanmeipai), authored by Tencent Media, and is published by Ai Faner with authorization.

Unlocking doors with a face scan, paying for meals with your face… Today, as face recognition technology matures, these once-imagined scenarios are drawing infinitely close to reality. According to a forecast by the Forward-looking Industry Research Institute, China’s face recognition market will grow rapidly over the next five years and find applications across many industries, with the market expected to exceed 5 billion yuan by 2021.

Technology Blind Spots and Privacy Boundaries of Face Recognition: When Your Face Becomes Nourishment for Big Data, Is Your Face Okay?

▲ At the 2018 Beijing Anbo Fair, visitors walk past a screen displaying facial recognition software

However, as face recognition races ahead, the privacy problems involved in developing the technology are increasingly exposed, and the flaws in its application are becoming more prominent.

Imagine that while you can do everything with a simple face scan, your facial features may forever become part of a networked dataset, and may even be called up by different organizations. Face recognition is not only about convenience, efficiency, and a futuristic feel; at this stage, it should be bound more closely to ethics, oversight, and standardization. This issue of Quanmeipai (ID: quanmeipai) brings an exclusive compilation. Let us set aside our fantasies and fears about this technology, and examine and reflect on it from the most realistic standpoint.

How to delineate privacy boundaries when research data is stored permanently

When you walk into a coffee shop with 20 other people in it, at least 22 cameras surround you: one in each person’s cell phone, plus those hanging high in the corners. What you say may be overheard or posted online, and you may even appear in the background of another customer’s selfie or video call. But even the most privacy-conscious people will not refuse to enter the coffee shop, because people accept the risks inherent in entering public places.

It is this “reasonable” expectation of privacy that makes it easy for researchers to collect face recognition data in public. But when information collected in public places becomes a permanent, open dataset, how to draw a “reasonable” privacy boundary becomes a gray area that academic ethics has not yet worked out.

Unknowingly becoming a research subject in a dataset

Scholars at Duke University, Stanford University, and the University of Colorado Colorado Springs have used campus surveillance footage as material for their research, and the practice has drawn strong opposition. Although people are psychologically prepared to be overheard in a coffee shop, they do not expect to suddenly become research subjects; moreover, once entered, they remain part of the dataset forever.


The universities’ institutional review boards (IRBs) approved all three research projects, which used student data to improve machine learning algorithms. Duke University researcher Carlo Tomasi said in a statement to The Duke Chronicle that he “sincerely believed” he had followed the committee’s guidelines.

For the study, he and his colleagues posted notices at all entrances to the public area, telling people they were being recorded and that they could leave contact information if they wanted their data deleted. Tomasi also told the Chronicle that no one contacted the research team to ask for deletion.

Limited oversight and the consequences of losing control

At the University of Colorado Colorado Springs, the lead researcher said that although the team tested the technology with student data, it never collected personally identifiable information. In separate statements, the universities reaffirmed that their IRBs had approved all of the research and emphasized their commitment to student privacy.

The problem is that the scope of university ethics committees’ oversight is limited. They are mainly concerned with how research is carried out, not with where the research ends up or what impact it will have.

As Michelle Meyer, a bioethicist at Geisinger who chairs its IRB Leadership Committee, explained, when reviewing observational studies conducted in public places, the IRB’s privacy concern is whether subjects are individually identified and whether that identification exposes them to substantial or physical harm. “In theory, if you are building a nuclear bomb and the work involves surveying or interviewing human subjects, the risk the IRB considers is the risk to the people directly involved in the project, not the potential risk of a nuclear explosion.”

It is worth noting that in the information age, most academic research relies on the internet, and information on the internet persists forever. Opening datasets to other researchers increases the associated risk, but the IRB has little jurisdiction here, because, fundamentally, data sharing is not the same thing as the research itself; it sits in “a strange regulatory gray zone,” Meyer explained.


If the IRB is indifferent to the consequences of a study, then other researchers not bound by IRB standards can download the dataset and use it however they wish. The people being studied remain completely unaware that they have become research subjects, which may lead to all kinds of negative consequences.

These consequences may go far beyond what the researchers imagined. Adam Harvey, an anti-surveillance expert based in Germany, has found more than 100 machine learning projects worldwide that reference the Duke dataset. He created a map to track the dataset’s global spread: like an airline route map, long blue lines extend from Duke in every direction, pointing to universities, startups, and research institutions around the world, including China’s SenseTime and Megvii.

Every time a new project accesses the dataset, the potential impact and scope of harm changes. The portability of data and the speed of the internet have greatly expanded the possible reach of a single research project, extending the risk beyond what any university can bear.

An attempted solution: establishing an academic review system

Duke University ultimately decided to delete the dataset associated with the study, and Stanford University removed the dataset its researchers had created from footage of a San Francisco cafe.

Casey Fiesler, an assistant professor of information science at the University of Colorado Boulder, has written about the ethics of using public data in research. Fiesler proposed a system for reviewing access to datasets, similar to the way copyright licenses are reviewed; its terms of use would focus on how the requester plans to use the data.


“It’s a good idea to set up gatekeepers for these datasets,” she said. “A requester can access a dataset only when the purpose of use is clear.” Similar rules are already applied in open-source software and in Creative Commons standard copyright licenses.

Creative Commons is a license-based system under which requesters may, for example, be permitted to use an acquired work only for non-commercial purposes, and bear liability if they conceal or misrepresent their intent. These standards may not map exactly onto the academic environment, but they are at least useful for cutting off downstream harm. “This is not about making the rules cumbersome; it offers a way to think through the consequences before you decide what to do,” Fiesler said.

Racial bias and flawed law enforcement: technical blind spots exposed in application

When face recognition technology leaves the lab and enters real life, it takes on more meaning: your face is no longer just a face with biological attributes. At a football game, your face is the money you use to buy food at the stadium. In a shopping mall, it is a ledger that tells the salesperson about your past purchases and shopping preferences. At a protest, your face can reveal your arrest history. Even in the morgue, a face can help officials identify remains.

As the meanings a face carries continue to multiply, the consequences of technical errors grow more severe. In current applications, some of face recognition technology’s drawbacks have already been exposed.

Disparities in recognition accuracy and aggravated racial discrimination

Research by MIT researcher Joy Buolamwini, begun in 2016, showed that face recognition technology is more accurate for men with lighter skin than for men with darker skin, and performs worst of all for women with darker skin.

The American Civil Liberties Union (ACLU) found a similar problem. When the ACLU matched photos of members of Congress against a mugshot database, Amazon’s Rekognition software falsely matched Black members at a higher rate than white members, even though Black members make up a smaller share of Congress. Those misidentified included Elijah Cummings of Baltimore, chairman of the House Oversight and Reform Committee.


Differences in recognition accuracy across skin tones may deepen charges of racial discrimination. Both Microsoft and Amazon claim that since the release of the MIT and ACLU reports, they have optimized their technology to narrow the accuracy gap between races. But more accurate identification of faces of color is only part of the improvement the technology needs, because even a perfectly accurate technology can be used to support law enforcement measures that harm people of color.

Improving the technology itself is not the most important issue; how the technology is used deserves more attention. The solution Microsoft and Amazon propose corrects facial recognition’s problems only after the technology has been deployed, which is merely a remedy after the fact.

Face recognition in law enforcement: feasibility in doubt

In early May, The Washington Post reported that police were using facial recognition software to arrest suspects. An eyewitness described a suspect’s appearance to the police; the police then fed the resulting sketch to Amazon’s Rekognition and eventually arrested someone. The incident shocked experts at the congressional hearings: a sketch submitted to a database could serve as sufficient grounds for arresting a suspect.


In response, Amazon Web Services CEO Andy Jassy said Amazon has never received a complaint about police misuse of the technology. Just this May, Amazon shareholders voted down a proposal to ban sales of Rekognition to police. A representative of Amazon said in a statement, “Amazon has never received any public complaints, and local agencies have had no problems using Rekognition.”

Legislators & manufacturers: keeping “dangerous” face recognition in a cage

To balance the benefits and risks of face recognition technology, legislators in Washington State, Massachusetts, Oakland, and the US Congress have put forward a series of regulatory proposals. Republicans and Democrats on the House Oversight and Reform Committee held hours of hearings and said the two parties are willing to work together to regulate the technology.


Microsoft and Amazon, the makers of the Face API and Rekognition software respectively, also support federal regulation. In June of this year, Axon, the largest body camera manufacturer in the United States, accepted its ethics board’s recommendation not to equip its cameras with facial recognition.

Last year, Microsoft President Brad Smith called on governments to “legislate to regulate this technology.” In June, Amazon Web Services CEO Andy Jassy likened the technology to a knife. What the executive from the world’s most powerful facial recognition company wanted to say is: this thing is dangerous.

But in calling for regulation, Microsoft and Amazon have pulled a clever trick: they avoid debating whether facial recognition should be widely used at all, and discuss only how the technology should be applied.

In a statement published in The Atlantic, Amazon said it is working with researchers, legislators, and its customers to “understand how best to balance the benefits and potential risks of facial recognition,” and pointed out that Rekognition has many uses, including combating human trafficking and finding missing persons. Microsoft reiterated Smith’s statement that it supports facial recognition regulation, including rules against abuse and requirements for the consent of those being identified.

But some privacy experts think these companies have ulterior motives. Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, accuses Microsoft and Amazon of trying to “contain strong regulation.” He believes the companies push for federal-level regulation because national laws often represent a bottom line and are less likely than local laws to limit how private companies use the technology.

For now, the problems of academic ethics and technical blind spots are only gradually being addressed, while face recognition’s headlong rush shows no sign of slowing. Before the technology is widely deployed, this may be the last stretch of time in which our faces are still our own.

One day, our faces will no longer belong to us, but until then, our understanding of this technology determines whether we end up beneficiaries or victims. When technology companies keep narrowing the scope of the debate and turn public governance into a terms-of-service agreement, what we can do is refuse to simply say “I agree” without paying attention to the terms the other side has set.