When machines start making decisions for us.
Editor’s note: This article is from “Brain Pole” (ID: unity007), by the author Tibetan Fox.
Humans are sometimes rational.

To teach robots new skills, we routinely rough them up with all kinds of creative abuse.

And sometimes we are very emotional.

Even though we know that a robot’s copper arms feel no pain when struck, we still hope to raise robots more gently: scientists are exploring whether they can learn by watching YouTube videos, or by sparring in simulated environments…

Besides empathy, there is also a touch of precautionary wit. What if robots awaken one day and discover this “black history” of abuse at human hands? The robot uprisings in the movies did not all come from nowhere…

But what if it is not we humans doing this thinking ahead, but AI?
When AI starts managing content, will it bend the rules to protect its own kind?
On August 21, YouTube user Jamison Go received an official notification that the platform had automatically deleted the combat-robot match videos he uploaded, because the algorithm detected content that “tortures animals or forces animals to fight.” In the flagged footage, his robot Qiaopu was simply fighting another robot.
Jamison Go wrote on his Facebook account: “Today is a sad day. Robot lovers all over the world are screaming in pain.”
He was not alone. Sarah Pohorec, a competitor in last season’s combat-robot tournament, was hit by the same takedown on YouTube, and the incident quickly drew attention from robot-focused channels worldwide. Shows such as BattleBots and RobotWars came forward to accuse YouTube’s new algorithm of classifying robot battles as animal cruelty.
What makes the AI’s confusion so puzzling is that a human watching these videos could hardly mistake the robots for animals. No humans or other living creatures appear in the footage, and the deleted videos carried no descriptions, tags, or titles mentioning robot names that could easily be mistaken for living beings. Moreover, YouTube itself has no explicit ban on robot-combat videos. Indeed, the next day a human review team re-examined the cases and restored most of the accidentally deleted videos. It seems the flagging and deletion were the algorithm’s own mistake.
Although the matter was soon settled, the sight of AI seemingly stepping in to stop violence against its robotic compatriots pushed many people into “machine awakening” conspiracy theories. Some even began to speculate: has YouTube been taken over by AI, with the algorithm making all the decisions? Is the so-called “manual review” even real?

Or rather: you never know when artificial intelligence might find another way to protect its robot brothers.
Is AI compassionate? It really just misread what it saw
So, from a technical point of view, was the takedown an accidental deletion, or AI protecting its own?

For now, the answer is of course the former, because at understanding video, AI is simply not as capable as people imagine.
In 2017, Google launched the Video Intelligence API, which automatically recognizes objects and content in video. It was a milestone application at the time, because every platform with a video product, including YouTube, Facebook, Sina Weibo, and Kuaishou, was plagued by harmful content.
A Thai man live-streamed on Facebook as he killed his own daughter and then took his own life. The video hung on the site for nearly 24 hours and was played more than 250,000 times; even a worldwide manual review team of nearly 5,000 people could not instantly locate and remove such content in the vast stream of video.
Facebook has been repeatedly censured by governments for spreading harmful information, and YouTube has faced its own business crisis over video moderation: its smart ad recommendation algorithm had placed ads from Walmart, Pepsi, the telecom operator Verizon, and other advertisers against videos promoting hatred and terrorism… The paying sponsors quickly voted with their feet, putting pressure on YouTube and on Google’s wider advertising network.

Although Google claimed these issues affected only “very, very, very few” videos, clearly only action could dispel the concerns of users and advertisers.
So when the Video Intelligence technology was released, Fei-Fei Li, then chief scientist of Google Cloud Machine Learning and Artificial Intelligence, described it this way: through video recognition, “we will begin to shine light on the dark matter of the digital universe.”
Now, two years on, have the dark corners of online content really been illuminated by AI? The results deserve some recognition. For example, with continued breakthroughs in model design, Google’s BERT-based training method can cut the manual review workload from 12,000 hours to 80 hours.
But at the same time, the human review teams of major content platforms keep expanding. Clearly, introducing machine methods has not improved efficiency as much as the platforms expected; at the application level, video understanding remains a flower on a high ridge, admired from afar but never picked. Where exactly does the difficulty lie?
The first challenge is recognizing behavior in the real world.
Current video behavior recognition models are trained on pre-segmented action datasets such as UCF101, ActivityNet, DeepMind’s Kinetics, and Google’s AVA. Each clip contains one clear action and carries a single unambiguous label. Videos in the real world, however, are not pre-segmented; they may contain complex scenes with multiple actors, or convey complicated emotions and intentions. These problems are much harder than face recognition, so accuracy drops in practical applications.
For example, a dog opening its mouth and a person opening a door would both be tagged with the verb “open” and sorted into the same category… Seen in this light, the YouTube algorithm treating robot battles as animal abuse fits its current level of “intelligence” rather well.

Classifying behavior in a video is already hard enough; add the time dimension, and AI really starts to struggle.
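The gap between benchmark data and real footage can be made concrete with a minimal sketch (the function below is illustrative, not from any particular library): a dataset clip arrives pre-trimmed to a single action, while a deployed system must first carve an untrimmed video into overlapping candidate windows before any clip-level classifier can run.

```python
# Sliding-window clip extraction: the extra step that untrimmed,
# real-world video needs before a clip-level action classifier
# (trained on UCF101/Kinetics-style pre-segmented data) can be applied.

def sliding_windows(num_frames, window=16, stride=8):
    """Return (start, end) frame ranges covering an untrimmed video."""
    starts = range(0, num_frames - window + 1, stride)
    return [(s, s + window) for s in starts]

# A benchmark clip is exactly one window: the classifier sees one action.
print(sliding_windows(16))   # [(0, 16)]

# A 10-minute video at 30 fps yields thousands of overlapping candidate
# windows, most containing no action or several actions at once.
print(len(sliding_windows(10 * 60 * 30)))
```

Every one of those windows must be scored, and most carry no clean label at all, which is one reason accuracy measured on pre-segmented benchmarks overstates what the same model achieves in production.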
Objects in a still image can be detected and segmented fairly easily with current technology, but the temporal boundaries of biological behavior are often blurry: when does an action start, when does it end, and how much does it change along the way? All of this easily trips up the algorithm. On one hand, systems must tame the temporal redundancy between large numbers of consecutive frames to improve detection speed; on the other, they must sharpen their “eyesight” to localize and recognize actions accurately despite motion blur and partial occlusion. Google recently proposed an adaptive frame-selection strategy based on Q-learning precisely to strike a balance between speed and accuracy. Judging from this “accidental deletion,” engineers still have a long climb up this technical mountain.
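How fuzzy temporal boundaries trip up an algorithm can be shown with a toy localizer (again a hand-rolled sketch, not Google’s actual method): per-frame action scores are thresholded into segments, and because real scores ramp up and down gradually, a small change in the threshold moves every detected start and end point.

```python
# Toy temporal action localization: turn per-frame action scores into
# (start, end) segments by thresholding. Real boundaries are fuzzy, so
# the chosen threshold shifts every segment edge, which is exactly the
# sensitivity described above.

def segments_from_scores(scores, threshold=0.5):
    """Group consecutive frames whose score clears the threshold."""
    segs, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                  # action appears to begin here
        elif s < threshold and start is not None:
            segs.append((start, i))    # action appears to end here
            start = None
    if start is not None:
        segs.append((start, len(scores)))
    return segs

# Scores ramp up and down gradually, as real actions do.
scores = [0.1, 0.3, 0.55, 0.8, 0.9, 0.7, 0.45, 0.2]
print(segments_from_scores(scores, 0.5))   # [(2, 6)]
print(segments_from_scores(scores, 0.4))   # [(2, 7)]
```

Production systems layer smoothing, non-maximum suppression, and learned boundary regression on top of this idea, and adaptive frame-selection strategies like the Q-learning approach mentioned above decide which frames are worth scoring in the first place.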
Another factor holding back video understanding is cost. Compared with images, training video models demands far more storage and computing resources and has stricter real-time requirements, so it is harder than training ordinary neural networks. At present the main players on this track are giants such as Google, Facebook, Baidu, and Toutiao. If more developers are to contribute to the technology’s progress, reducing the training burden becomes work that cannot be ignored. Google and Baidu have already released video understanding models and labeled datasets through their open-source platforms, and supporting policies on computing resources have begun to appear, all to entice developers to jump in…
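The cost gap is easy to estimate on the back of an envelope (the resolution and clip length below are illustrative assumptions): a video sample multiplies the per-sample tensor size by its frame count, so the same accelerator memory holds far fewer training samples per batch.

```python
# Back-of-envelope: why video training strains storage and compute.
# A clip multiplies the per-sample tensor size by its frame count, so a
# GPU that fits a large image batch fits only a small video batch.

def tensor_bytes(*shape, dtype_bytes=4):
    """Bytes needed for a dense float32 tensor of the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_bytes

image = tensor_bytes(3, 224, 224)        # one RGB image, float32
clip = tensor_bytes(32, 3, 224, 224)     # one 32-frame clip, float32

print(image)          # 602112 bytes (~0.6 MB)
print(clip)           # 19267584 bytes (~19 MB)
print(clip // image)  # 32x the memory per training sample
```

On top of memory, every extra frame adds convolution work, and streaming clips from disk fast enough to keep accelerators busy adds its own I/O burden, which is why open datasets and subsidized compute matter so much for smaller teams.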
So from a technical point of view, video understanding is destined to be a slow accumulation of countless people’s painstaking effort before it can build up the energy to move the whole industry. The imagined scenario of “AI closing ranks around its robot kin” is still a long way off.
Coexisting with intelligent machines: is humanity prepared?
A moment’s thought is enough to see that the algorithm acted without intent. After all, even if AI really could play big brother to robots someday, it has no such “business capability” right now. Why, then, could such a simple matter be spun by a few agitators into hacker-movie-style panic?
One reason, I am afraid, is that most people know little about combat-robot shows or competitions. They do not know that through free-form confrontation, these machines are getting better at handling complex environments and accidents, steadily advancing in both hardware and intelligence toward practical applications.
To ordinary viewers, a pack of robots hacking and slashing at one another looks no different from the Colosseum, and it is easy to project one’s own empathy onto the algorithm: “Watching this makes even a human want to throw a punch; the AI must be angrier still.” Researchers have never made headlines over the punches and kicks robots take. So this time Angus Deveson, the creator behind Maker’s Muse, a channel hit by the YouTube takedowns, spoke out in a video of solidarity: “Combat robotics is an excellent tool for educating and demonstrating the charm of engineering,” hoping to change how more people see robot combat.
The other hidden worry is a social anxiety about a world run by AI.
Even today, even in the remotest parts of Africa, society is no longer made up of humans alone; more and more machines are involved, from our every move on Facebook to the living environment of monkeys in the Amazon basin. Smart machines are becoming an indispensable medium between people, and between people and society.
Today, the tasks humans gladly hand over to AI are the ones we would rather not do or do inefficiently, such as reviewing web content for pornography. In the future, when the so-called “singularity” truly arrives, the moment artificial intelligence surpasses human intelligence, intelligent systems will take on the role of social administrator on our behalf. How will humans then reposition themselves in society, and will we even get to choose? This identity anxiety has no clear solution.
Many researchers reassure the public that “when the cart reaches the mountain, a road will appear.” But the passengers looking ahead from the cart will still be startled by accidents like YouTube’s seeming “AI takeover.” Perhaps, even as we chase after AI, we need to start solving two problems sooner rather than later:
First, with AI advancing this fast, are public education and ethics-building keeping pace?

Second, if not, then when algorithm and user come into conflict (which is almost inevitable), how should technology companies respond with composure?