“I haven’t come across a book this worth reading in a long while.”

The Translation Bureau is a translation and compilation team covering technology, business, the workplace, life, and other fields, with a focus on new technologies, new ideas, and new trends from abroad.

Editor’s note: The impact of AI on humanity’s future is a hot topic, and debates around it are becoming increasingly tribal. Stuart Russell, co-author of the best-selling AI textbook “Artificial Intelligence: A Modern Approach,” raised this question in the final chapter of that book but gave only a cursory answer. In his new book, “Human Compatible,” he takes the question up again, this time in earnest. This article is Ian Sample’s review of the book, originally published in the Guardian under the title: Human Compatible by Stuart Russell review – AI and our future

New Book

Making machines smarter than us may be the most important event in human history, and possibly the last.

There is a question scientists ought to ask more often: What if we succeed? In other words, how will the world change if we achieve our goals? Researchers tucked away in offices and labs can look ahead and imagine the best prospects for their work. But unintended consequences and malicious misuse tend to be an afterthought, and society is left to clean up the mess.

Today, that mess is everywhere: global warming, air pollution, plastic adrift in the oceans, nuclear waste, and babies whose DNA has been clumsily rewritten. All of it is the product of ingenious technology – technology solves old problems by creating new ones. In the inevitable race to be first, the downsides are either under-studied, ignored outright, or concealed.

In 1995, Stuart Russell wrote a book about AI. Co-authored with Peter Norvig (who worked at NASA and joined Google in 2001), “Artificial Intelligence: A Modern Approach” became one of the most widely used course texts in the world. In the final pages of the last chapter, the authors themselves raised the question: What if we succeed? Their answer, however, was hardly a serious one: “These trends do not seem too negative.” A lot has happened since then, starting with Google and Facebook.

In “Human Compatible,” Russell returns to the question, and this time he does not flinch. The result may well be the most important book on AI this year. Perhaps, as Richard Brautigan’s poem has it, life will be beautiful when all of us are watched over by machines of loving grace. But Russell, a professor at the University of California, Berkeley, sees a darker possibility. Making machines that surpass our intelligence would be the biggest event in human history. It may also, he warns, be the last. In the book, he makes a compelling case that how we choose to control AI is “possibly the most important question facing humanity.”

Russell’s timing is good. Thousands of the world’s brightest minds are now working on AI. Most of it is “narrow” AI with a single skill: it processes speech, or translates languages, or recognizes faces, or diagnoses disease, or plays Go or StarCraft. All of this is still far from the field’s ultimate goal – general AI that rivals or even surpasses the human brain.

This is no fanciful pursuit. From the outset, DeepMind, the AI arm of Google’s parent company Alphabet, set out to “solve intelligence” and then use it to solve everything else. In July, Microsoft signed a $1 billion deal with the US company OpenAI to develop AI that mimics the human brain. It is a high-stakes game. As Vladimir Putin put it: whoever leads in AI will “become the ruler of the world.”

Russell does not claim we are anywhere near that goal. In one chapter, he lays out the enormous problems computer engineers face in building human-level AI. Machines must learn to turn words into consistent, reliable knowledge; they must learn to discover new actions and string them together in the right order (boil the water, pick up the cup, put the tea bag in the cup). Like us, they must manage their cognitive resources so they can make good decisions quickly. And these are not the only obstacles; they alone convey how arduous the task ahead is. Russell suspects the work will keep researchers busy for another 80 years, though he stresses that the timing is unpredictable.

Even if the apocalypse never arrives, the road ahead is fraught: we simply do not know where machine intelligence will take us. A machine that masters all of the skills above, Russell says, would become “an important decision maker in the real world.” Absorbing torrents of information from the internet, television, radio, satellites and CCTV, it would build a picture of the world and the people in it more complete than any human could match.

What good might come of this? In education, AI tutors could maximize the potential of every child. Mastering the vast complexity of the human body, AI could help humanity eliminate disease. As digital personal assistants, such systems would put Siri and Alexa to shame: “In effect, you would have a high-powered lawyer, accountant, and political consultant on call at all times.”

And the bad? Without major advances in AI safety and regulation, Russell foresees chaos, and his chapter on the misuse of AI makes for grim reading. Advanced AI would hand governments powers of surveillance, persuasion and control that would make “the Stasi look like amateurs.” Terminator-style killer robots may never wipe out humanity, but drones that select and kill targets based on facial features, skin color or uniform are entirely feasible. As for work, when we can no longer make a living from physical or mental labor, we can still offer our humanity. As Russell puts it: “We will need to become good at being human.”

Is there anything worse than an AI that wrecks society? Yes: an AI that wrecks society and cannot be switched off. It is a terrifying, seemingly absurd prospect, and Russell devotes considerable time to it. The idea is that an intelligent machine, like HAL in “2001: A Space Odyssey,” will realize that its goal cannot be achieved if someone pulls the plug. Give a superintelligent AI a clear task – fetch the coffee, say – and its first move may be to disable its own off switch. Russell argues the answer lies in an entirely new approach: build AI that is uncertain about its goals, so that it never objects to being switched off. He advocates developing “provably beneficial” AI, whose benefit to its human users can be mathematically demonstrated. It is, admittedly, a work in progress. How will my AI treat you?

One thing should be clear: many AI researchers despair at such fears. After the philosopher Nick Bostrom highlighted the potential dangers of general AI in “Superintelligence” (2014), the US think tank Information Technology and Innovation Foundation gave its Luddite Award to “alarmists touting an artificial intelligence apocalypse.” The already tedious debate over AI safety is verging on tribalism, and the award was a pointed jab. The dangers here go beyond the sudden extinction of our species to an irreversible slide into decline: a loss of striving and understanding that erodes the foundations of civilization, leaving us passengers on a great machine-driven ship, helplessly and endlessly adrift.

Translator: boxi.