Smarter, faster, and more expensive.

The Translation Bureau is our subsidiary translation team, covering science and technology, business, the workplace, lifestyle, and other fields, with a focus on introducing new foreign technologies, perspectives, and trends.

Editor’s note: This summer, OpenAI launched a new computer system called GPT-3. It has demonstrated astonishing natural language abilities: it can write articles, translate, generate code, and even learn a person’s patterns of speech and carry on a conversation in that style. However, GPT-3 also has real shortcomings that will need to be addressed over time. This is the second of two parts. The first part introduced the functions and features of GPT-3; this part covers its defects and its future direction. This article is translated from The New York Times; the author is Cade Metz and the original title is “My Name Is GPT-3 and I Approved This Article.” We hope it inspires you.

This summer, OpenAI, an artificial intelligence laboratory in San Francisco, unveiled a technology that had been in the making for several months: a new system called GPT-3. The system learned the nuances of natural language by analyzing the text of thousands of e-books, Wikipedia, and nearly a trillion words posted to blogs, social media, and the rest of the Internet.

The overlooked flaws of GPT-3

In the mid-1960s, Joseph Weizenbaum, a researcher at the Massachusetts Institute of Technology, built an automated psychotherapist he called ELIZA. Seen from 2020, this chatbot was very simple.

Unlike GPT-3, ELIZA did not learn from prose. It operated according to a few basic rules defined by its designer, essentially repeating whatever you said back to it in the form of a question. Yet what surprised Weizenbaum was that many people treated the bot as if it were human, confiding their problems without reservation and taking comfort in its responses.
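To show just how simple those rules were, here is a minimal sketch of the “reflect the statement back as a question” trick. It is a rough approximation for illustration only, not Weizenbaum’s original program, which used a much larger set of hand-written pattern rules.

```python
# A rough, minimal approximation of ELIZA's trick: swap a few pronouns
# and reflect the user's statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def eliza_reply(statement: str) -> str:
    # Lowercase, drop trailing punctuation, and swap each word if a
    # reflection rule exists for it.
    words = statement.lower().strip(".!?").split()
    reflected = [REFLECTIONS.get(word, word) for word in words]
    return "Why do you say " + " ".join(reflected) + "?"

print(eliza_reply("I am unhappy with my job"))
# -> Why do you say you are unhappy with your job?
```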

When dogs and other animals exhibit even small bits of human-like behavior, we tend to assume they are more like us than they really are. The same goes for machines. “People get sucked in,” said Colin Allen, a professor at the University of Pittsburgh who studies cognitive skills in both animals and machines, “even if they know they are being sucked in.”

Something similar is happening with GPT-3. Because it can generate convincing tweets, blog posts, and computer code, we read humanity into this digital system and pay far less attention to its limitations.

In practice, the system fails about as often as it succeeds. We overlook the fact that the computer code it writes needs fine-tuning by human programmers: delete a line here, add a line there. We do not notice that its knack for conversation breaks down after a few exchanges, because it cannot “remember” what it said only seconds earlier. Nor do we fully appreciate that although the system generated a convincing blog post for Liam Porr, it was Porr who supplied the title, the image, and the first few sentences, and who deleted the sentences that were not convincing.

Porr believes that GPT-3 will not pose a major threat in the fight against disinformation in the short term, because it still needs so much human help. A tool like this only becomes truly dangerous if it can generate large amounts of convincing false information entirely on its own, far more than a hired team could produce today.

Similarly, when application designers asked Singer whether GPT-3 was a threat to their careers, he reassured them: at least, not yet. He sees it as a way of making their work easier. “If GPT-3 can get you 70 percent of the way there, that is a lot of tedious work saved,” he said.

What we don’t know is how much this technology will improve in the coming months and years.

Smarter, faster, and more expensive

While OpenAI researchers were training GPT-3 on more than a trillion words posted to the Internet, they ran a second experiment, training a similar system on tens of thousands of digital photos. That system could analyze all those photos and learn to build images in much the same way that GPT-3 builds paragraphs. Given half a photo of a cat, it could generate the rest of the cat.
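To make the analogy concrete, here is a minimal, purely illustrative sketch of autoregressive image completion: the image is flattened into a sequence of pixels, and a model predicts each remaining pixel from the pixels that came before it, just as a language model predicts the next word. The `toy_next_pixel_distribution` function below is a hypothetical stand-in; OpenAI’s actual experiment used a large Transformer trained on many photos.

```python
import numpy as np

N_VALUES = 256   # possible intensities for a grayscale pixel
IMG_SIZE = 8     # tiny 8x8 image for illustration

def toy_next_pixel_distribution(prefix):
    """Hypothetical stand-in for a trained model: returns a probability
    distribution over the next pixel, given the pixels seen so far."""
    if len(prefix) == 0:
        return np.full(N_VALUES, 1.0 / N_VALUES)
    # Toy heuristic: favor values close to the previous pixel,
    # which yields smooth (if boring) completions.
    last = prefix[-1]
    logits = -np.abs(np.arange(N_VALUES) - last) / 16.0
    probs = np.exp(logits)
    return probs / probs.sum()

def complete_image(top_half):
    """Given the first half of a flattened image, sample the rest one
    pixel at a time, the way a language model samples the next word."""
    pixels = list(top_half)
    while len(pixels) < IMG_SIZE * IMG_SIZE:
        probs = toy_next_pixel_distribution(pixels)
        pixels.append(int(np.random.choice(N_VALUES, p=probs)))
    return np.array(pixels).reshape(IMG_SIZE, IMG_SIZE)

# Usage: feed in the top half of an 8x8 image and let the toy model
# "imagine" the bottom half.
top_half = np.random.randint(0, N_VALUES, IMG_SIZE * IMG_SIZE // 2)
print(complete_image(top_half).shape)  # (8, 8)
```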

Some researchers believe the experiment shows that such a system can ultimately handle tasks across many dimensions, such as language, vision, and sound, much as humans do. Even though it was trained only on language, they said, the system is already reaching into other domains, whether computer programming, chess, or guitar tablature.

However, continuing to improve this technology is no small matter. Processing all that Internet data required a dedicated supercomputer running continuously for months, an extremely expensive undertaking. When asked whether such a project could run into the millions of dollars, Sam Altman, OpenAI's chief executive, said the cost was actually "higher," possibly reaching tens of millions.

Amodei, OpenAI's vice president of research, said the technology still has room to improve, given more processing power applied to more data. But he also said the approach may be close to running out of steam.

At the very least, GPT-3 gives artificial intelligence researchers and entrepreneurs a new tool, a way to build all kinds of new technologies and products. The computer programmer Wrigley recently quit his day job to found a company called LearnFromAnyone, which aims to use GPT-3 to build an automated tutor that can impersonate figures ranging from the scientist Douglas Hofstadter to the venture capitalist Peter Thiel. Other start-ups are working on automatically generating code for computer programmers and automatically composing promotional emails and tweets for marketing professionals.
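As a rough illustration of how such products are built, here is a minimal sketch of prompting GPT-3 to role-play a tutor. This is not LearnFromAnyone's actual code; the persona text, parameters, and helper function are assumptions, and the call uses the OpenAI completion API as it existed around 2020 (older versions of the `openai` Python package).

```python
import os
import openai

# Assumes an API key is available in the environment.
openai.api_key = os.getenv("OPENAI_API_KEY")

def ask_tutor(persona: str, question: str) -> str:
    """Prepend a short persona description so the model answers
    'in character', then let GPT-3 complete the dialogue."""
    prompt = (
        f"The following is a conversation with {persona}. "
        "The tutor answers patiently and in detail.\n\n"
        f"Student: {question}\nTutor:"
    )
    response = openai.Completion.create(
        engine="davinci",        # the original GPT-3 engine name
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
        stop=["Student:"],       # stop before the next student turn
    )
    return response.choices[0].text.strip()

# Usage (requires a valid API key):
# print(ask_tutor("Douglas Hofstadter, author of 'Godel, Escher, Bach'",
#                 "What is a strange loop?"))
```

The design relies entirely on prompt construction: the same underlying model impersonates different people simply by changing the persona string.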

But it is not yet clear how effective these services will ultimately be. If GPT-3 generates the right text only half the time, will that satisfy professionals? And it is unclear whether this technique is a path to genuinely conversational machines, let alone truly intelligent systems. Amodei said that further progress down the long road toward machines that can imitate the human brain will require entirely new ideas. “It’s a bit like a chemical reaction,” he said. “We have one ingredient, but we need other ingredients as well.”

Translator: Jane

Recommended reading: GPT-3, the strongest AI model for natural language processing: how many possibilities does the future hold? (Part 1)