This article comes from the public account quanmeipai (ID: quanmeipai) by Tencent Media. Ai Faner was authorized to republish it.

Recently, artificial intelligence has been all over the trending searches: Facebook tackled the modern pain point of “what to wear today” by launching Fashion++, which fine-tunes outfits through algorithms; American writer Andrew Kaplan plans to use conversational AI and digital-assistant devices to achieve “eternal life” in the cloud; and ZAO uses deepfake technology to swap faces in videos, passing the fake off as real…

From AI face swapping to AI virtual try-ons to AI-enabled “digital immortality”… today, artificial intelligence is permeating our lives in every direction, too important to ignore.

How did artificial intelligence get to where it is today? In this issue, quanmeipai (ID: quanmeipai) exclusively compiles a Fast Company article and looks back at more than 70 years of artificial intelligence development, to see how these highs and lows pushed artificial intelligence to “evolve” and change human life.

Isaac Asimov proposes the “Three Laws of Robotics” (1942)

In 1942, Isaac Asimov published the short story “Runaround” (also translated as “Circle Dance”). In it, the famous science fiction writer spelled out his “Three Laws of Robotics” in full for the first time:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

“Runaround” tells the story of a robot named Speedy, which is ordered by humans to retrieve selenium from a dangerous selenium pool. As it gets closer to its destination, the danger grows, and the Third Law forces it to back away to protect itself; but as it moves away, the Second Law compels it to obey the order. Caught in this contradiction, Speedy ends up circling the selenium pool endlessly.


▲ On Mercury, two astronauts search for Speedy, the robot stuck running in circles

Asimov’s “Robot” series attracted many science fiction fans, some of whom began to think about the possibility of machines that could think. Even now, many people still invoke Asimov’s Three Laws when reasoning about artificial intelligence.

Alan Turing proposes the “imitation game” (1950)

In 1950, Alan Turing wrote: “I propose to consider the question, ‘Can machines think?’”

This sentence opens his groundbreaking research paper “Computing Machinery and Intelligence.” The paper proposes a framework for thinking about machine intelligence. He asked: if a machine can imitate the behavior of a conscious human, isn’t it effectively conscious?


▲ Alan Turing first proposed a benchmark for judging machine consciousness in 1950

Inspired by this theoretical question, Turing’s classic “imitation game” was born. The game involves three participants: a human, a machine, and a human “interrogator.” The interrogator is physically separated from the other two, asks them questions, and tries to tell the machine from the human based on their plain-text responses alone (to avoid interference from spoken answers). If a machine can converse with humans (Note: Turing thought the ideal setup would use a teleprinter, i.e. a “teletypewriter”) in such a way that the interrogator struggles to tell which is the human and which is the machine, then the machine is considered intelligent.
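To make the setup concrete, here is a minimal Python sketch of the game’s structure. The respondents and the judge are invented stand-ins for illustration, not anything Turing specified:

```python
import random

def machine(question: str) -> str:
    # Hypothetical stand-in for the machine respondent.
    return "That is an interesting question."

def human(question: str) -> str:
    # Stand-in for the human respondent's typed reply.
    return "I'd say yes, though it depends on what you mean."

def imitation_game(questions, judge) -> bool:
    """One session of the imitation game.

    The judge sees only two anonymous text transcripts, "X" and "Y",
    and must say which one is the machine. Returns True if the judge
    identifies the machine correctly.
    """
    # Randomly assign labels, mimicking the physical separation
    # (and text-only channel) of Turing's setup.
    respondents = {"X": machine, "Y": human}
    if random.random() < 0.5:
        respondents = {"X": human, "Y": machine}

    transcripts = {
        label: [(q, answer(q)) for q in questions]
        for label, answer in respondents.items()
    }
    guess = judge(transcripts)  # judge returns "X" or "Y"
    return respondents[guess] is machine

# A naive judge that guesses at random, for illustration only.
naive_judge = lambda transcripts: random.choice(["X", "Y"])
print(imitation_game(["Can machines think?"], naive_judge))
```

In this framing, a machine “passes” if, over many sessions, the judge identifies it no better than chance.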

In Turing’s era, no machine could pass such a test, and arguably none has passed it convincingly even today. But his test provides a simple standard for judging whether a machine is intelligent, and it helped shape the philosophy of artificial intelligence.

Dartmouth Holds Artificial Intelligence Conference (1956)

In 1955, scientists around the world were already thinking about concepts such as neural networks and natural language, but there was no unified term that covered these areas of machine intelligence.

John McCarthy, a mathematics professor at Dartmouth College, coined the term “artificial intelligence” to cover all of this.

A team led by McCarthy applied for funding to hold an artificial intelligence conference the following year. In the summer of 1956, they invited many top researchers to Dartmouth Hall. The scientists discussed many potential areas of artificial intelligence research, including learning and search, vision, reasoning, language and cognition, games (especially chess), and human-machine interaction (such as personal robots).


The general consensus from those discussions was that artificial intelligence has great potential to benefit mankind. The attendees sketched an overall framework of research areas in which machine intelligence could have an impact. The conference codified and promoted the development of artificial intelligence as a research discipline for many years to come.

Frank Rosenblatt creates the perceptron (1957)

The basic unit of a neural network is the perceptron, which corresponds to a single node. It receives a series of inputs, performs a calculation on them, and outputs a classification along with a confidence level. For example, the inputs might each analyze a different part of a picture and “vote” on whether the image contains a face; the node then tallies the votes and confidence levels and draws a conclusion. Today, artificial neural networks running on powerful computers connect billions of such structures.

But the perceptron predates powerful computers. In the late 1950s, Frank Rosenblatt, a young psychologist, built a physical model of the perceptron: a machine called the Mark I.


▲ Frank Rosenblatt building a “neural network” at the Cornell Aeronautical Laboratory

The machine was designed for image recognition: an analog neural network in which a matrix of photosensitive cells was wired to the nodes. Rosenblatt developed a “perceptron algorithm” that guided the network to gradually adjust the strength of its inputs until it consistently identified images correctly, effectively allowing it to learn.
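Rosenblatt’s original hardware worked on photocell inputs, but the learning rule itself is simple. Below is a minimal Python sketch of the perceptron learning rule on toy data (the logical AND function stands in for the Mark I’s image inputs):

```python
import numpy as np

def train_perceptron(inputs, labels, epochs=100, lr=0.1):
    """Train a single perceptron node with Rosenblatt-style updates.

    inputs: array of shape (n_samples, n_features); labels: 0 or 1.
    """
    weights = np.zeros(inputs.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(inputs, labels):
            prediction = 1 if np.dot(weights, x) + bias > 0 else 0
            error = target - prediction
            # Each mistake nudges the input strengths toward the
            # correct answer -- the "gradual adjustment" described above.
            weights += lr * error * x
            bias += lr * error
    return weights, bias

# Toy example: learn the logical AND function (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, x) + b > 0 else 0 for x in X])  # [0, 0, 0, 1]
```

Each misclassification nudges the weights toward the correct answer; for linearly separable data like this, the rule is guaranteed to converge.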

At the time, Rosenblatt’s work was funded by the US Navy, which held a press conference about it. The New York Times seized on the occasion: “The Navy revealed the embryo of an electronic computer that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

Today, this earliest perceptron is housed in the Smithsonian Institution.

Scientists continued to debate perceptrons into the 1980s. The machine mattered because it gave neural networks a physical embodiment; before it, they had been primarily an academic concept.

The first winter of artificial intelligence (1970s)

For most of its history, artificial intelligence has been a research endeavor. For much of the 1960s, government agencies such as the Defense Advanced Research Projects Agency (DARPA) invested heavily in research while demanding little in the way of concrete returns. Meanwhile, to keep the funding flowing, artificial intelligence researchers often exaggerated the prospects of their work. All of this changed in the late 1960s and early 1970s.

In 1966, the Automatic Language Processing Advisory Committee (ALPAC) submitted a report to the US government; in 1973, the British Science Research Council (SRC) delivered a report to the UK government written by the well-known applied mathematician Sir James Lighthill. Both reports questioned the actual progress in various areas of artificial intelligence research and took a deeply pessimistic view of the technology’s prospects. The Lighthill report argued that artificial intelligence for tasks such as speech recognition would be difficult to scale up to a size useful to the government or the military.


▲ A 1973 debate between AI advocates and the critic James Lighthill, recorded by the BBC

As a result, both the US and UK governments began cutting funding for university artificial intelligence research. DARPA, which had funded AI research generously for most of the 1960s, now demanded that research programs come with clear timelines and detailed descriptions of their expected deliverables.

Artificial intelligence at the time seemed doomed to disappoint, its abilities perhaps never to reach human level. AI’s first “winter” lasted through the 1970s and stretched into the 1980s.

Artificial intelligence enters its second winter (1987)

The revival of artificial intelligence in the 1980s began with the development and commercial success of expert systems.

An expert system is a computer program that simulates human experts to solve problems within a specific domain. It stores a large body of domain knowledge and imitates the decision-making of human experts.
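As a rough illustration of the idea, here is a minimal forward-chaining rule engine in Python. The facts and rules are invented for this sketch; real commercial systems encoded thousands of rules hand-written by domain experts:

```python
# Invented toy rules: each rule pairs a set of required facts
# with a conclusion to add once all of them are known.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts):
    """Fire every rule whose conditions hold, adding its conclusion,
    and repeat until no new fact can be inferred."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= inferred and conclusion not in inferred:
                inferred.add(conclusion)
                changed = True
    return inferred

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# -> also contains "possible_flu" and "refer_to_doctor"
```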

Expert systems of this kind were originally developed by Carnegie Mellon University for Digital Equipment Corporation, which quickly adopted the technology.

But expert systems required expensive dedicated hardware, and therein lay a problem: by then, Sun Microsystems workstations and personal computers from Apple and IBM offered similar capabilities at a lower price. In 1987, the market for dedicated expert-system machines collapsed, and the major vendors left the scene.

In the early 1980s, the expert-system boom had prompted DARPA to increase its investment in artificial intelligence research. But then the situation changed again: DARPA cut off funding for most artificial intelligence projects, sparing only a select few.

The term “artificial intelligence” once again became taboo in research circles. To avoid being regarded as unrealistic and over-eager