This article is from the WeChat public account “cultural aspect” (ID: whzh_21bcr); author: CHEN Xiao; title image from IC Photo.

Introduction

Since the “AlphaGo incident,” artificial intelligence has become a buzzword. In fact, some 70 years have passed since the birth of artificial intelligence, and the field has seen three waves of development. China, however, has participated only in the third wave, and attention to artificial intelligence has been amplified within just a few years, so society as a whole still lacks a mature understanding of it.

Many people believe that in the near future artificial intelligence will pass some critical point and then exponentially surpass human capability. Others hold that existing artificial intelligence technology is mere “artificial stupidity”: there is only as much intelligence as there is human labor behind it, so it is not actually intelligent. Both misunderstandings are major ideological obstacles to the development of artificial intelligence in China.

Professor Xiaoping Chen of the University of Science and Technology of China, who has long conducted cross-disciplinary research on artificial intelligence and robotics, surveys the technical achievements of artificial intelligence over the past 70 years, analyzes how artificial intelligence works and how intelligent it really is, and proposes “closedness criteria” for understanding it. He believes that in closed scenarios we can not only avoid the risk of artificial intelligence technology going out of control, but also enable existing artificial intelligence technology to play a key role in China’s industrial upgrading over the next 10 to 15 years, opening up new space for industrial development.

Closed scene: the industrialization path of artificial intelligence

At present, society’s discussion of artificial intelligence technology is sharply divided. Some believe that artificial intelligence technology has surpassed, or will soon surpass, the human level of ability and can therefore be applied unconditionally, and that it will also trigger a serious ethical crisis. Others believe that existing artificial intelligence technology is merely “artificial stupidity,” with “only as much intelligence as there is human labor,” and therefore cannot be applied and carries no ethical risk at all. Yet restricting the development of artificial intelligence from now on based on the former view, or abandoning all supervision of its ethical risks based on the latter, would be equally unwise.

This article starts from a summary of the technological achievements of artificial intelligence over the past 70 years. Based on an understanding of the technical nature of these achievements, it proposes the closedness and strong closedness criteria for artificial intelligence, forming a new perspective for observing artificial intelligence and leading to the following conclusions:

First, existing artificial intelligence technology can be applied at scale in scenarios that satisfy the strong closedness criterion, but it is difficult to apply successfully in scenarios that do not;

Second, under the constraint of the strong closedness criterion, there is no short-term risk of artificial intelligence technology going out of control, and long-term risks are also controllable;

Third, within the effective range of the strong closedness criterion, the main risks of artificial intelligence come from technical misuse and management mistakes. Policies that deviate from the nature of artificial intelligence technology will find it hard to escape the regulatory dilemma in which clamping down kills the field while letting go breeds chaos.

The urgent need for artificial intelligence applications and governance

Artificial intelligence has a history of about 70 years, over which there have been three waves, each lasting roughly 20 years. Some instead divide past artificial intelligence technology into two generations, each taking 30 to 40 years to develop. Since the window for this round of industrial upgrading is only 10 to 15 years, while a new generation of technology takes decades to emerge and mature, the artificial intelligence technology this round of industrial upgrading relies on will mainly be the engineering deployment of existing technology, not the maturation of a next generation of new technology.

The following question is therefore posed sharply to the whole of society: over the next 10 to 15 years, can existing artificial intelligence technology play a key role in China’s industrial upgrading, and if so, how? If we cannot answer this question from the nature of existing artificial intelligence technology, the national artificial intelligence strategy will inevitably fail, and the industrial upgrading tied to it will suffer greatly.

In the developed countries of the West, all three waves of artificial intelligence attracted widespread attention, so all sectors of society have a long-standing acquaintance with the field and can more easily form objective views. In China, by contrast, only the third wave has drawn general attention, and that attention has been amplified within just a few years, so society’s understanding of the real state of artificial intelligence technology is widely insufficient; some even mistake foreign film and television works for reality.

Experts and scholars in China’s artificial intelligence field rarely take part in public discussion, and rarely participate in ethical risk research or policy formulation. If the relevant policy recommendations do not accurately reflect the nature, application conditions, and development trends of artificial intelligence technology, they will inevitably carry a huge risk of management error.

Technical progress of the three waves of artificial intelligence

Artificial intelligence research has produced at least thousands of different technical routes. Among them, two have been the most successful and influential; they may be called the two classic modes of thinking in artificial intelligence: the “model-based brute-force method” and the “metamodel-based training method.” Although these two modes of thinking cannot represent the whole of artificial intelligence, they have risen beyond the level of individual techniques to the level of “machine thinking.” They therefore play a key role in current applications and most deserve attention.

The first classic mode of thinking in artificial intelligence is the “model-based brute-force method.” Its basic design principles are: first, construct a precise model of the problem; second, establish a knowledge representation or state space that expresses the model and makes inference or search computationally feasible; third, within that knowledge representation or state space, use inference or search to exhaust all options and find a solution to the problem. The brute-force method thus has two main implementations, the inference method and the search method, which share the same basic premise: a well-defined, precise symbolic model of the problem to be solved.

The inference method usually adopts logical formalization, probabilistic formalization, or decision-theoretic formalization as its means of knowledge representation. Taking logical formalization as an example, an AI inference system consists of a knowledge base and an inference engine. The inference engine is a computer program that performs inference, typically developed by a specialized team over a long period, while the knowledge base must be developed separately for each application by its own developers. The inference engine draws inferences from the knowledge in the knowledge base and answers questions.
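The division of labor described above, a generic inference engine plus an application-specific knowledge base, can be illustrated with a toy forward-chaining engine. This is a sketch for illustration only; the facts and rule names are invented, and real inference engines are far more sophisticated.

```python
# Toy forward-chaining inference engine: the engine itself is generic,
# while the knowledge base (facts and rules) is application-specific.

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are already known facts.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Application-specific knowledge base (invented example).
rules = [
    (["has_feathers", "lays_eggs"], "is_bird"),
    (["is_bird", "cannot_fly"], "is_flightless_bird"),
]
facts = {"has_feathers", "lays_eggs", "cannot_fly"}

derived = infer(facts, rules)
print("is_flightless_bird" in derived)  # the engine answers a query from the KB
```

As the article notes, the engine's answers are only as good as the knowledge base: if a needed fact or rule is missing, the question falls outside the knowledge base's effective range.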

The development of an inference engine based on a formal logic system rests on the soundness (“fidelity”) of the corresponding logic, so the inference engine itself is “provably correct”: as long as the knowledge base it uses is “correct,” the engine’s answer to any question within the effective range of that knowledge base is correct. However, the “correctness” of a knowledge base, and its adequacy for an application domain, have no recognized, operable standard yet; they can only be checked empirically through testing.

The second classic mode of thinking in artificial intelligence is the “metamodel-based training method.” Its basic design principles are: first, establish a metamodel of the problem; second, with reference to the metamodel, collect training data and label it manually, and select a suitable artificial neural network structure and a supervised learning algorithm; third, following the principle of fitting the data, use the supervised learning algorithm to train the connection weights of the artificial neural network on the labeled data so that the network’s total output error is minimized.
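The three steps above, choosing a network structure, labeling data, and fitting the weights to minimize total output error, can be sketched with a minimal single-neuron example. The dataset here is invented and the “network” is a single sigmoid unit trained by plain gradient descent; real systems use deep networks and vastly more data.

```python
import math

# Step 2 (per the text): manually collected and labeled training data.
# Invented toy dataset: x = an input feature, y = binary label.
data = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]

# Step 1/2: the chosen "network structure" is a single neuron (weight w, bias b).
w, b = 0.0, 0.0

def predict(x, w, b):
    # Sigmoid output in (0, 1).
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Step 3: supervised learning by gradient descent, minimizing total output error.
lr = 0.5
for _ in range(2000):
    dw = db = 0.0
    for x, y in data:
        err = predict(x, w, b) - y  # output error on one labeled example
        dw += err * x
        db += err
    w -= lr * dw / len(data)
    b -= lr * db / len(data)

# After training, the neuron separates the two labeled classes.
print(round(predict(1.0, w, b)), round(predict(4.0, w, b)))  # 0 1
```

Note that everything outside the loop, the dataset, the network structure, the learning rate, and the stopping point, was chosen by hand, which is exactly the bundle of manual choices the article groups under the “metamodel.”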

A trained artificial neural network can quickly compute the corresponding output for any input, to a certain accuracy. For example, on a given image library, some trained deep neural networks can classify input pictures and output the types of objects they contain, with classification accuracy exceeding that of humans. At present, however, the training method is neither provably correct nor even interpretable.

In the training method, a supervised learning algorithm and labeled data alone are not enough; learning goals, evaluation criteria, test methods, and test tools must also be chosen manually. This article gathers these manual choices together under the term “metamodel.” The training method therefore requires far more than training data and a training algorithm, and the claim that artificial intelligence possesses a “self-learning” ability independent of humans is all the more unfounded.

Both the training method and the brute-force method suffer from “brittleness”: if an input falls outside the coverage of the knowledge base or of the trained artificial neural network, the system produces an erroneous output. Targeting the perceptual noise ubiquitous in practical applications, MIT ran a test.

Researchers first used a well-known commercial machine learning system to train a deep neural network that could identify various firearms in photos with a high recognition rate. They then artificially modified a small number of pixels in those photos (representing perceptual noise). The modifications had no effect on human recognition, but the trained deep neural network could no longer recognize the modified photos correctly and made bizarre errors. Since the 1980s, brittleness has been a major bottleneck restricting the successful application of existing artificial intelligence technology.
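The brittleness such tests expose can be reproduced in miniature: for even a simple linear classifier, a perturbation that is tiny per pixel but aligned against the model's weights flips the output. This is an illustrative sketch in the spirit of the fast-gradient-sign attack, not the MIT experiment itself; the 8-"pixel" image and the weights are invented.

```python
# A toy linear "image classifier": score = w . x, class = sign(score).
# Invented 8-pixel image and weights, purely for illustration.
w = [0.5, -0.3, 0.8, 0.1, -0.6, 0.4, 0.2, -0.2]
x = [0.9, 0.1, 0.2, 0.8, 0.9, 0.1, 0.3, 0.7]

def classify(img):
    score = sum(wi * xi for wi, xi in zip(w, img))
    return 1 if score > 0 else -1

original = classify(x)

# Adversarial perturbation: shift each pixel by at most eps in the
# direction that moves the score toward the opposite class.
eps = 0.12
direction = -original
x_adv = [xi + eps * direction * (1 if wi > 0 else -1)
         for wi, xi in zip(w, x)]

perturbed = classify(x_adv)
print(original, perturbed)  # same image to a human eye, different class
```

No pixel moves by more than 0.12, yet the classification flips, because the small changes all push the score the same way. Deep networks are nonlinear, but the same weight-aligned noise effect underlies the photo-modification failures described above.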

Beyond brittleness, the brute-force and training methods have other shortcomings. In engineering, the main drawback of the training method is the manual labeling of large amounts of raw data, which is time-consuming and labor-intensive and whose quality is hard to guarantee; the main drawback of the brute-force method is the manual construction of a knowledge base or development of a search space, tasks that are very difficult for most developers. Trying to make the brute-force and training methods complement each other, eliminating or reducing their respective shortcomings, has therefore long been a research topic in artificial intelligence.

AlphaGo Zero uses four artificial intelligence techniques, including two brute-force techniques: a simplified decision-theoretic model and Monte Carlo tree search. These two techniques are used for self-play (the program playing against itself), automatically generating training data and labels; in the process it played not only many games humans have played, but also many games humans never have.

The other two are training techniques: residual networks and reinforcement learning. The reinforcement learning algorithm trains the residual network on all the training data and labels generated by self-play, continuously improving it until the trained network plays far beyond human level. This also shows that viewing AlphaGo Zero as a victory of deep learning alone is a huge misunderstanding: it is precisely the combination of brute force and training that frees AlphaGo Zero from manual labeling and from human Go knowledge (other than the rules).

According to the rules, Go admits roughly 10^300 different games. Through 40 days of self-play, AlphaGo Zero played 29 million games (fewer than 10^8), exploring only a minuscule fraction of all possible games, so AlphaGo Zero still has enormous room for improvement. This shows that within the effective working range of existing artificial intelligence technology, the capabilities of artificial intelligence systems have already far surpassed those of humans, and the claim that there is “only as much intelligence as there is human labor” is unfounded and inconsistent with the facts.
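The gap between 29 million self-play games and roughly 10^300 legal games can be made concrete with one line of arithmetic (the 10^300 figure is the article's own rough estimate):

```python
import math

# Figures from the text: ~10^300 possible Go games;
# 29 million games played in 40 days of self-play.
played = 29_000_000

print(played < 10 ** 8)                       # indeed fewer than 10^8 games
fraction_exponent = math.log10(played) - 300  # log10 of the fraction explored
print(round(fraction_exponent, 1))            # about -292.5
```

That is, AlphaGo Zero explored on the order of one part in 10^292 of the game space, which is the sense in which its room for improvement remains huge.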

The above analysis shows that the two extreme statements popular in society are not valid. So, what is the true capability of existing artificial intelligence technology?

The capability boundary of existing artificial intelligence technology: closedness

Some people think Go is the hardest of problems, and that since AlphaGo has surpassed humans at the hardest problem, artificial intelligence has of course surpassed humans in every respect. In fact, for artificial intelligence, Go is among the easiest kinds of problems. Not only do harder problems exist; on those problems, existing artificial intelligence technology falls far short of human ability.

We therefore need some criterion by which to judge objectively which applications existing artificial intelligence technology can solve and which it cannot. That criterion is closedness. For ease of understanding, a description of closedness is given here in as accessible a form as possible.

An application scenario is closed if it meets one of the following two conditions: (1) there exists a computable and semantically complete model, and all questions fall within the model’s solvable range; (2) there exists a finite, deterministic metamodel, and the representative data set is also finite.

Closedness conditions (1) and (2) correspond to the brute-force method and the training method, respectively. If an application scenario meets neither condition, its applications cannot be solved by either method. For example, suppose a scenario has a computable and semantically complete model, but some questions fall outside the model’s solvable range; then there is no guarantee that an intelligent system’s answers to those questions are correct.

Closedness thus gives the theoretically necessary conditions under which the applications in a scenario can be solved by the brute-force or training method: in applications that do not meet these conditions, existing artificial intelligence technology cannot succeed. Real scenarios, however, are often very complicated, and there is some distance between theoretically necessary conditions and engineering practice. For example, when the training method is used for image classification, the misclassification rate is not guaranteed to be zero, and some errors may be serious enough to fail the user’s needs. To narrow the distance between theory and practice as much as possible, this article introduces the strong closedness criterion, as follows.

A scenario is strongly closed if it meets all of the following conditions: (1) the scenario is closed; (2) errors in the scenario are non-fatal, that is, failures of an intelligent system applied in the scenario are not fatal; (3) the basic conditions are mature, that is, the requirements contained in closedness are actually met in the application scenario.
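Read as an engineering checklist, the strong closedness criterion is a conjunction of checks on top of a disjunction (either route into closedness suffices). The sketch below is purely illustrative: the field names, and the simplification of each condition to a yes/no flag that a project team would assess by hand, are this example's own.

```python
from dataclasses import dataclass

# Illustrative simplification: each condition becomes a flag assessed
# by the project team, not something computable automatically.
@dataclass
class Scenario:
    has_complete_model: bool       # closedness (1): brute-force route
    has_finite_metamodel: bool     # closedness (2): training route
    has_representative_data: bool  # closedness (2): finite representative data
    errors_non_fatal: bool         # strong closedness (2)
    conditions_mature: bool        # strong closedness (3)

def is_closed(s: Scenario) -> bool:
    # Closed if EITHER route's requirements are met.
    return s.has_complete_model or (
        s.has_finite_metamodel and s.has_representative_data)

def is_strongly_closed(s: Scenario) -> bool:
    return is_closed(s) and s.errors_non_fatal and s.conditions_mature

# Two of the article's examples: a structured production line
# versus open-domain human-machine dialogue.
line = Scenario(True, False, False, True, True)
dialogue = Scenario(False, False, False, True, False)
print(is_strongly_closed(line), is_strongly_closed(dialogue))  # True False
```

The two typical failure modes discussed next, a model that exists in theory but cannot be built in time, and a representative data set that exists in theory but cannot be collected, both land in the `conditions_mature` check.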

The maturity of the basic conditions contains a lot of content. Here are two important typical situations.

The first case is a model that exists in theory but cannot be built in engineering. Condition (1) of the closedness criterion requires the existence of a computable and semantically complete model, where “existence” need only hold in theory. For a concrete engineering project, however, a model’s theoretical existence is not enough: it must be possible to actually build the model within the construction period the project requires.

Some scenarios, however, are so complicated that their models cannot actually be built within the project deadline. Such scenarios satisfy the closedness criterion yet cannot be implemented successfully as projects. The maturity requirement on basic conditions demands that the needed model can actually be built within the project’s construction period, so the strong closedness criterion reflects engineering feasibility.

The second case is a representative data set that exists in theory but cannot be obtained in engineering. For a complex problem, there is no guarantee of actually finding the representative data set required by condition (2) of the closedness criterion, even when such a data set can be proven to exist in theory. This is why training methods are currently used mainly in scenarios where environmental change is negligible or controllable, since representative data sets are obtainable there. “Environmental change is negligible or controllable” is a specific requirement of the strong closedness criterion, one not contained in the closedness criterion itself.

When either of these situations arises in an application scenario, how can the strong closedness criterion be met? For most enterprises, especially small and medium-sized ones, the most effective approach is scene cropping: reducing the scale of the scene and discarding the parts that are hard to model, so that the cropped scene conforms to the strong closedness criterion.

Moreover, in practical applications artificial intelligence technology usually plays a finishing-touch role rather than solving all of an industry’s technical problems on its own. The typical situation is that other conditions are already in place yet the expected engineering goals still cannot be reached, and artificial intelligence technology is introduced to overcome the remaining difficulty and play the key role. This, too, is part of the maturity of basic conditions. For example, the informatization and automation of traditional manufacturing, and the construction of large-scale high-standard farmland, have respectively laid important and even decisive foundations for the intelligentization of China’s traditional manufacturing and modern agriculture.

The landing paths of existing artificial intelligence technology in the real economy

In the real economy, especially in manufacturing, a great many scenes are very complicated in their natural form, and it is difficult to make them conform to the strong closedness criterion through scene cropping alone. For such cases, one can take the approach of scene transformation. There are currently at least three scene transformation strategies, which can serve as landing paths for existing artificial intelligence technology in the real economy.

The first landing path: closing the scene. The specific method is to transform a scene that is non-closed in its natural form so that the transformed scene is strongly closed. Such transformation is common in manufacturing and has often been successful. For example, in automobile manufacturing the original production process was manually operated and contained a great deal of uncertainty; it was not a closed scene.

The essence of building an automobile production line is to establish a physical three-dimensional coordinate system in which everything that appears in the production process (bodies, parts, robots, and other equipment) is precisely positioned, with errors controlled below the sub-millimeter level. The non-closed scene is thereby fully transformed into a closed one (in industry this transformation is called “structuring”), so that all kinds of intelligent and automated equipment can run automatically and complete production tasks independently. This closing, or structuring, strategy is being used in more and more industries, with an ever higher degree of intelligence.

The second landing path: divide and conquer. Some complex production processes are difficult to close all at once, but certain links can be carved out of the whole process and closed so that they meet the strong closedness criterion, while the remaining links keep the traditional production mode, with mobile robots connecting the links together. This strategy has been adopted by large companies such as Audi and is applicable to smaller companies as well.

The third landing path: quasi-closing. In the service industry and in human-machine collaboration there are many scenarios that cannot be fully closed. Here one can consider a “quasi-closing” strategy: fully close the parts of the application scenario where fatal errors could occur, and only semi-close the parts where errors are not fatal.

Take the transportation industry as an example. In a high-speed rail system the driving part is closed, while passengers’ activities are not required to be closed: they may move freely subject to the relevant regulations. For many service-industry scenarios, as long as failures are non-fatal the degree of closure can be relaxed, because under suitable conditions the people present can compensate for the shortcomings of the artificial intelligence system.

The strong closedness criterion therefore does not simply demand that a scene satisfy it in its natural form; it points out a target direction. Through scene cropping or scene transformation, as long as the cropped or transformed scene meets the strong closedness criterion, existing artificial intelligence technology can be applied in that scene to achieve industrial upgrading.

There are also many scenes that do not meet the strong closedness criterion (including those that cannot be made to meet it through scene cropping or transformation), and in these, intelligent technology is difficult to put to practical use. A typical example is open-domain human-machine dialogue. Since the set of questions in such dialogue is unbounded, it is impossible either to collect and label all representative question data or to write enough rules to cover the questions and their answers; hence open-domain human-machine dialogue cannot be fully realized with existing artificial intelligence technology.

It is particularly noteworthy that current artificial intelligence applications at home and abroad often fail to honor the strong closedness criterion: on the one hand, application scenarios are chosen that do not meet the criterion in their natural form; on the other, adequate scene cropping or scene transformation is not carried out. The reality of artificial intelligence application is therefore not optimistic.

Recently, foreign media have begun to notice the struggles of artificial intelligence startups, but they have only reported the phenomenon without analyzing its underlying causes. The observation of this article is direct: these projects fail to land smoothly not because existing artificial intelligence technology lacks application potential, but because they have not undergone sufficient scene cropping or scene transformation to ensure compliance with the strong closedness criterion.

Risk analysis of artificial intelligence

Artificial intelligence technology has both positive and negative effects: while benefiting humanity it also carries risks. In theory there are four types of risk: technology going out of control, technical misuse, application risk, and management error. Analyzing these risks from the perspective of the closedness criteria yields more realistic conclusions. The four risks are analyzed in turn below.

Risk 1: Technology out of control. Technology out of control means that the development of a technology outstrips humans’ ability to control it, or even that humans come under the technology’s control. This is the risk most people worry about most. The analysis above shows that existing artificial intelligence technology exerts its powerful capabilities only where the strong closedness criterion is met; in non-closed scenarios it falls far short of human capability, and most real-world scenes are non-closed.

At present, therefore, there is no risk of technology going out of control, and in the future such risk can still be avoided as long as the following three points are observed in accordance with the closedness criteria. First, in closing transformations, consider not only industrial or commercial needs but also the controllability of the transformed scene, and make this consideration not scene by scene but in batches, formulated and implemented through industry standards. Second, in developing new artificial intelligence technologies for non-closed scenarios, consider not only technical performance but also the new technology’s ethical risks and controllability. Third, in developing new artificial intelligence technologies for special needs, consider not only whether the special needs are met but also the new technology’s ethical risks and application conditions, and strictly control their practical application.

Risk 2: Technical misuse. Misuse of information technology includes data privacy violations, security problems, and fairness problems. Applying artificial intelligence can amplify the severity of these problems and may also produce new types of misuse. Under present conditions, artificial intelligence technology itself is neutral; whether misuse occurs depends entirely on how the technology is used.

The prevention of artificial intelligence misuse should therefore be placed on the agenda. It is worth noting that, by the closedness criteria, existing artificial intelligence technology is effective only in closed scenes, where there are, at least in theory, ways to deal with misuse; misuse should therefore be actively addressed rather than feared. Moreover, applying existing techniques such as automated verification can eliminate or mitigate certain misuse risks.

Risk 3: Application risk. Application risk refers to the possibility that the application of a technology produces negative social consequences. What people most worry about now is that large-scale application of artificial intelligence in certain industries will destroy many jobs. Since application risk arises from the application of technology, the key is to control the application. Under the strong closedness criterion, applying artificial intelligence technology in the real economy usually requires scene transformation, and scene transformation is entirely under human control: how much or how little is done depends on the relevant industrial decisions. Under strongly closed conditions, application risk is therefore controllable, which also means that industrial decision-making and the associated risk forecasting are the focus of application risk prevention.

Risk 4: Management error. Artificial intelligence is a new technology and its application a new undertaking; society lacks management experience and can easily fall into the dilemma in which clamping down kills the field while letting go breeds chaos. All the more reason, then, to understand the technical nature and conditions of artificial intelligence’s current achievements, so that regulatory measures are targeted and effective. The closedness criteria characterize the capability boundary of existing artificial intelligence technology and thereby provide a basis for formulating governance measures.

Likewise, when future artificial intelligence technology goes beyond the conditions of strong closedness, humans will need new criteria to grasp its nature (a “closedness criterion 2.0,” say). It should also be noted that artificial intelligence ethics is not simply a matter of risk management; a complete ethical system integrating the needs of both supervision and development must be built.

The above analysis shows that the closedness criteria help us form a more concrete, clear, and realistic understanding of the various risks. Three main conclusions are summarized below. First, there is no risk of technology going out of control in the short term; as for long-term risk, attention should be paid to new technologies applicable to non-closed scenarios, and the strong closedness criterion provides preliminary guidance for keeping such risks controllable. Second, technical misuse and management error are the main current sources of risk and deserve focused attention and strengthened research. Third, application risk has not yet appeared, and its future possibility, form, and countermeasures should be studied and judged in advance.

Conclusion

This article believes that there are three misunderstandings about artificial intelligence:

The first misunderstanding: artificial intelligence is omnipotent, so existing artificial intelligence technology can be applied unconditionally. By the strong closedness criterion, existing artificial intelligence technology is far from omnipotent, and its application is conditional. In industrial applications it is therefore urgent to deepen understanding of the strong closedness criterion, strengthen scene cropping and scene transformation, and avoid blind applications that violate the criterion. Such blindness is now widespread at home and abroad; it not only wastes resources but, more seriously, interferes with promising applications.

The second misunderstanding: existing artificial intelligence technology cannot be applied at scale, because it relies on manual labeling and is not intelligent. This article has pointed out that existing artificial intelligence technology is not limited to deep learning: the combination of brute force and training can avoid manual labeling, and in application scenarios that meet the strong closedness criterion, data collection and manual labeling can be carried out effectively. Some current applications are unsuccessful because they violate the strong closedness criterion, not because existing artificial intelligence technology cannot be applied. This misunderstanding tends to arise among those who have some understanding of artificial intelligence technology but not a sufficient one. Like the first, it will seriously hinder the progress of China’s artificial intelligence industry.

The third misunderstanding: within the next 20 to 30 years, artificial intelligence technology will pass some critical point, after which it will develop freely beyond human control. By the strong closedness criterion and the current state of artificial intelligence research worldwide, this “singularity theory” has no scientific basis within the technical domain. Some of the conditions contained in the closedness criterion, such as the semantic completeness of the model and the finite determinacy of representative data sets, can usually be met only with the help of the manual measures required by the strong closedness criterion. Imagining that these restrictions might be broken in the future is entirely different from artificial intelligence itself having the ability to break them; and even if some restriction were broken, new restrictions would arise to constrain it. Such claims tacitly assume that there can be artificial intelligence technology detached from specific conditions; whether such technology is possible has no scientific support at present and awaits future observation and research.

These three misunderstandings are the main ideological obstacles to the development of artificial intelligence in China. Grounded in the nature of existing artificial intelligence technology, the closedness and strong closedness criteria provide a basis for dispelling these misunderstandings, and offer a new perspective for observing, thinking about, and studying other problems in the development of artificial intelligence, helping to avoid repeating the field’s past “cyclical fluctuations.”
