
Editor's note: This article comes from the WeChat public account "NetEase Smart" (ID: smartman163). It is adapted from a LinkedIn article by David Foster and was compiled by NetEase Smart.

Review of AI in 2019: stable development or disillusionment?

For the field of artificial intelligence (AI), 2019 can be called an extremely busy year.

The pace of AI advances, and the headlines they generate, has filled our daily lives with moments of awe and pride. At times, though, it has also brought the nagging thought that our society is still not fully prepared for the arrival of the AI era. Was 2019 a year of significant progress for AI, or one of disillusionment? As researchers rapidly conquer benchmarks that were previously considered unattainable, can we say today that the field is on a stable track?

With the help of Applied Data Science Partners, we wanted to take a step back and put the AI activity of 2019 into perspective. Amid the spotlight, it is important to separate the initial appeal of a piece of work from its actual importance and its impact on the field.

For this reason, this article recounts the AI stories of 2019 side by side and attempts to tease apart their significance.

Next, let us review the development of the AI field in 2019:

1

The return of reinforcement learning

If we had to describe the development of AI in 2019 in a single sentence, it would likely be: "Reinforcement learning (RL) is back, and it looks like it is here to stay."

Most of us are probably familiar with supervised learning models by now: someone collects a large amount of training data, feeds it to a machine learning algorithm, lets it fit a model, and then uses that model to make predictions and classifications. Some of us may even be under the impression that AI is synonymous with supervised learning. In fact, it is just one of the many types of machine learning we have today.
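The supervised-learning loop described above can be sketched in a few lines. This is a minimal illustration, not any particular production system: the data, labels, and the choice of a 1-nearest-neighbour classifier (one of the simplest possible learners) are all invented for the example.

```python
# Supervised learning in miniature: labeled training data is collected,
# a model is "fit" (here the model is simply the stored examples), and
# new points are classified by their closest training example.

training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((4.0, 4.2), "dog"),
    ((3.8, 4.0), "dog"),
]

def predict(point):
    """Classify a point by the label of its nearest training example."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(training_data, key=lambda ex: dist2(point, ex[0]))
    return label

print(predict((1.1, 0.9)))  # closest to the "cat" examples
print(predict((4.1, 3.9)))  # closest to the "dog" examples
```

Real systems replace the stored-examples "model" with something that generalises (a neural network, a decision forest), but the collect-train-predict loop is the same.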

In reinforcement learning, agents learn by trial and error, judging their behavior through interaction with the environment. When multiple agents are involved, the setting is called a multi-agent reinforcement learning system.

This field has been around for decades, and conceptually it sounds more like a plausible learning mechanism than supervised learning does. However, it was not until 2015 that it regained traction, when British AI startup DeepMind used deep Q-learning, a combination of a classic reinforcement learning algorithm with deep neural networks, to create agents that could play Atari games. In 2018, the AI research organization OpenAI also established a foothold in the field by conquering Montezuma's Revenge, an Atari game considered particularly difficult.
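The trial-and-error loop behind Q-learning can be shown on a toy problem. The sketch below uses the tabular form of the algorithm (deep Q-learning replaces the table with a neural network); the 5-state corridor environment and all hyperparameter values are invented for illustration.

```python
import random

# Tabular Q-learning on an invented 5-state corridor: the agent starts at
# state 0 and earns a reward only upon reaching state 4.

random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)          # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma = 0.5, 0.9                  # learning rate, discount factor
epsilon = 0.5                            # high exploration suits this tiny problem

def step(state, action):
    """Environment dynamics: move, clipped to the corridor; reward at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: move the estimate toward reward + discounted
        # value of the best action available in the next state.
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# Extract the learned greedy policy for the non-terminal states.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy heads right toward the rewarding state from every position, which is exactly the behavior the agent discovered by trial and error rather than from labeled examples.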

Significant progress has been made in the past few months, and these efforts have revived the reinforcement learning research community's faith in the approach. In the past, reinforcement learning was considered too inefficient and too simplistic to solve complex problems, even games.

Another area that has made significant progress this year is natural language processing (NLP). Although researchers have worked in this area for decades, even a few years ago the text generated by NLP systems did not sound natural enough. Since the end of 2018, attention has shifted from word embeddings to pre-trained language models, a technique that natural language understanding borrowed from computer vision.

These models are trained in an unsupervised way, which enables contemporary systems to learn from the massive amounts of text available on the Internet. As a result, they become "knowledgeable" and develop an ability to understand context. Their performance on specific tasks can then be further improved with supervised learning. This practice of improving machine learning models by training them on different tasks falls under transfer learning, which is considered to have great potential.
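The pretrain-then-fine-tune pattern can be illustrated in miniature. The sketch below is a deliberately tiny stand-in, not how real language models work: the "unsupervised pre-training" step just learns IDF word weights from unlabeled text, and the "supervised" step trains a centroid classifier on a handful of labeled examples using those reused features. All data is invented.

```python
from collections import Counter
import math

# Stage 1 (unsupervised): learn reusable features from unlabeled text.
unlabeled = [
    "the movie was great and the acting was great",
    "the film was terrible and the plot was terrible",
    "a great story with great characters",
    "a terrible script and terrible pacing",
]

# Stage 2 (supervised): a small labeled set for the downstream task.
labeled = [
    ("great acting", "pos"),
    ("great story", "pos"),
    ("terrible plot", "neg"),
    ("terrible pacing", "neg"),
]

def build_idf(corpus):
    """Unsupervised step: inverse document frequencies from unlabeled text."""
    df = Counter()
    for doc in corpus:
        df.update(set(doc.split()))
    n = len(corpus)
    return {w: math.log(n / df[w]) + 1.0 for w in df}

def featurize(text, idf):
    """Represent text with the pre-trained IDF weights (transferred features)."""
    tf = Counter(text.split())
    return {w: c * idf.get(w, 0.0) for w, c in tf.items()}

def train_centroids(examples, idf):
    """Supervised step: average feature vector per class."""
    sums, counts = {}, Counter()
    for text, label in examples:
        counts[label] += 1
        acc = sums.setdefault(label, Counter())
        for w, v in featurize(text, idf).items():
            acc[w] += v
    return {lbl: {w: v / counts[lbl] for w, v in acc.items()}
            for lbl, acc in sums.items()}

def predict(text, centroids, idf):
    vec = featurize(text, idf)
    def dot(a, b):
        return sum(a[w] * b.get(w, 0.0) for w in a)
    return max(centroids, key=lambda lbl: dot(vec, centroids[lbl]))

idf = build_idf(unlabeled)                 # "pre-train" on unlabeled data
centroids = train_centroids(labeled, idf)  # "fine-tune" on the small labeled set
```

The point of the pattern is the same at any scale: the expensive, general knowledge comes from unlabeled data, so the supervised task needs far fewer labels.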

Natural language understanding has been gaining momentum since the end of 2018, when systems such as BERT (from Google), ELMo, and ULMFiT were launched, and their release has sparked discussion about the ethics of such systems.

2

Ideas come of age

This year also witnessed the maturing of several deep learning techniques. Applications of supervised learning, especially in computer vision, have spawned successful real-world products and systems.

Generative adversarial networks (GANs), in which a generator network tries to fool a discriminator network by learning to produce images that mimic the training data, have reached a level of near-perfection. Creating artificial but realistic images of people and objects is clearly no longer at the frontier of AI. In 2019, AI-generated art even moved beyond the hypothetical discussions of recent years and became part of today's museum exhibitions and auctions.
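The generator-versus-discriminator game can be sketched in miniature. The example below is a toy 1-D GAN with a linear generator and a logistic-regression discriminator, with hand-derived gradients; real GANs use deep networks on images, and every value here (the N(3, 1) target distribution, learning rate, step count) is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(3, 1). The generator must learn to mimic them.
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Generator g(z) = a*z + b, starting far from the real distribution.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.0, 0.0

lr, batch = 0.05, 32
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = real_batch(batch)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: ascent on log D(fake) (the "non-saturating" objective) --
    # it adjusts its samples so the discriminator calls them real.
    d_fake = sigmoid(w * fake + c)
    grad_fake = (1 - d_fake) * w
    a += lr * np.mean(grad_fake * z)
    b += lr * np.mean(grad_fake)

print(f"learned generator offset b = {b:.2f} (real mean is 3.0)")
```

Even in this toy, the adversarial pressure alone, with no direct access to the real samples' statistics, drags the generator's output distribution toward the real one.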

Computer vision technology has also been applied in areas of significant commercial and social interest, including self-driving cars and medicine. Adoption of AI algorithms in these areas is naturally slow, because they interact directly with human lives.

At least so far, these systems have not been fully autonomous, and their goal is to support and enhance the capabilities of human operators.

Research teams are working closely with hospitals to develop AI systems for early disease prediction and for organizing huge archives of health data; a notable example is the ongoing collaboration between DeepMind Health and University College London Hospitals (UCLH). However, most of these efforts are still experimental: to date, SubtlePet, a piece of software that uses deep learning to enhance medical images, is the only AI system approved by the FDA.

3

The sleeping giant

AutoML is a sub-field of machine learning. It has existed since the 1990s and attracted great interest in 2016, but it has somehow never made headlines, at least not on the scale of other AI trends. Perhaps this is due to its less flashy nature: AutoML aims to make machine learning practice more efficient by automating the decisions that today's data scientists make through manual, brute-force tuning.
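AutoML in miniature looks like this: automate one decision a data scientist would otherwise hand-tune, by searching candidates and scoring each on held-out validation data. The sketch below searches over the degree of a polynomial regression; the synthetic dataset and candidate range are invented for illustration, and real AutoML systems search far larger spaces (architectures, preprocessing, hyperparameters).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data generated from a degree-2 polynomial plus noise.
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)

# Split into training and validation sets.
idx = rng.permutation(x.size)
tr, va = idx[:40], idx[40:]

def val_error(degree):
    """Fit a model of the given complexity, score it on held-out data."""
    coeffs = np.polyfit(x[tr], y[tr], degree)
    pred = np.polyval(coeffs, x[va])
    return np.mean((pred - y[va]) ** 2)

# The "automated" decision: pick the complexity with the best validation score.
candidates = range(1, 9)
best_degree = min(candidates, key=val_error)
print("selected degree:", best_degree)
```

The human never chooses the model complexity; the search procedure does, using the same train/validate discipline a practitioner would apply by hand.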

Our understanding of this field has changed over the past three years, and today most large companies offer AutoML tools, including Google Cloud AutoML, Microsoft Azure, Amazon Web Services, and DataRobot. This year interest turned to evolutionary methods, with the Learning Evolutionary AI Framework (LEAF) becoming the state of the art. However, AutoML has not yet reached the maturity at which a fully automated AI system could perform better than a team of AI experts.

4

Worries about AI

Despite its great successes, this year the field of AI has also brought us many frustrating stories. A major problem is bias in machine learning models, an issue that did not come to the fore until 2018, when Amazon discovered gender bias in its automated recruitment system and COMPAS, a risk-assessment tool widely used in US courts, was also found to be biased with respect to gender and race.

The number of such examples increased significantly this year, and it is fair to say that the public and institutions are increasingly skeptical of existing AI systems for automated decision-making. A few examples:

—— A number of hospital algorithms were found to be biased against black patients in October;

—— The AI system used to issue British visas was accused of racial bias by a human rights organization in October;

—— Apple’s credit scoring system was blamed by customers in November for gender bias.

Bias is a particularly worrying issue because it lies at the core of supervised deep learning: when an algorithm is trained on biased data and its predictive model cannot be explained, we cannot really judge whether bias is present. So far, the research community's response has been to develop techniques for understanding the reasons behind deep models' decisions. But experts warn that many of these issues could be resolved if we adopted the right practices. Google Cloud's Model Cards are a recent attempt to move the community toward openly documenting models, with a clear description of their nature and limitations.
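One of the simplest ways such bias is detected in practice is to compare a model's positive-outcome rates across a sensitive attribute (the "demographic parity" check). The decisions below are an invented toy set, not data from any of the systems mentioned above.

```python
# A minimal bias audit: compare approval rates across two groups.
decisions = [  # (group, model_approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rate(group):
    """Fraction of positive decisions the model gave this group."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# A large gap signals that the model treats the groups very differently.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"demographic parity gap: {gap:.2f}")
```

A check like this flags disparate outcomes, but it cannot by itself say *why* the model behaves this way, which is exactly the explainability gap the paragraph above describes.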

Another worrying realization this year is that the more sophisticated a technology becomes, the more easily it can be abused. Deepfakes, a byproduct of GANs, use deep learning algorithms to create pictures or videos of real people in purely fictional scenes. It does not take much foresight to see how this technology can be used to spread disinformation, from political propaganda to bullying. This problem cannot be solved by scientists alone; history has proven that they are not good at predicting the impact of their findings on real life, let alone controlling it. That requires a dialogue across all sectors of society.

It is difficult to quantify the value of AI today, but one thing is certain: AI has left the realm of science fiction and avant-garde computer science, and now it is time to invest heavily in this area. Earlier this year, three pioneering deep learning researchers received the Turing Award, a long-awaited recognition of AI as an established field of computer science.