Sundar Pichai, who was appointed CEO of Alphabet, Google’s parent company, last December, recently published a column in the Financial Times calling for sensible rules and better regulation of artificial intelligence technology.

In his view, big companies cannot simply race ahead with cutting-edge technology and then let “market forces decide how it will be used.” On the contrary, companies have an equal responsibility to ensure that new technologies benefit everyone and are not arbitrarily abused in ways that harm society.

On the point that new technology is a double-edged sword, Pichai cited several historical examples. The internal combustion engine, for instance, expanded humanity’s reach across the globe, but it also caused many accidents.

Likewise, as the Internet spread, the cost of communicating information fell rapidly, but it also became much easier for rumors and misinformation to circulate.

“Artificial intelligence has the potential to improve the lives of billions of people, but the biggest risk is that we fail to do so. That is why regulation and legislation are still necessary.”

Pichai advocates a measured approach to supervising artificial intelligence, so that regulation does not hinder technological progress. New fields such as self-driving cars need “appropriate new rules,” while in health care, parts of the existing, mature regulatory framework can be applied to AI-assisted products.

This is not the first time Google has taken a public stand on how artificial intelligence should be used. As a technology giant advancing rapidly in AI, Google has repeatedly faced ethical criticism over its artificial intelligence projects.

Two years ago, Google set off a storm of public opinion over its military drone collaboration with the Pentagon. Pichai subsequently published a blog post laying out seven principles for artificial intelligence, emphasizing that Google would not use AI technology for weapons or mass surveillance.

But after this incident, many people began to question whether large companies can keep their commitment to “do no evil.” At the time, some argued that only government oversight could ensure that companies develop AI technology ethically.

It is not that Google has never considered policing itself: in 2019 it tried setting up its own oversight board to ensure that its AI projects complied with its own principles.

However, the body was met with suspicion from the start. One member on the list had been involved in developing military drones, and another had made anti-gay remarks, leading people to suspect that Google was deliberately balancing progressive and conservative voices. Google’s own employees, clearly, were not willing to accept this.

In the end, the committee lasted only nine days before Google announced its dissolution. Evidently, regulating artificial intelligence is not as easy as it might seem, and reaching consensus is harder still when different stakeholders are in the same room.