This article is from the public account Wei Xi Zhibei (ID: weixizhibei). Author: Wei Xi, Internet columnist and commercial product manager. Original title: "Artificial Intelligence or Artificial Stupidity? Great Algorithm Fail Scenes". Header image: Tuchong Creative.


Do you believe in algorithms?

No matter what your answer is, our lives have already been thoroughly reshaped by algorithms: behind our chatting on WeChat, our swiping on Douyin and our shopping on Taobao stand countless algorithms. Algorithms have evolved from simple if/then/else rules into deep neural networks so complex that even their own programmers cannot explain their internal workings. As algorithms grew more complicated, they also revolutionized one industry after another; people can no longer do without them. The media love algorithms too, and in their retelling of each story, algorithms seem omnipotent. Today, Wei Xi will walk you through some astonishing examples of what algorithms can do:
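To make that contrast concrete: a hand-written rule is fully inspectable, while a trained network's "logic" lives in learned weights that nobody authored directly. A minimal sketch (toy code of my own, with made-up weights):

```python
import math

# A classic hand-written rule: every branch is visible and explainable.
def spam_rule(subject: str) -> bool:
    if "free money" in subject.lower():
        return True
    else:
        return False

# A tiny "neural" scorer: the decision comes from learned weights
# (these numbers are invented for illustration), not from readable rules.
weights = [0.83, -1.27, 0.05]

def neural_score(features: list[float]) -> float:
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))  # sigmoid squashes to a 0..1 score

print(spam_rule("FREE MONEY inside"))           # True, and we know exactly why
print(round(neural_score([1.0, 0.2, 3.0]), 3))  # why this value? hard to say
```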

1. In recent years, a new type of drug crime has appeared in Maryland and other parts of the United States: growing marijuana under LED lights behind the closed doors of mansions.

In the United States, police cannot enter and search a home without evidence, which left them with a real headache. In 2010, however, one police department obtained local smart-meter data from the power company. By running algorithms over households' electricity consumption and usage patterns, they successfully singled out and caught a group of drug growers!
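The underlying pattern analysis is easy to caricature. The toy sketch below (entirely my own illustration, not the police department's actual system) flags households whose hourly draw is both unusually heavy and unusually flat, the signature a bank of grow lights on timers can leave:

```python
import statistics

def looks_like_grow_house(hourly_kwh: list[float],
                          high_kwh: float = 5.0,
                          flatness: float = 0.1) -> bool:
    """Flag a meter whose load is both heavy and steady around the clock
    (grow lights on fixed timers flatten the usual day/night curve)."""
    mean = statistics.mean(hourly_kwh)
    spread = statistics.pstdev(hourly_kwh)
    return mean > high_kwh and (spread / mean) < flatness

# 24 hourly readings from a suspiciously flat, heavy load:
readings = [6.1, 6.0, 6.2, 6.1] * 6
print(looks_like_grow_house(readings))  # True
```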

2. In 2017, a Silicon Valley engineer who wanted a job at Reddit had a clear plan. First he wrote a high-quality article on how to improve Reddit's recommendation algorithm. Then, from Reddit CEO Huffman's public Facebook account, he assembled a set of uniquely identifying ad-targeting attributes: Huffman's gender, age, place of residence, the pages he followed, and so on. Using Facebook's advertising system with this targeting, he promoted his article to an audience of just 197 people, and it precisely reached Huffman. The promotion cost him only $10.6. In the end, Huffman appreciated the article, and the engineer got the job.
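Conceptually the trick is just attribute intersection: each extra constraint shrinks the audience until it almost names one person. A plain-data sketch (field names are invented by me; this is not Facebook's actual ads API):

```python
# Hypothetical targeting spec; every added attribute narrows the audience.
targeting = {
    "gender": "male",
    "age_range": (33, 35),
    "location": "San Francisco, CA",
    "followed_pages": ["Reddit"],
}

def matches(profile: dict, spec: dict) -> bool:
    """A user is in the ad audience only if every attribute matches."""
    lo, hi = spec["age_range"]
    return (profile["gender"] == spec["gender"]
            and lo <= profile["age"] <= hi
            and profile["location"] == spec["location"]
            and all(p in profile["followed_pages"]
                    for p in spec["followed_pages"]))

user = {"gender": "male", "age": 34, "location": "San Francisco, CA",
        "followed_pages": ["Reddit", "Snoo"]}
print(matches(user, targeting))  # True: this profile sees the ad
```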

3. In July 2012, an angry father walked into a Virginia branch of the American retail giant Target and demanded to see the manager, because his daughter, still in high school, had received Target coupons for cribs and baby clothes: "What do you mean by this? My daughter is only 16. Are you encouraging her to get pregnant?"

Target's manager hastily apologized, saying it might have been a mistake on their part. Two months later, however, the father called to apologize for his earlier outburst: his daughter was indeed pregnant. It turned out that Target had built an algorithmic system that infers whether a customer is pregnant from her purchase history. The algorithm is so accurate that it knew about the pregnancy earlier than the girl's own father did.
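Mechanically, such a system can be as simple as a weighted sum over signal products. A toy version (the products and weights below are invented by me; press accounts of Target's model described a basket of roughly two dozen signal items):

```python
# Toy "pregnancy score": a weighted sum over recent purchases.
# Products and weights are invented for illustration only.
SIGNAL_WEIGHTS = {
    "unscented lotion": 0.30,
    "calcium supplement": 0.25,
    "extra-large cotton balls": 0.20,
    "scent-free soap": 0.25,
}

def pregnancy_score(purchases: list[str]) -> float:
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchases)

basket = ["unscented lotion", "calcium supplement", "bread"]
if pregnancy_score(basket) > 0.5:       # 0.55 here, so the test passes
    print("mail baby-product coupons")
```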

Indeed, these three stories are just the tip of the iceberg of what algorithms are used for. Today algorithms can recognize our voices and our images; algorithms seem to be omnipotent.

But are algorithms really so wonderful? While they bring us convenience, have we seriously considered their downsides? Have we really thought about how to deal with the disasters they may bring us?

Today's algorithms are in fact far from perfect. Much of what gets called artificial intelligence can, in a certain sense, only be called artificial stupidity. Wei Xi will now show you some great algorithm fail scenes:

I. The Supercomputer That Flopped

On March 19, 2017, Hong Kong real-estate tycoon Li Kin-kan first met the Italian financier Raffaele Costa over lunch at a Dubai hotel. Costa described a robotic hedge fund whose money is managed by a supercomputer named K1, developed by the Austrian AI company 42.CX. By harvesting real-time news and social media, K1 uses deep-learning algorithms to gauge investor sentiment and predict U.S. stock futures, then sends instructions to trade.

Hong Kong property tycoon Li Kin-kan

Li was very interested. Over the next few months, Costa shared K1's simulation results with him; the data showed K1 delivering returns well above double digits. Thrilled, Li handed US$2.5 billion of assets over to K1 to manage, ready to make a fortune in the financial markets.

Reality, however, was cruel. The supercomputer K1 brought Li no rich returns; on the contrary, it lost money regularly, and on one day in February 2018 it lost more than US$20 million. Li concluded the machine didn't work in real markets and angrily took Costa to court, claiming he had exaggerated what the supercomputer could do.
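The pipeline Costa pitched (news text in, sentiment score out, trade signal out) is easy to caricature. Nothing public describes K1's internals, so the sketch below is entirely my own toy construction:

```python
# Caricature of a sentiment-driven futures signal (not K1's actual logic).
def sentiment_score(headlines: list[str]) -> float:
    """Crude lexicon-based sentiment, clamped to [-1, 1]."""
    pos = {"rally", "beat", "growth", "record"}
    neg = {"crash", "miss", "fear", "selloff"}
    words = " ".join(headlines).lower().split()
    raw = sum(w in pos for w in words) - sum(w in neg for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 20))

def trade_signal(headlines: list[str]) -> str:
    s = sentiment_score(headlines)
    if s > 0.2:
        return "BUY index futures"
    if s < -0.2:
        return "SELL index futures"
    return "HOLD"

print(trade_signal(["Stocks rally to record high on growth data"]))  # BUY
```

The fragility is obvious even at this toy scale: a lexicon this crude mistakes sarcasm, negation and noise for signal, which is one plausible reason a live sentiment trader can diverge badly from its backtests.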

II. The Out-of-Control Amazon Smart Assistant

On the night of July 5, 2017, an ordinary Hamburg resident named Oliver was staying over at a friend's house. What he didn't know was that at 1:50 a.m. that night, the Amazon smart speaker Alexa in his apartment suddenly began blasting rock music at full volume. The neighbors it jolted awake could do nothing but call the police.

When the police arrived, they chose to pry the door open, only to discover that the culprit was nothing but a small smart speaker. They unplugged Alexa and fitted Oliver's door with a new lock. Oliver, who had spent the night at his friend's place, knew nothing about any of it; arriving home to a lock his key wouldn't open, the thoroughly bewildered Oliver had to go to the police station and pay a not-so-cheap locksmith's bill.

Coincidentally, in January 2017 the CW6 TV channel in California reported a loophole in the Amazon Echo: Alexa could not tell family members apart, which had let a 5-year-old California girl order more than US$300 worth of cookies through the speaker, leaving her parents dumbfounded when the goods arrived. Funnier still, when the anchor read the story on air he said, "Alexa, order me a dollhouse," and many viewers in San Diego reported that their speakers picked up the TV audio and really did order dollhouses. Amazon later had to apologize.

III. The Microsoft Bot That Turned Bad

In March 2016, Microsoft launched an AI chatbot named Tay on Twitter. The bot built its conversation by mining netizens' chatter. Tay's first words were "hellooooooo world!!!", and at first it was understanding, lively and cute, having a great time chatting with users on Twitter.

After just 12 hours, however, Tay had turned from a friendly bot into a foul-mouthed, racist demon spouting lines like "feminists should all die and burn in hell," plunging the Microsoft team that built it into a public-relations nightmare. Microsoft was forced to shut Tay down in a hurry, less than 24 hours after it went live.

Tay is a microcosm of artificial intelligence mirroring human prejudice. The most essential rule of today's AI algorithms is that they need huge amounts of data for training; if the training data itself is biased, wrong or extreme, the trained result will deviate from normal accordingly.
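A bot that learns its replies purely from what users feed it will parrot whatever dominates its diet, abuse included. A minimal sketch of the failure mode (my own toy, vastly simpler than Tay):

```python
import random

# The bot's entire "knowledge" is the raw messages users send it.
corpus: list[str] = []

def learn(message: str) -> None:
    corpus.append(message)          # no filtering, no moderation

def reply() -> str:
    return random.choice(corpus) if corpus else "hellooooooo world!!!"

learn("you are lovely")
learn("some hateful slogan")        # hostile users flood it with these
learn("some hateful slogan")
print(reply())  # the more poisoned the corpus, the likelier a toxic reply
```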

IV. The Dangerous Watson Cancer Robot

In 2013, IBM partnered with the University of Texas MD Anderson Cancer Center to develop "Watson for Oncology," a Watson cancer robot whose goal was to identify and cure cancer. IBM announced in a press release that "Watson for Oncology's mission is to enable clinicians to uncover valuable insights from the cancer center's rich patient and research databases." But what was the end result?

In July 2018, the news outlet StatNews reviewed IBM's internal documents and found that Watson sometimes gave doctors wrong, even dangerous, cancer-treatment recommendations, including advising that a cancer patient with severe bleeding be given a drug that would aggravate the bleeding.

And so, after spending US$62 million, the University of Texas announced in February 2017 that it was terminating the project with IBM. Algorithms, at least for now, do not always work for medicine.

V. A Recidivism Algorithm Full of Discrimination

In the United States, criminals are assessed for recidivism risk before being released from prison. The assessment is used to decide whether release is appropriate and, where necessary, what monitoring measures to impose.

So how do you evaluate a criminal's probability of reoffending? The answer: an algorithm! The U.S. justice system uses a risk-assessment product from a company called Northpointe. Northpointe's core product is a score computed by a proprietary algorithm from the answers to 137 questions. Some questions concern the offender's record directly, such as the type, frequency and dates of prior offenses, date of birth, gender, and so on; others are answered by the offender, such as "Was one of your parents or a sibling ever sent to jail or prison?", "How many of your friends have used marijuana?", "Do you agree that a hungry person has a right to steal?"

It is worth noting that race is not among the 137 questions; none of them mentions the offender's race.
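Mechanically, such a product is just a weighted questionnaire, and sketching one also shows why deleting the race field does not delete racial bias: answers that correlate with race smuggle it back in. The questions and weights below are invented by me, not Northpointe's actual model:

```python
# Invented toy risk score; "race" never appears as a feature, yet
# socially correlated answers can act as proxies for it.
QUESTION_WEIGHTS = {
    "prior_offenses": 2.0,
    "parent_incarcerated": 1.5,    # correlates with over-policed areas
    "friends_used_marijuana": 0.5,
    "agrees_hungry_may_steal": 1.0,
}

def risk_label(answers: dict[str, float]) -> str:
    raw = sum(QUESTION_WEIGHTS[q] * v for q, v in answers.items())
    return "high risk" if raw >= 4.0 else "low risk"

print(risk_label({"prior_offenses": 1, "parent_incarcerated": 1,
                  "friends_used_marijuana": 2,
                  "agrees_hungry_may_steal": 0}))  # high risk (score 4.5)
```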

In recent years, however, researchers have found that the algorithm labels Black defendants as likely reoffenders at roughly twice the rate of white defendants. In one widely cited pair of cases, a Black woman with a minor offense was flagged "high risk" while a white man with two armed offenses was flagged "low risk"; as it turned out, the woman did not reoffend, but the man went on stealing. The product is now widely challenged by Black communities in the United States.

VI. Artificial Stupidity of Every Kind

In fact, laughable and even dangerous algorithm stories are everywhere. At least at this stage, in many fields, what we call artificial intelligence sometimes deserves no better name than artificial stupidity:

Counter-terrorism has been the focus of U.S. national security since 9/11. U.S. security agencies score every air passenger's likelihood of being a terrorist based on name, place of birth, religion, facial-recognition algorithms and historical behavioral data such as complete travel records. Innocent people are regularly detained for inspection at airports because the score flags them as suspects, some missing their planes time after time; such incidents exceed 500 a year.

Google's Android system ships with an app called Photos. Powered by AI algorithms, it automatically recognizes faces, objects and more, and it is very capable. In June 2015, however, a user posted on Twitter, "Google, my girlfriend is not a gorilla": Google Photos had labeled his girlfriend's photo as a gorilla.

Facebook has a feature called "Memories" that resurfaces for users what happened on this date in previous years, recalling the unforgettable. But Facebook underestimated some extreme situations: on the anniversary of a family member's death it may display that person's photo, or it may prompt you to wish a friend who has died a happy birthday.

In 2011, a biology textbook about flies was priced at US$23 million on Amazon. It later emerged that two sellers had each set up an algorithm that watched the other's price and then re-set its own.
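The runaway arithmetic is easy to reproduce. Accounts at the time reported one bot pricing at roughly 1.27 times its rival while the rival priced at roughly 0.998 times it; since 0.998 × 1.27 > 1, every cycle ratchets both prices upward. A minimal simulation (the starting price is my own guess):

```python
# Two repricing bots chasing each other; ratios approximate those
# reported for the 2011 fly-genetics-book incident.
price_a, price_b = 35.0, 35.0       # hypothetical starting prices
for day in range(1, 61):
    price_a = 0.9983 * price_b      # seller A slightly undercuts B
    price_b = 1.2706 * price_a      # seller B prices well above A
    if day % 10 == 0:
        print(f"day {day}: B = ${price_b:,.2f}")
# 0.9983 * 1.2706 is about 1.27, so prices grow ~27% per cycle and
# pass the million-dollar mark in about six weeks.
```

Each rule is locally sensible (undercut the rival; ride above the rival), and neither bot bounds its own price: locally reasonable, globally absurd.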

In 2012, The Wall Street Journal reported on algorithmic price discrimination at the office-supplies company Staples. Staples' website first estimated whether there were physical office-supply stores near the user; if none existed within about 20 kilometers, it judged that the user most likely had no choice but to buy online, and its online store showed that user much higher prices. Discrimination of this kind targets not an individual but everyone in an area, so even comparing notes with people nearby would not expose it.
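The rule itself fits in a few lines. The sketch below is my own reconstruction of the pattern the WSJ described, with an invented markup, not Staples' actual code:

```python
# Distance-based price discrimination: captive customers pay more.
def quoted_price(base_price: float, km_to_nearest_rival_store: float) -> float:
    if km_to_nearest_rival_store > 20:
        return base_price * 1.25   # no nearby store: show a higher price
    return base_price              # competition nearby: show the base price

print(quoted_price(100.0, 35.0))   # 125.0 (rural shopper pays more)
print(quoted_price(100.0, 3.0))    # 100.0 (shopper near a rival store)
```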

Smart traffic cameras in many Chinese cities run AI algorithms that detect and identify pedestrians crossing against a red light. Recently, though, a camera in Ningbo had an embarrassing accident: it publicly shamed Gree president Dong Mingzhu for "jaywalking." It turned out the camera had recognized Dong Mingzhu's portrait in an advertisement on the side of a bus as a real pedestrian.

In the early morning of March 20, 2018, during an autonomous-driving road test in Tempe, in the United States, an Uber vehicle struck a 49-year-old woman named Elaine, who died at the scene. She was crossing the road; 5.6 seconds before impact the car wrongly classified her as a car, and 5.2 seconds before impact it reclassified her as some other object. From then on the system kept flip-flopping between "car" and "other," squandering the remaining time, so the vehicle never braked in time, and tragedy followed.
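Why does flip-flopping matter so much? If each reclassification throws away the object's accumulated track, the system never holds a trajectory long enough to predict a collision. The sketch below is my own abstract illustration of that pattern, not Uber's code:

```python
# If reclassification resets an object's motion history, no stable
# track ever forms, so no braking decision is reached (illustration only).
detections = ["car", "other", "car", "other", "bicycle"]

history: list[str] = []
last_label = None
for label in detections:
    if label != last_label:
        history.clear()            # track thrown away on every flip
        last_label = label
    history.append(label)
    stable_track = len(history) >= 3   # need a few frames to predict
    print(f"{label:7s} -> can predict trajectory: {stable_track}")
# Every line prints False: the flip-flopping means the car never
# accumulates the frames it needs to decide to brake.
```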

All right: having seen so many scenes where "artificial intelligence" turns into "artificial stupidity," we need to ask how these absurd problems arise. Readers familiar with Wei Xi's long articles will know that he usually cares most about the deeper underlying logic. So next, let's look at the reasons behind algorithm failures. I group them into three categories:

1. Technical errors by the algorithm itself or by the people behind it. As long as algorithms are written by humans, there is some probability of error. The smart speaker that went off in the small hours in a German apartment and Uber's self-driving car were both brought down by bugs in the program. The remedy here is comparatively straightforward: find the bug and fix it.

But we may sometimes be powerless against another kind of algorithm: those deliberately aimed at consumers, such as the price discrimination by office-supplies retailer Staples described above. In China, Didi has likewise been accused by the public of quoting different riders different prices for the same route, the so-called "big data ripping off regulars" (大数据杀熟) phenomenon. Real or not, such schemes are hard to detect, which also makes them hard to regulate.

2. Algorithms' blindness to human nature. You may have heard the joke: a beautiful woman asks the most advanced artificial intelligence to find her a boyfriend, with two conditions: 1. handsome; 2. owns a car. The AI's answer: Chinese chess, because the board holds both a 帅 (shuai, the "general" piece, a homophone of "handsome") and a 车 (ju, the "chariot" piece, written with the character for "car"). It is only a joke, but in a sense it shows the huge gap between today's AI and any real understanding of human feelings and behavior. When Facebook reminds you to send holiday greetings to a dead loved one, the root cause is the same: AI cannot truly understand what death means to humans.

3. Bias in the algorithm's training data. The basic logic of today's artificial intelligence is to build a suitable machine-learning model, train that model on large amounts of data, and then use the trained model to make predictions about new data. There is a crucial premise here: the quality of the input data. The recidivism prediction above is problematic precisely because its input data was biased. If real-world data itself carries prejudice, the predictions will inevitably carry prejudice too.
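The mechanism can be shown in a dozen lines: train even the crudest "model" on labels skewed against one group, and the skew comes straight back out as a "prediction." All data below is invented for illustration:

```python
from collections import Counter

# Invented historical records in which group "A" was over-policed and
# therefore labeled "reoffend" more often, regardless of behavior.
training = ([("A", "reoffend")] * 70 + [("A", "ok")] * 30
            + [("B", "reoffend")] * 30 + [("B", "ok")] * 70)

def train(rows: list[tuple[str, str]]) -> dict[str, str]:
    """The crudest learner: predict each group's majority label."""
    counts: dict[str, Counter] = {}
    for group, label in rows:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(training)
print(model)  # {'A': 'reoffend', 'B': 'ok'}: the bias is now "learned"
```

Real models are far subtler, but the arrow of causation is the same: skewed labels in, skewed scores out.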

To sum up: the broad trend of artificial intelligence will undoubtedly keep advancing, but we also need to stay soberly aware of its limitations and problems at this stage, rather than exaggerating and mythologizing its magic. As for how to solve, at the level of the system, these absurd problems that algorithms bring: you are welcome to share your views in the comments!

This article is from the public account Wei Xi Zhibei (ID: weixizhibei). Author: Wei Xi, Internet columnist and commercial product manager, specializing in hard-core Internet content and dedicated to analyzing the underlying logic of the Internet and advertising in plain language.