The new investigation exposed more details: Uber’s self-driving system harbored many hidden dangers.

Editor’s note: This article is from the WeChat public account “AI Frontline” (ID: ai-front), author Chen Si.

AI Frontline review: In 2018, an Uber autonomous vehicle was involved in the world’s first fatal self-driving car accident: a woman was struck and killed while crossing the road. The incident immediately drew global attention to driverless cars, and to their safety in particular, and Uber had to suspend its driverless testing worldwide. More than a year later, as more details have been made public, the cause of the fatal Uber crash has come to look more complicated.

First, let’s briefly review the case.

At around 10 pm on March 18, 2018, a woman in Tempe, Arizona, was struck by an Uber self-driving car and died of her injuries.

The local police department reported that at the time of the incident, although a driver (safety operator) was sitting behind the wheel, the car was in autonomous mode, traveling at 38 miles per hour in a 35-mile-per-hour zone, with no attempt to brake. The car was heading north, and the woman was crossing from west to east about 100 yards (roughly 91 meters) from the crosswalk.

Sylvia Moir, chief of the Tempe Police Department in Arizona, said initial investigations suggested that Uber might not be at fault in the traffic accident that killed the pedestrian. She said video from the self-driving car’s camera showed that the fault lay with the victim herself, 49-year-old Elaine Herzberg, not with Uber: “Given the way the victim came across the road, it would have been extremely difficult to avoid this collision in any mode, human or autonomous.”

However, the official document released on Tuesday by the US National Transportation Safety Board (NTSB) suggests that the investigation reached a different conclusion.

Uber’s self-driving car exposed a gaping hole: it could not correctly identify pedestrians crossing outside a crosswalk

On Tuesday, the National Transportation Safety Board released a document from its 20-month investigation into the Uber self-driving accident. The agency will issue a final report on the incident within two weeks.

This 40-page document provides an in-depth look at the details of the case.

The document indicates that the most obvious error lay in the software: Uber had not trained its driverless cars to identify pedestrians outside a crosswalk.

In addition, Uber had chosen to turn off the emergency braking system built into the test vehicle; Volvo, the vehicle’s manufacturer, later concluded that its braking system could have significantly reduced the speed at which the car struck the victim.

Note: Some experts say that, technically, the decision to turn off Volvo’s built-in system while Uber’s software is running makes sense, because giving one car two software “masters” is not safe.

This largely explains why, even though the car detected the victim with enough time to stop, it struck her at 38 miles per hour.
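
To make the “two masters” concern concrete, here is a minimal sketch of an actuator arbiter that grants brake authority to exactly one controller at a time. This is an illustrative assumption about how such arbitration could look, not Uber’s or Volvo’s actual architecture; `BrakeArbiter`, `ControllerId`, and the command values are all hypothetical.

```python
from enum import Enum

class ControllerId(Enum):
    # Hypothetical identifiers for the two software "masters".
    ADS = "self_driving_stack"   # the autonomous system under test
    OEM_AEB = "built_in_aeb"     # the vehicle's built-in emergency braking

class BrakeArbiter:
    """Grants brake authority to exactly one controller at a time.

    Illustrative sketch only: it shows why engineers avoid letting two
    independent systems command the same actuator, since conflicting
    commands (e.g. "brake hard" vs. "coast") have no safe merge rule.
    """
    def __init__(self, active: ControllerId):
        self.active = active

    def command_brake(self, source: ControllerId, pressure: float) -> bool:
        """Apply brake pressure only if `source` holds authority."""
        if source is not self.active:
            # The inactive controller is ignored -- this is the trade-off:
            # disabling the OEM system removes a conflicting "master",
            # but also removes its independent safety net.
            return False
        print(f"braking at {pressure:.0%} on behalf of {source.value}")
        return True

arbiter = BrakeArbiter(active=ControllerId.ADS)
arbiter.command_brake(ControllerId.OEM_AEB, 1.0)  # ignored
arbiter.command_brake(ControllerId.ADS, 0.6)      # applied
```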

The investigation further revealed that when the car first detected the victim, 5.6 seconds before impact, it classified her as a vehicle, then as “other”, and finally as a “bicycle”.


The NTSB report shows that each time the vehicle assigned an object a new classification, it restarted its prediction of the object’s path, discarding the tracking history built up so far. As a result, only 1.2 seconds before impact did the system realize that it was about to hit the pedestrian and that emergency braking was needed. This triggered Uber’s emergency handling mechanism: the system withholds braking for one second while it confirms that a hazard has been detected, a window during which the safety operator (Uber’s most important and last line of defense) is expected to take control of the car and brake.
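
A minimal sketch of the failure mode just described, assuming a tracker that keeps per-object position history and infers motion from it; the class and method names are hypothetical, not Uber’s actual perception code.

```python
from typing import Optional

class TrackedObject:
    """Toy tracker illustrating the reported flaw: each reclassification
    restarts path prediction, discarding the observation history, so the
    object's crossing motion can never be inferred."""

    def __init__(self, label: str, position: float):
        self.label = label
        self.history = [position]  # lateral positions in meters

    def observe(self, label: str, position: float) -> None:
        if label != self.label:
            # Reported behavior: a new classification restarts the
            # prediction of the object's path from scratch.
            self.label = label
            self.history = []
        self.history.append(position)

    def predicted_velocity(self) -> Optional[float]:
        """Crossing-speed estimate; needs at least two observations."""
        if len(self.history) < 2:
            return None  # no history, so no idea where the object is going
        return self.history[-1] - self.history[-2]  # meters per cycle

track = TrackedObject("vehicle", 0.0)   # first detected, 5.6 s before impact
track.observe("other", 0.5)             # reclassified: history wiped
track.observe("bicycle", 1.0)           # reclassified again: wiped again
print(track.predicted_velocity())       # None -- the motion stays invisible
```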

Unfortunately, the safety operator was not watching the road when the hazard arose. The car sounded an audible alert only 0.2 seconds before impact; the operator grabbed the steering wheel and took over from the automated system, but it was too late, and the fatal collision occurred.

An Uber spokesperson said in a statement that the company “regrets the 2018 accident” and stressed that it has since changed its safety program. According to documents Uber submitted to the NTSB as part of the 20-month investigation, the company has overhauled its safety-driver training and now puts two safety operators in every vehicle. It has also restructured its safety team and created a system through which staff can report safety issues anonymously.

The NTSB will hold a hearing on the accident in Washington, DC on November 19, at which investigators will issue a comprehensive report detailing what happened and which parties were responsible. Investigators will also offer recommendations to federal regulators and to companies such as Uber on how to build this technology so as to prevent similar incidents.

High-risk vulnerabilities exist in several companies’ self-driving systems

A recent report from the University of Illinois’s engineering school also revealed safety hazards in several driverless systems, including self-driving stacks from companies such as Baidu and Nvidia.

The team analyzed all safety reports filed between 2014 and 2017, covering 144 driverless cars with a cumulative mileage of 1,116,605 miles. They found that, over the same mileage, a human-driven car’s probability of having an accident is only one-quarter that of a driverless car. This suggests that driverless technology still cannot handle emergencies quickly enough, and that human drivers often need to take over promptly to prevent an accident.
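
As a small sketch of the per-mileage comparison above: raw accident counts only become comparable once normalized by miles driven. The accident counts and the human-fleet mileage below are hypothetical placeholders chosen to reproduce the roughly four-to-one ratio; only the 1,116,605-mile figure comes from the text.

```python
# Hypothetical illustration of per-mile accident-rate normalization.
# The AV mileage is from the article; every other number is made up
# solely to reproduce the roughly 4x ratio the study reports.
av_miles = 1_116_605
av_accidents = 40                      # hypothetical
human_miles = 100_000_000              # hypothetical fleet mileage
human_accidents = 896                  # hypothetical

av_rate = av_accidents / av_miles
human_rate = human_accidents / human_miles

print(f"AV accidents per mile:    {av_rate:.2e}")
print(f"Human accidents per mile: {human_rate:.2e}")
print(f"human rate / AV rate:     {human_rate / av_rate:.2f}")  # ~0.25
```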

In trying to improve safety, researchers and the companies involved have found it difficult to pre-train the software to handle problems the driverless system has never encountered.

In addition, some faults in the software and hardware stack cause safety problems only in specific driving scenarios. In other words, testing driverless cars on highways or in open, well-supported neighborhoods may not be enough, because these software and hardware faults can still cause trouble under other conditions.

When testing an openly available AV stack (from the Baidu Apollo project), the team found more than 500 cases in which the software failed to handle the situation and could have caused an accident.
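
The paragraph above describes finding failure cases by probing an AV stack. Below is a minimal sketch of the general technique, software fault injection, applied to a toy perception-to-braking pipeline. It is not the team’s actual tooling; `perceive_distance`, `plan_brake`, and the fault model are all hypothetical.

```python
import random

def perceive_distance(true_distance_m: float) -> float:
    """Toy perception stage: returns the measured obstacle distance."""
    return true_distance_m

def plan_brake(measured_m: float, speed_mps: float) -> bool:
    """Toy planner: brake if the stopping distance crowds the measured gap."""
    stopping_m = speed_mps ** 2 / (2 * 6.0)  # ~6 m/s^2 hypothetical decel
    return measured_m <= stopping_m + 5.0    # 5 m hypothetical safety margin

def inject_fault(measured_m: float) -> float:
    """Fault model: corrupt the perception output (e.g. a scaled sensor
    reading). Real studies use far richer, guided fault models."""
    return measured_m * random.uniform(1.0, 3.0)  # overestimates the gap

# Scan scenarios; count injected faults that suppress a needed brake command.
random.seed(0)
unsafe = 0
trials = 1000
for _ in range(trials):
    true_m = random.uniform(5.0, 60.0)
    speed = random.uniform(10.0, 25.0)
    should_brake = plan_brake(perceive_distance(true_m), speed)
    faulty_brake = plan_brake(inject_fault(true_m), speed)
    if should_brake and not faulty_brake:
        unsafe += 1  # the fault masked a hazard in this scenario
print(f"{unsafe}/{trials} injected faults produced a potentially unsafe miss")
```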

When will autonomous driving arrive?

Steve Keckler, vice president of architecture research at Nvidia, once said: “The safety of driverless cars determines whether they can succeed in the market and in society as a whole.”

It is precisely because these safety vulnerabilities cannot be overcome in the short term that the timeline for fully automated driving keeps slipping. Even Apple co-founder Steve Wozniak has sighed that Level 5 autonomous driving has to be given up for now.

Still, there is news to encourage the industry. Many technology companies have joined the fray, and test zones in China and abroad have launched Robotaxi self-driving taxi services. Established carmakers are investing in autonomous driving as well: Volkswagen recently announced the establishment of an autonomous-driving subsidiary. What new energy these moves will inject into the industry remains to be seen, but the positive attitude on all sides points to good momentum.

However, please do not forget the lesson of a year ago. Driverless cars are not toys; passengers and pedestrians are living people, and there is no room for sloppiness. Only by solving the safety problems can more people come to trust driverless cars and choose them. The “old drivers” of the autonomous driving industry, especially in China, still have to work harder.