Tempe’s Self-Driving Accident: Should Performance Data Be Shared?

The recent fatal Uber self-driving vehicle accident in Arizona shows the technology still needs to improve. Should all players in the market be sharing performance data?

There have been dozens of articles about who is to blame for the Uber crash, which involved a pedestrian walking her bicycle and an autonomous Uber vehicle carrying an emergency backup driver whose job was to take over whenever the car made a mistake.

What went wrong? Did the program “see” the bicycle and assume the pedestrian was riding it, and therefore moving faster than she actually was? If the backup driver had been paying closer attention, would she have spotted the pedestrian in time? While we should mourn the people lost to immature technology, we can take some comfort in the fact that the data from these incidents exists and can be fed back into the vehicles’ software so the same failure never happens again.

The various car companies are building different software stacks, and it would be best if they collaborated and shared information about their automated driving heuristics. This would not just make the roads safer for driverless vehicles; it would also improve the public’s trust in self-driving cars. The underlying reason is simple: humans can learn from errors, but they cannot guarantee they will not make the same mistake again.

Many like to quote that roughly 90% of all traffic accidents are due to human error: no driver can be constantly aware of every possible danger or guarantee an appropriate response in an emergency. By contrast, a computer will record a previous mistake, “learn” the proper countermeasures, and not make the same mistake again. The computer may have a glitch or be compromised by a virus, but otherwise it should theoretically never make the same error twice.

In self-driving technologies, who is liable?

The legal system will look at this tragic incident to determine legal liability, but industry and academia need to look at it to see how to prevent it from happening again. The LIDAR mounted on the Uber should have detected the pedestrian from far away, even though she was walking a bicycle in the dark and outside a crosswalk, and the vehicle should have reacted accordingly.

Did the Uber fail to stop because the LIDAR could not pick up the pedestrian? Did the vehicle believe she was riding the bicycle rather than walking alongside it, fooling it into thinking the “cyclist” would move out of the way in time? Or did it notice the pedestrian but fail to stop the vehicle quickly enough? If it was a sensor or perception issue, the case resembles the 2016 Tesla incident, in which a Tesla crashed into a white truck because the system failed to distinguish the trailer from the bright sky behind it. In that case, we have to improve the vehicle’s ability to understand the situation and react accordingly. If the problem was reaction time, we need to make sure vehicles respond more quickly in the future.
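To make those failure modes concrete, here is a minimal, purely hypothetical sketch in Python of the kind of decision step an autonomous vehicle has to get right. The class, function, field names, and thresholds are all invented for illustration; they do not describe Uber’s or anyone else’s actual software.

from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration only: a toy braking decision that exposes the three
# failure points discussed above: a missed detection, a misclassified track,
# and a braking decision made too late. All names and numbers are invented.

@dataclass
class Track:
    object_type: str           # e.g. "pedestrian", "cyclist", "unknown"
    distance_m: float          # distance ahead of the vehicle, in meters
    crossing_speed_mps: float  # lateral speed toward the vehicle's path

def should_brake(track: Optional[Track], ego_speed_mps: float,
                 reaction_time_s: float = 0.5, max_decel_mps2: float = 6.0) -> bool:
    """Return True if the vehicle should begin emergency braking now."""
    if track is None:
        # Failure mode 1: the sensors never produced a track at all.
        return False

    # Failure mode 2: a track misclassified as a fast-moving cyclist may be
    # assumed to clear the lane, so no braking is triggered.
    assumed_clears_lane = (track.object_type == "cyclist"
                           and track.crossing_speed_mps > 1.5)

    # Failure mode 3: even a correct detection is useless if braking begins
    # after the required stopping distance has already been consumed.
    stopping_distance_m = (ego_speed_mps * reaction_time_s
                           + ego_speed_mps ** 2 / (2 * max_decel_mps2))

    return (not assumed_clears_lane) and track.distance_m <= stopping_distance_m * 1.2

A real system weighs far more than this, but even the toy version shows why each question above matters: a missing track, a wrong class label, or a late decision all produce the same outcome, a vehicle that does not brake.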

Vehicles do not have to repeat the same mistakes. The family of the victim of the 2018 Tesla crash claims the vehicle’s Autopilot mode had repeatedly steered the car too close to a divider on the road. One day it drove too close and crashed, killing the victim in the fire that consumed the front of his vehicle. Tesla said its vehicles had driven past the same spot about 85,000 times since Autopilot was introduced in 2015, and that Teslas in Autopilot mode were successfully driving through that location roughly 200 times a day.

The company claims the program failed this time because a safety barrier at that location had already collapsed in an earlier accident, and Autopilot’s failure to account for the changed environment led to the crash. If Tesla is correct and the vehicle did not understand that the barrier had collapsed, then the program needs to be changed so it can take new environmental data into account. The driver had reportedly complained about Autopilot to a Tesla dealer, suggesting he knew something was amiss.

Tesla says the driver had not put his hands on the wheel for six seconds and had “about five seconds and 150 meters of unobstructed view” of the lane divider before the incident occurred. Ultimately, autonomous cars will have to be consistently safe and not rely on drivers to “save” them from their mistakes. It is imperative to understand how the car can correctly assess a situation so that it reacts safely.

Sharing data on how this young technology “learns”

Uber and Tesla now have the opportunity to fix their problems because they know what their cars saw and how they reacted. That ability to prevent a recurrence is likely the reason Google/Waymo has not repeated its 2016 accident. Once Uber and Tesla have figured out how to prevent their recent accidents from recurring, they should share that information with other car manufacturers (OEMs) and technology companies, so these kinds of accidents are not repeated across the industry.

So if autonomous vehicles are not allowed to repeat mistakes, particularly mistakes that lead to personal harm, then all autonomous vehicles, not just those that share a manufacturer or an operating system, must learn from each other. Companies usually develop new technology in secret from one another. Automotive companies have traditionally competed on issues like safety, and right now many automotive and technology companies are developing their safety software and hardware separately from each other.

They are doing it this way so they can claim their cars are safer than their competitors’. While the free market is a wonderful mechanism for improving technology, there is economic value in socializing the data and ensuring that as many parties as possible have access to the accident data and the security fixes. Otherwise, there is a risk of multiple vehicles on the road running underdeveloped autonomous driving programs, programs that will repeat the original errors. The average consumer may forgive autonomous vehicles in general if only one or two brands make errors, but if errors appear to be pervasive, the entire autonomous project will be in jeopardy.
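What might “socializing the data” look like in practice? The sketch below is a purely illustrative Python record, with invented field names and example values, of the kind of anonymized incident summary OEMs and technology companies could exchange without exposing any proprietary perception code: the conditions, the failure mode, the root cause and, most importantly, the countermeasure.

from dataclasses import dataclass, field
from typing import List

# Purely illustrative: a minimal, invented format for shared accident data.
# These field names and values are assumptions, not any real standard or
# any company's actual schema.

@dataclass
class IncidentReport:
    incident_id: str         # anonymized identifier
    road_conditions: str     # e.g. "night, dry road, mid-block crossing"
    object_involved: str     # e.g. "pedestrian walking a bicycle"
    perception_outcome: str  # "not detected", "misclassified", or "detected late"
    vehicle_response: str    # e.g. "no emergency braking"
    root_cause: str          # conclusion reached after analysis
    countermeasure: str      # the fix intended to prevent recurrence
    tags: List[str] = field(default_factory=list)  # for searching across reports

# An invented example entry; another manufacturer could apply the
# countermeasure without ever seeing the reporting company's software.
example = IncidentReport(
    incident_id="incident-0001",
    road_conditions="night, dry road, mid-block crossing",
    object_involved="pedestrian walking a bicycle",
    perception_outcome="misclassified",
    vehicle_response="no emergency braking",
    root_cause="object class kept changing, so no crossing trajectory was assumed",
    countermeasure="treat objects with unstable classifications as crossing pedestrians",
    tags=["pedestrian", "night", "classification"],
)

The point of such a record would be the countermeasure field: the analysis travels with the data, so a competitor does not have to re-learn the same lesson on the road.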

We should not allow companies to repeat accidents whose causes have already been identified and fixed. A ride-sharing company, for example, should not repeat a crash that has already occurred, been analyzed, and been programmed against. The OEMs and the technology companies can learn from each other’s mistakes to ensure that autonomous cars are safer than human drivers. What happened in Tempe and California were tragedies; if we share our information, they will not become statistics. This is not just a humanitarian call to help us all live longer, safer lives; it is a call that will help the autonomous vehicle industry as a whole.

About Alexander Soley

Alexander Soley is a consultant with expertise in connected vehicles and cyber security. He has consulted for Dell Technologies on connected vehicle regulation and strategy. He has also worked at JNK Securities, Delta Risk, the European Parliament and the International Diabetes Federation.
