When Corporate Insight attended the 2018 Auto Insurance Report National Conference, host Brian Sullivan predicted that we are at least five years away from a fully autonomous car. Recent setbacks to Tesla’s and Uber’s autonomous vehicle programs underscore the hurdles that remain before a car can drive itself entirely without a safety operator.
Crashes involving both Uber- and Tesla-backed self-driving cars have thrust the debate over autonomous vehicle safety into the national spotlight. On March 18, an Uber autonomous test vehicle killed a pedestrian even though the car’s radar successfully detected an object in its path, according to a preliminary report released by the National Transportation Safety Board. Uber says emergency braking is not enabled when its self-driving cars are under computer control, in order to reduce the potential for erratic vehicle behavior. Compounding the problem, the car’s human safety operator was monitoring the self-driving system rather than the road immediately before the car struck the pedestrian. In response to the incident, Uber shut down its testing operations in Arizona, though it plans to resume testing of its self-driving technology in Pittsburgh this summer.
An Uber Autonomous Volvo XC90
Separately, police found that a Tesla Model S that crashed into a stationary firetruck in Salt Lake City, Utah, while in Autopilot Mode actually accelerated in the seconds before the crash, likely in response to the car ahead changing lanes. Autopilot, however, failed to detect the stopped firetruck ahead, and the driver did not have her hands on the wheel for the 80 seconds before the crash, even though Tesla requires all drivers using Autopilot to keep their hands on the wheel. As in the Arizona Uber crash, then, no human operator had direct control of the vehicle immediately before the collision.
Rendering of Tesla Autopilot Technology
While both the Salt Lake City and Arizona crashes involved a combination of software failure and negligence on the part of the vehicle operators, they, along with other autonomous vehicle crashes, have eroded Americans’ confidence in autonomous vehicle technology.
A recent American Automobile Association survey found that just under three-quarters (73%) of American drivers say they would be afraid to ride in a fully self-driving car, up significantly from 63% in late 2017. Further, a new study published in Risk Analysis, the journal of the Society for Risk Analysis, attempted to determine the socially acceptable risk of self-driving cars, finding that the vehicles would have to be four to five times safer than their human-driven counterparts for respondents to feel comfortable having them on the road. This finding is consistent with previous studies, which found that individuals raise their safety demands when their well-being is entrusted to an external factor.