A sad moment has come to pass in Tempe, Arizona, where a pedestrian was killed by a self-driving car. As with any loss of life, it causes suffering and grief, and we would not be human if we didn't lament it.
But this event shouldn't be used as a blunt instrument to forestall the development of autonomous vehicles altogether. Sad as it may be, it's not cause enough to stop the self-driving car.
Life is full of risks. Any reasonable accounting for risk must include careful attention to big risk factors. And there is simply no bigger risk factor related to automobiles than human error. Human error is the leading cause of car crashes by a margin so large that it isn't even close: Drivers and their mistakes and misjudgments cause 94% of the more than 2 million crashes in the United States each year.
That's not just a big factor; it's an overwhelming one. Nothing will have a bigger effect on making automotive travel safer than limiting the frequency and magnitude of human errors behind the wheel.
Ultimately, that means replacing us (human drivers) with something safer. Truly autonomous vehicles will eventually be a revolutionary lifesaving tool. The technology that will enable a near-zero frequency of accidents will be here in time -- probably much sooner than most people imagine. But we need to acknowledge three things along the way:
First, even if self-driving technologies could be completely perfected, accidents would still happen. They would be infrequent, but they would still happen, because sometimes the unexpected is also completely unavoidable. We must brace ourselves for the fact that there will never be a completely crash-free world. But one can also reasonably expect that crashes involving self-piloted vehicles will, on average, be less harmful than crashes involving human drivers, since self-piloted vehicles ought to have faster response times -- and every moment of braking or avoidance time not lost to human reaction ought to create an additional margin of safety. (In fact, the evidence suggests that the incident in Tempe may have been unavoidable for human drivers and computers alike.) What will matter most is whether we have fewer deadly accidents with an autopilot at the wheel than we would have had with humans doing the driving.
Second, the road to truly self-driving vehicles is paved with the development and adoption of "guardian-angel technologies": tools that contribute to accident avoidance and harm reduction, whether the vehicle is under the command of a human driver or an autopilot. Guardian-angel technologies range from automatic braking systems to weather-adaptive handling to lane-drift controls and much more.
Ideally, these systems will contribute independently to greater safety, but will also show themselves to be of even greater use when coordinated with one another. Truly robust systems for passenger safety will have independent, redundant safety systems in place, so that the failure of one (or even of the main autopilot system) won't turn catastrophic.
Unfortunately, we will discover some of these safety systems' blind spots only when the systems fail. The path to self-driving vehicles as a mass-market reality will pass through a transitional phase during which more and more of the guardian-angel technologies become standard equipment for keeping human drivers out of trouble -- until, ultimately, the human driver becomes superfluous. Human drivers shouldn't expect to be replaced all at once; it will be a slow-motion phase-out.
Third, we shouldn't fear a new risk just because it's unfamiliar while ignoring a much bigger risk just because we have become numb to it. About 35,000 Americans die in vehicle accidents every year -- roughly 96 people a day. In other words, our experience with automobiles right now is very dangerous. Crashes are a leading cause of death for more than one age bracket. The frequency seems to numb us to the danger, and that familiarity breeds indifference to a real solution. Self-piloted cars, by contrast, are unfamiliar, and so their danger is easy to perceive out of proportion to its true size. We would have to be mad to show zero tolerance for failure with autonomous vehicles while showing endless tolerance for human error -- just because human error is more familiar. A new thing should not be judged against a standard of perfection when the thing it seeks to replace is held to such a low standard that it is a leading cause of death.
The sane approach to these vehicles is to recognize that we live in an imperfect world where accidents will inevitably occur. What matters is that we promote the development, adoption, and improvement of any tools we can find that reduce the frequency and impact of those accidents. Tools that make human drivers safer ought to be welcomed, and so should tools that will ultimately replace us behind the wheel altogether; we should promote them heavily even if they are only marginally safer than human drivers.
The evidence suggests they will be much more than just marginal improvements, however. The logical destination remains one where all of the guardian-angel technologies, working in concert while also acting independently to provide a robust system of protection, ultimately take the weak link out of the chain of command -- even if that weak link is us.