Tuesday, May 8, 2012

Self-driving cars: Safety and Liability

Why would we want self-driving cars? Self-driving cars could reduce auto fatalities, let people be more productive with their driving time, lower stress, liberate people who aren’t able to drive for themselves, and do away with the concept of the ‘designated driver’. Later on, when self-driving cars dominate, they’ll be able to talk to each other to reduce traffic congestion by coordinating maneuvers and by letting each other know which roads are too congested. Lots of other, more speculative goodies may also be there for the taking.

Trust me, you’re going to love it.

So where’s the problem? In two words: legal liability. By removing the driver from the equation, the auto manufacturer is making the car’s on-board software (which is part of the product they’ve sold) responsible for any accidents that might occur. Even if the overall accident rate drops significantly, the manufacturers can’t succeed if they’re paying out several million dollars each time the system fails.

Further, you can expect that in many of the collisions between human-driven and automated cars, the human will be at fault but courts and juries will pin the blame (and the damages) on the machine instead. So even a perfect system doesn’t eliminate the liability issue.

There are other problems as well. Each state in the US has a thicket of laws spelling out in detail how vehicles must be driven, and most of these laws assume that there is a human at the wheel. Manufacturers aren’t going to field self-driving cars in any state where it’s unclear whether the “driver” spoken of in the laws means the person sitting behind the wheel or the software that actually controls the car.

You got a solution? I have some ideas. I’m nowhere close to being an expert in this domain, so be very skeptical of them.

Let’s start by certifying the self-driving systems. I’m imagining four levels of certification:

  1. Experimental Autonomous Vehicle
  2. Highway-only Autonomous Vehicle
  3. General-purpose Autonomous Vehicle
  4. Driverless Vehicle

Level 1 is meant for manufacturers who are still refining their technology, and for new, incompletely tested software builds for existing hardware configurations. These cars could only be used in self-driving mode if there were an alert employee of the manufacturer sitting behind the wheel, ready to take control of the car in an emergency.

Level 2 is meant for public use, but autonomous driving mode would only be available on the highway. My thinking here is that normal highway driving is a much easier problem to crack than driving through a neighborhood. The driver would not have to be paying attention to the vehicle, but he or she would still need to be able to take over and operate the vehicle at a moment’s notice.

This level is meant as an interim step on the way to fully autonomous vehicles. There would still need to be an awake, licensed, sober driver sitting behind the wheel.

Level 3 is much like Level 2. The only real difference is that autonomous mode could be engaged on surface streets as well as highways.

Level 4 (Driverless Vehicle) should be self-explanatory: the person using the vehicle does not require a license, and doesn’t have to be able to operate the vehicle. Puke-your-guts-out drunk? No problem. Nine years old? No problem. Blind in one eye, can’t see out the other? The roads are yours to command.

Level 4 also opens up the possibility of microcar delivery services. So when you order a pizza, a small electric vehicle – perhaps large enough to hold ten extra-large pizza boxes – delivers it to your driveway. So long as there are legacy drivers (a.k.a. “humans”) on the road, these little cars are going to have to drive very defensively.

But all these levels require some sort of certification, to ensure that the vehicles really are safe when used in those ways. I’m guessing this would be the responsibility of NHTSA, but what do I know? Now, testing in the real world isn’t easy. Human drivers are actually pretty safe (in 2009, passenger fatalities ran about 1.14 per 100 million vehicle miles), so you have to put a lot of miles on an autonomous car before you can show that it’s safer.
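To get a feel for why real-world proof is so slow, here’s a rough back-of-envelope sketch (my own illustration, using the 2009 figure above and a simple Poisson model, not anyone’s official methodology). Even a fleet that never has a fatal crash needs to log a few hundred million miles before the statistics say much:

```python
import math

HUMAN_RATE = 1.14   # fatalities per 100 million vehicle miles (the 2009 figure quoted above)
MILES_UNIT = 100e6  # 100 million miles

def miles_to_demonstrate_safer(human_rate=HUMAN_RATE, confidence=0.95):
    """Poisson back-of-envelope: how many fatality-free test miles are needed
    before the data meaningfully contradict the hypothesis that the autonomous
    fleet is merely as safe as the average human driver.  If the true rate were
    `human_rate`, the chance of zero fatalities in M miles is
    exp(-human_rate * M / MILES_UNIT); we solve for the M that pushes that
    chance below (1 - confidence)."""
    return MILES_UNIT * math.log(1.0 / (1.0 - confidence)) / human_rate

if __name__ == "__main__":
    miles = miles_to_demonstrate_safer()
    print(f"~{miles / 1e6:.0f} million fatality-free miles needed")  # roughly 263 million
```

And that is the easy case: merely showing the car is no worse than a human, with zero fatal crashes along the way. Demonstrating that it is several times safer takes far more mileage.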

So I expect that much of the certification testing will be done in software, in virtual simulations that demonstrate the vehicle’s responses to different scenarios. This form of testing might be required for each new version of the onboard software. So the certifying agency is going to need a data center full of computers, and (if the testing regimen is going to be efficient) there will need to be standards for auto sensors so that they can be accurately simulated.
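Just to make the idea concrete, here’s a toy sketch of what a scenario-replay harness could look like. Every name in it is invented for illustration; it isn’t modeled on any real simulator, standard, or agency test:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# All names below are made up for illustration only.

@dataclass
class Scenario:
    name: str
    initial_state: Dict                # positions, speeds, weather, etc.
    events: List[Dict]                 # scripted surprises: pedestrian steps out, tire blows...
    passed: Callable[[Dict], bool]     # e.g. "no collision and the vehicle stayed in its lane"

def simulate(controller, state: Dict, events: List[Dict]) -> Dict:
    """Stand-in for a real physics-plus-standardized-sensor simulator: feed each
    scripted event to the submitted software build and record its chosen action."""
    for event in events:
        action = controller(state, event)   # the control software under test
        state = {**state, "last_event": event, "last_action": action}
    return state

def run_suite(controller, scenarios: List[Scenario]) -> List[str]:
    """Run every scenario and return the names of the ones this build failed."""
    return [sc.name for sc in scenarios
            if not sc.passed(simulate(controller, sc.initial_state, sc.events))]
```

A new software build would presumably have to clear the whole suite (plus some real-world miles) before its certification carried over.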

Post-certification testing would be very helpful. If all the cars on the road were collecting and submitting data about the driving situations they encounter, the choices they make, and how those choices succeed or fail, new and dicey situations could quickly be added to the testing suite. The next version of the onboard software would then be better able to handle those situations.
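Here’s what that feedback loop might look like, continuing the sketch above; the field names (surprise_score, state_before, and so on) are placeholders, not any real fleet-data format:

```python
from typing import Dict, List

def flag_dicey_records(telemetry: List[Dict], threshold: float = 0.5) -> List[Dict]:
    """Pick out logged situations worth replaying in certification: anything the
    software found surprising, or anything that ended in a near miss."""
    return [rec for rec in telemetry
            if rec.get("surprise_score", 0.0) >= threshold
            or rec.get("outcome") == "near_miss"]

def extend_test_suite(suite: List[Dict], flagged: List[Dict]) -> List[Dict]:
    """Append each flagged situation to the scenario suite as a replayable entry,
    skipping any situation we've already captured."""
    known = {scenario["name"] for scenario in suite}
    for rec in flagged:
        name = f"fleet-{rec['vehicle_id']}-{rec['timestamp']}"
        if name not in known:
            suite.append({"name": name,
                          "initial_state": rec["state_before"],
                          "events": rec["events"]})
            known.add(name)
    return suite
```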

But the specifics of the testing regimen are less important than the results: vehicles that pass the tests and are certified for the appropriate level should be expected to be “safe.” For Level 1 (Experimental Autonomous Vehicle), we might be lenient, targeting maybe 3 fatalities per 100M miles, meaning these cars would actually be less safe (more than twice the 2009 rate quoted above) than the average human driver. For Levels 2 and 3, we’d expect a much higher safety standard, say 0.5 fatalities per 100M miles. Level 4 would need to be dramatically safer, to the point where people would eventually consider it irresponsible to question the vehicle’s judgment. A target of 0.05 fatalities per 100M miles would reduce auto deaths by over 95% relative to that baseline. I think that’s the sort of safety standard that would be needed before you could persuade the public that we should trust these vehicles completely.
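Those percentages are easy to check against the 2009 baseline quoted earlier (the targets themselves are just my proposals):

```python
# Compare the proposed per-level targets against the 2009 human-driver rate.
HUMAN_RATE = 1.14   # fatalities per 100 million vehicle miles

targets = {"Level 1": 3.0, "Levels 2-3": 0.5, "Level 4": 0.05}

for level, rate in targets.items():
    ratio = rate / HUMAN_RATE
    if ratio > 1:
        print(f"{level}: {rate} per 100M miles, about {ratio:.1f}x the human rate")
    else:
        print(f"{level}: {rate} per 100M miles, a {100 * (1 - ratio):.0f}% reduction")
# Level 4 works out to roughly a 96% reduction, i.e. "over 95%" as claimed.
```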

Now, these safety standards wouldn’t be static. As we gain familiarity with the problem space, it might be possible to dramatically tighten the standards, saving even more lives. Or we might discover that, however we try, we can’t make an automated vehicle that is significantly safer than your average human driver.

Legislation: Once the certification is in place, it’s much easier to write appropriate legislation. Here’s what I propose to get the legal liability issues out of the way: if the vehicle has been certified, and it is being used as described for its level, both the driver and the auto manufacturer are absolved of any legal liability. Each driver’s own insurance company would pay that driver’s auto damages and medical costs.

For Levels 1, 2, and 3, from a legal standpoint, the person operating the vehicle should be considered the driver. For Level 4, the vehicle itself should be considered the driver, and laws that presume the driver to be a human being will not apply.

The bigger point here is that this technology presents huge opportunities to make our lives better, and it would be a shame to let legal liability block it for decades. Manufacturers will need some immunity from lawsuits (or at least caps on payouts) before it’s safe for them to enter the market, but that immunity should come at a price: they need to show that their cars are much safer than human drivers.