Thursday night, Tesla unveiled the dual-motor version of its Model S sedan, and it also revealed several technologies that move partway down the road to self-driving cars. One system uses cameras to read street signs and set the car's speed to the posted speed limit. Another system tracks the lane lines on the road and can steer the car between them without human intervention. A third system brings the car to a stop if it senses an obstacle ahead. Other manufacturers implemented these capabilities before Tesla, but in every case, they're intended to be used while a driver is actually driving the car. Even though Elon Musk and other Tesla representatives took their hands off the wheel and their feet off the pedals to show how the systems work, they can't tell, advise, or even suggest that Tesla owners do the same, for now.
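To make the division of labor concrete, here's a minimal sketch of how those three features might fit together in a single control loop. Everything here, from the function name to the thresholds, is hypothetical; it illustrates the idea, not Tesla's actual implementation.

```python
# Illustrative sketch only: a drastically simplified control step combining
# the three assist features described above. Names and thresholds are
# hypothetical, not Tesla's implementation.

def assist_step(detected_speed_limit_kph, lane_offset_m, obstacle_distance_m,
                current_speed_kph):
    """Return (target_speed_kph, steering_correction) for one control tick."""
    # 1. Speed-limit assist: never command a speed above the posted limit.
    target_speed = min(current_speed_kph, detected_speed_limit_kph)

    # 2. Lane keeping: steer back toward the lane center in proportion to
    #    how far the car has drifted (a simple proportional controller).
    steering_correction = -0.5 * lane_offset_m

    # 3. Emergency stop: if an obstacle is closer than a safety margin,
    #    override everything else and command a full stop.
    if obstacle_distance_m < 10.0:
        target_speed = 0.0

    return target_speed, steering_correction
```

Notice that in a sketch like this the human is still the fallback for everything the loop doesn't handle, which is exactly why the features are marketed as assistance rather than autonomy.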
Elon Musk was quoted as saying that he believes a commercially priced self-driving car is possible within five years, and I believe him. However, it may be 20 years before we start seeing fully self-driving cars in commercial use. The reason isn't technology or cost. All the pieces are there, and the prices of even the most expensive components, like LIDAR systems, are dropping dramatically. The real problem is liability: If a self-driving car is involved in an accident, who's responsible? With today's cars, one or more of the drivers involved is usually found responsible for the accident and liable for damages. There are cases where an automobile or parts manufacturer is found liable, as in some of the accidents caused by faulty Firestone tires on Ford Explorers and the stuck accelerators on some Toyotas. However, manufacturers are only rarely found liable in car accidents.
Self-driving cars change the liability argument dramatically. It's extremely likely that self-driving cars will be safer than cars driven by humans: They don't get distracted, they don't drive drunk, they don't get into arguments with passengers, and they don't experience road rage. Their sensors and computers should respond more quickly to changing conditions than humans can. They won't drive over the posted speed limit, and they won't follow other cars too closely. A working self-driving system should be much safer than a human driver. However, technology can fail. An excellent example from 2005 is a test performed by Stern TV on three S-Class Mercedes with an early version of self-braking: the three cars plowed into each other because the system wouldn't work inside a steel-framed garage. Presumably, that problem has long since been fixed, but electronic systems in cars do fail, especially when they're new.
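To put the reaction-time advantage in perspective, here's a quick back-of-the-envelope calculation. The reaction times used (about 1.5 seconds for an alert human, 0.1 seconds for a computer) are commonly cited rough figures, not measurements from any particular system.

```python
# A back-of-the-envelope look at the reaction-time advantage claimed above.
speed_kph = 100.0
speed_ms = speed_kph / 3.6  # ~27.8 m/s

# Assumed reaction times: rough, commonly cited figures, not measured values.
for label, reaction_s in [("human", 1.5), ("computer", 0.1)]:
    distance_m = speed_ms * reaction_s  # distance covered before braking begins
    print(f"{label}: {distance_m:.1f} m before the brakes even engage")

# Prints roughly: human: 41.7 m, computer: 2.8 m
```

At highway speed, that difference alone is the better part of ten car lengths, which is the heart of the safety argument.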
If a self-driving car is in an accident, who's to blame? For simplicity, let's assume that any other cars involved weren't at fault (although they might at least share the blame in a real-world accident). If the person in the driver's seat wasn't actually driving, something in the car might have failed. But what if something did fail, the car alerted the passengers, and they ignored the warning? That might save the car manufacturer from liability, but the manufacturer would almost certainly be sued anyway, because it has the "deepest pockets." Or, what if the person in the driver's seat turned off the self-driving system before the accident? Then the driver would most likely be at fault.
We won't see fully self-driving cars until their technology is sufficiently mature that the reliability of critical systems reaches the "five nines" level (99.999%), or roughly one failure in every 100,000 hours of operation. In addition, car manufacturers are likely to push for legislation that limits their liability in accidents involving their self-driving cars; the trade-off for the greater overall safety of self-driving cars is a limit on liability when an accident does happen. Finally, we're likely to see legislation that requires one of the passengers in a self-driving car to be a licensed driver, able to take control of the car at any time. These conditions will probably dampen demand for self-driving cars, both because of the cost of highly reliable systems and because companies like Uber and Lyft may still have to employ human drivers.
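For a sense of what "five nines" means in practice, here's the rough arithmetic. The fleet size and daily driving time below are assumptions picked purely for illustration.

```python
# Rough arithmetic behind the "five nines" reliability figure.
failure_prob_per_hour = 1 - 0.99999   # five nines -> about 1e-5 per hour

mtbf_hours = 1 / failure_prob_per_hour
print(f"Mean time between failures: {mtbf_hours:,.0f} hours")  # ~100,000

# Assumed fleet numbers, for illustration only:
fleet_size = 1_000_000            # cars on the road
hours_per_car_per_day = 1.0       # average daily driving time
failures_per_day = fleet_size * hours_per_car_per_day * failure_prob_per_hour
print(f"Expected failures per day, fleet-wide: {failures_per_day:.0f}")  # ~10
```

In other words, even a system reliable enough to meet that bar would still fail somewhere in a large fleet several times a day, which is exactly why manufacturers will want liability limits in place first.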
For these reasons, fully self-driving cars may be technically feasible, and even practical from a cost standpoint, well before they're commercially available.