Here's the thing about autopilot all the way up through self-driving cars:
A perfect system would never have a fatality. EVERY problem that ends in fatality can ALWAYS be traced to some flaw or inadequacy. Every. Single. One.
That is why self-driving cars will eventually be far safer than any human driver (because every fatality can be stopped), and why the natural human tendency will be to crucify any company that attempts to enter the market[0,1], which will mean millions more unnecessary deaths at the hands of human drivers because we'll delay deploying self-driving technology until it's perfect.
[0]Even if they, accurately, point out the driver is still responsible and that overall safety is improved. Accuracy doesn't matter, emotional resonance does. That's victim blaming!
[1]At the same time, this intense focus--while it's "unfair" given the masses of car deaths every day--is also what will drive the massive improvements of safety. So the inaccurate outcry can actually be a good thing. Provided that original player doesn't give up or go out of business first. This dynamic can help explain why airlines in the developed world are so ridiculously safe (...but also perhaps why airliner technology has been stagnant for half a century, with only safety and incremental efficiency/cost improvements).
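The "millions more unnecessary deaths" claim can be made concrete with a rough back-of-envelope calculation. The figures below are illustrative assumptions, not data from the thread: the annual road-death figure is the WHO's widely cited estimate, while the preventable fraction and delay length are hypothetical inputs you can adjust.

```python
# Back-of-envelope: excess deaths from delaying self-driving deployment.
# Assumptions (illustrative):
#   - ~1.35 million road deaths per year worldwide (WHO estimate)
#   - self-driving eventually prevents ~90% of them (most crashes
#     involve human error; this fraction is a hypothetical input)
#   - backlash delays widespread deployment by 5 years

annual_road_deaths = 1_350_000
preventable_fraction = 0.90
delay_years = 5

excess_deaths = annual_road_deaths * preventable_fraction * delay_years
print(f"{excess_deaths:,.0f} excess deaths over a {delay_years}-year delay")
# → 6,075,000 excess deaths over a 5-year delay
```

Even with much more conservative inputs (say, a 50% preventable fraction and a 2-year delay), the total still lands above a million, which is the scale the comment is gesturing at.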
> A perfect system would never have a fatality. EVERY problem that ends in fatality can ALWAYS be traced to some flaw or inadequacy. Every. Single. One.
I've been in a wreck on the highway from a random tire blow out. Happens all the time. While an AI will ultimately be able to handle that situation way better than a human, I don't see it eliminating all risk without just always driving slower.
"25. (Bowden's Law) Following a testing failure, it's always possible to refine the analysis to show that you really had negative margins all along."
Your tire blowout wasn't "random," from this perspective; it occurred due to a failure at some point: inadequately tight manufacturing tolerances, a regulator or systems integrator that accepted such tolerances, a random object in the road (which AI could've picked up on), a temperature spike (which AI could've picked up on), a change in tire dynamics immediately preceding the incident which you didn't notice (which AI could've picked up on), incomplete inspections, etc.
Even if that specific failure is NOT directly something that AI would solve, the elimination of basically every human driving error would bring focus to any other problems. Right now, random tire blowouts are not the predominant cause of death and so people don't focus much on it, but if all other causes were eliminated, then all of a sudden everyone would be focusing on better engineered tires.
This is the same dynamic as in airline safety. Every airliner failure in developed countries is plastered over the media for months simply because it's so rare, therefore any remaining deficiency is brought into tight focus.
I can see the headline: "Tesla owner DIES in random tire blowout! Shoddy quality control to blame; sharks circle as Tesla tanks!"
(High media coverage is a double-edged sword for Tesla...)
> I don't see it eliminating all risk without just always driving slower.
...it's worth noting that airliners use tires at hundreds of miles per hour, yet fatalities basically never happen. Things can be engineered to be arbitrarily safe.
If you are talking about improving the crash safety of cars in general by making vehicular structural changes and inventing new tire chemistry and manufacturing processes, that is a separate issue from self-driving, to the extent it would improve safety for regular drivers too.
>...it's worth noting that airliners use tires at hundreds of miles per hour, yet fatalities basically never happen. Things can be engineered to be arbitrarily safe.
Aircraft put all their eggs in one basket, which is why such extreme tire engineering pays off there. If you want to talk about roadways being maintained and cleaned to the standard of runways, you're crediting self-driving for something that isn't self-driving. Aircraft tires may also face different structural scaling laws (roughly 500 twenty-second touchdowns before replacement/refurbishment: short periods of high stress).
And how far are you going to allow everything to be reengineered? Would it be OK to call slightly modified 747s that can taxi themselves self-driving cars? Trains?
> ...it's worth noting that airliners use tires at hundreds of miles per hour, yet fatalities basically never happen. Things can be engineered to be arbitrarily safe.
The passengers on Air France 4590 would beg to differ, if they could. The Concorde ran over debris on the runway, and the resulting tire damage punctured a fuel tank, leading to a fire and subsequent loss of life.
There are plenty of aircraft incidents caused by tire failures. Expecting technology to prevent every eventuality is folly.