I'm sure it will eventually be safer than human drivers once the technology is there, but for a time there are going to be deaths that could've been prevented had the driver not overestimated the autopilot's abilities.
I expect that with time these self-driving algorithms will become safer than an increasing proportion of human drivers. I'm just very wary of assuming they are better than most or all human drivers prematurely.
We as a society have a habit of putting too much faith in technology. Most of us aren't knowledgeable and unbiased enough to assess its risks objectively, particularly in areas where the chance of something going wrong is very low but the consequences are severe when it does.
It needs to be well safer than that. Humans tolerate mistakes made by other humans far more readily than mistakes made by human-built robots. Every death and injury from self-driving cars is going to be on the news, and public perception is everything to this technology.
Fair? Probably not. But safety must be top priority here. We can't look like we're throwing this technology out there and hoping for the best.
And I don't think you can say Tesla's Autopilot 2.0 has been shown to be safer than humans.
Autopilot will get better and it will be used more, and more people will die. At least until full autonomy. Think about all the weird edge cases that are encountered in distributed systems, and now attach a human life and a 5,000 lb car to each one. The question is whether fewer people will die on a per-mile basis.
Sure, Autopilot is safe 'when used correctly,' but I wouldn't trust someone to maintain 100% attention on driving with it enabled. Maybe for a few cumulative hours. Not for hundreds or maybe thousands of hours. If I'm paying attention and I prevent Autopilot from getting me into an accident, why have it on in the first place? It's supposed to protect me from inattention! (e.g. automatic braking)
As it gets better at protecting people from inattention, people will be less attentive, and more will encounter the edge cases that the machine learning models will inevitably have.
Don't get me wrong, I love SDCs and the massive impact they will have, I just believe that partial autonomy is unsafe because of the human factor.
The fact about current car safety is that it's already quite good. In modern cars and "autopilot-feasible conditions" you are talking well below 1 fatality per billion vehicle miles travelled with regular human drivers.
This means that if a model has sold 1 million cars, they each need to drive 100,000 miles with autopilot enabled before the insurance company has enough statistics to say "this is safer than a human".
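A rough back-of-the-envelope version of that calculation, in Python. Every figure is taken from the two sentences above (roughly 1 fatality per billion miles, 1 million cars, 100,000 miles each) rather than from any official dataset, so treat it as an illustration of the statistics, not as real fleet data:

    import math

    # Assumed figures, taken from the comment above rather than from any
    # official dataset: roughly 1 fatality per billion vehicle miles for
    # human drivers in autopilot-feasible conditions, and a fleet of
    # 1 million cars each driving 100,000 miles on autopilot.
    HUMAN_RATE = 1 / 1e9        # fatalities per mile, human baseline
    FLEET_CARS = 1_000_000
    MILES_PER_CAR = 100_000

    autopilot_miles = FLEET_CARS * MILES_PER_CAR      # 1e11 miles in total
    expected_human = HUMAN_RATE * autopilot_miles     # deaths expected at the human rate

    # Suppose autopilot were genuinely twice as safe. With Poisson counts the
    # statistical noise is roughly sqrt(mean), so compare the gap between the
    # two expected counts to that noise to see how clearly the data would
    # separate autopilot from the human baseline.
    expected_autopilot = expected_human / 2
    gap = expected_human - expected_autopilot
    noise = math.sqrt(expected_autopilot)

    print(f"autopilot miles driven:            {autopilot_miles:.1e}")
    print(f"expected deaths at the human rate: {expected_human:.0f}")
    print(f"expected deaths if twice as safe:  {expected_autopilot:.0f}")
    print(f"gap vs. statistical noise:         {gap:.0f} vs. ~{noise:.1f}")

At the quoted human rate you'd only expect about 100 deaths across that entire 100-billion-mile sample, so even a twofold improvement shows up as a difference of a few dozen events; with far fewer miles, a handful of crashes either way swings the estimate completely.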
There will be deaths and accidents from computer-controlled cars; that is pretty much inevitable. The question is at what point on the miles-driven-per-accident (or per-fatality) curve it makes sense to accept that computers and algorithms are fallible, just like humans, but that in aggregate the system works better than humans doing all the manual control inputs. It's the first of its kind and we only have one data point w.r.t. the deadliness of autopilot used without oversight, but it seems to indicate that autopilot is safer overall in aggregate.
It's a strange comparison: autopilot could add 9's of safety to situations that are already 99.99% safe, which at the scale of "almost all drivers" means a lot of avoided deaths, while at the same time adding risk to situations that would be much safer with a human in the loop.
For example, it might make deaths on highways go down by 1 in 10k, but now you have to accept the risk of the car turning you into oncoming traffic at an intersection because of bad weather.
That's a tough change to accept when the failure hits you personally, and not one I think we should blindly accept.
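As a toy expected-value sketch of that trade-off, in Python. Every number below is invented purely to illustrate the shape of the argument, not drawn from any real crash statistics:

    # Toy expected-value sketch of the trade-off above. Every number here is
    # invented purely for illustration; none of it comes from real crash data.
    routine_trips = 10_000_000            # ordinary highway driving
    routine_risk_human = 1 / 100_000      # fatal-crash probability per trip, human
    routine_risk_autopilot = routine_risk_human * 0.9   # modestly safer on routine trips

    edge_trips = 50_000                   # bad weather, odd intersections, etc.
    edge_risk_human = 1 / 1_000_000       # humans almost never fail here
    edge_risk_autopilot = 1 / 10_000      # the machine occasionally fails badly

    human_total = routine_trips * routine_risk_human + edge_trips * edge_risk_human
    autopilot_total = (routine_trips * routine_risk_autopilot
                       + edge_trips * edge_risk_autopilot)

    print(f"expected fatalities, all-human:    {human_total:.2f}")     # ~100.05
    print(f"expected fatalities, autopilot on: {autopilot_total:.2f}") # ~95.00

The aggregate total goes down even though an entirely new category of death appears, which is exactly why the averages argument and the personal "it steered me into traffic" reaction can both feel right at the same time.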
Here's the thing about autopilot all the way up through self-driving cars:
A perfect system would never have a fatality. EVERY problem that ends in fatality can ALWAYS be traced to some flaw or inadequacy. Every. Single. One.
That is why self-driving cars will eventually be far safer than any human driver (because every fatality can be prevented), and why the natural human tendency will be to crucify any company that attempts to enter the market[0,1], which will mean millions more unnecessary deaths at the hands of human drivers because we'll delay deploying self-driving technology until it's perfect.
[0] Even if they accurately point out that the driver is still responsible and that overall safety is improved. Accuracy doesn't matter; emotional resonance does. "That's victim blaming!"
[1] At the same time, this intense focus, while "unfair" given the masses of car deaths every day, is also what will drive the massive improvements in safety. So the inaccurate outcry can actually be a good thing, provided that original player doesn't give up or go out of business first. This dynamic can help explain why airlines in the developed world are so ridiculously safe (...but also perhaps why airliner technology has been stagnant for half a century, with only safety and incremental efficiency/cost improvements).
This seems to be a reasonable take to me. Systems like Autopilot make driving safer. I'm worried that the conventional wisdom on HN is (a) that systems like Autopilot must be perfect, a standard that will never be attained, and (b) extremely loath to recognize the times Autopilot has saved lives.
It could be safer than human drivers and still cause quite a few deaths. We can't evaluate such statistics from one accident. We don't even know if this particular accident would have been avoided by a typical human driver (there was apparently one behind the wheel of the vehicle).
While it's true people are going to die on autopilot, what you need to compare that with is the number of collisions autopilot avoided that drivers would otherwise have caused. Statistically, if it isn't already safer than the average driver, it's very close.
There are 1.5 million heart attacks and strokes in the US every year. Self driving cars might end up being safer even if we were all perfect drivers.
In my view, the question should be whether it's more or less safe than humans driving. If use of automated driving results in fewer deaths than human driving, considered at the broadest possible scale, then that strikes me as an excellent outcome. Instead, each and every death under autopilot results in all the hens clucking as though it's proof the technology has failed. That's ridiculous.
I wouldn't worry so much. I'm sure self driving cars are going to save a lot more lives than they are going to end. Humans are terrible drivers, and the software will only get better.
Will we only accept something like autopilot if it has a perfect driving record? Is it not enough to be better than humans on average? Thousands of people die in automotive accidents, but we don't think twice when a human is behind the wheel.
"What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1]. It seems probable Level 2 systems will be better still."
As far as I know it is indeed correct that autopilot safety is statistically higher than manual driving safety (albeit with a small sample size).
However, something has always bothered me about that comparison ...
Is it fair to compare a death in a manually driven car caused by a genuine accident (ice, a wildlife collision) with an autopilot death in a trivial driving scenario that any human would have no trouble with?
I don't know the answer - I'm torn.
Somehow those seem like apples and oranges, though ... as if dying in a mundane (but computer-confusing) situation is somehow inexcusable in a way that an "actual accident" is not.
> But must the standard for safety be higher than existing human drivers?
This is my line of thought as well. Same problem I have with the anti-nuclear environmental crowd. Are there substantial drawbacks? Absolutely.
That doesn't matter. The important question is not "Does this new technology solve all previous problems?" but rather "Does this technology solve a net positive number of problems while offering more opportunities for improvement?"
With decades of empirical data showing that average humans are pretty bad at reliably piloting powerful, heavy vehicles, I can't see a good reason to reject deploying autopilot technology aggressively. And that sadly means a few deaths. But those should be measured against the number who would have died had nothing been done, not against zero.
You don't think the novelty and risk associated with a fully automated system that's still in the experimental stages warrants any immediate caution?
Commercial planes have had autopilot for years, but they still require at least the pilot or co-pilot to sit behind the yoke.
When driverless cars have logged a few million or billion hours without a statistically significant incident rate, I think we'll see the laws adapt so that a human fail-safe is no longer required.
In most complex systems the reality is people will encounter harm. The commercial airline industry is a good example of progress through crashing.
In 2017 there were zero reported accidental deaths on commercial flights [1]. Compare that with 1972, one of the most dangerous years to fly, when more than 2,300 such deaths occurred.
How did we go from thousands of deaths to zero? Primarily through crashing. Then working to evaluate what went wrong and improve the design, processes, and programs around commercial aviation.
As one pilot explains [2]:
"One of the ways we’ve become so safe was to realize that our efforts were never going to be good enough."
So yes, I think technologies like autopilot will save lives. But it won't be able to do it magically, unfortunately some harm will occur because we—in all our human wisdom—just can never accurately predict all possibilities. It's the same problem most self-driving companies are facing today, and the reason promises of self-driving being here today have fallen short. The world is far, far more complex than we realize... especially when we put a computer out into it and tell it to "go forth."