
It's a weird trade-off: autopilot could add nines of safety to situations that are already 99.99% safe, which at the scale of "almost all drivers" means a lot of avoided deaths, while at the same time adding risk to situations that would be much safer with a human in the loop.

For example, it might make deaths on highways go down by 1 in 10k, but now you have to accept the risk of the car turning you into oncoming traffic at an intersection because of bad weather.

It's a tough change to accept when the failure hits you personally, and not one I think we should blindly agree to.




"What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1]. It seems probable Level 2 systems will be better still."

As far as I know it is indeed correct that autopilot safety is statistically higher than manual driving safety (albeit with a small sample size).

However, something has always bothered me about that comparison ...

Is it fair to compare a manually driven accidental death (on ice, say, or in a wildlife collision) with an autopilot death in a trivial driving scenario that any human would handle without trouble?

I don't know the answer - I'm torn.

Somehow those seem like apples and oranges, though ... as if dying in a mundane (but computer-confusing) situation is inexcusable in a way that an "actual accident" is not.


Why do people think the goal is “perfection” rather than “better than the average driver”? If Autopilot is 10x safer than human drivers, we would have to be stupid not to mandate it, let alone merely tolerate it, even if its crash rate is non-zero. Human drivers kill 40k people a year in the US alone; that’s almost twice as many people as all homicides combined.

This seems to be a reasonable take to me. Systems like Autopilot make driving safer. I'm worried that the conventional wisdom on HN is (a) that systems like Autopilot must be perfect, which will never be attained, and (b) extremely loath to recognize the times Autopilot has saved lives.

> But must the standard for safety be higher than existing human drivers?

This is my line of thought as well. Same problem I have with the anti-nuclear environmental crowd. Are there substantial drawbacks? Absolutely.

That doesn't matter. The important question is not "Does this new technology solve all previous problems?" but rather "Does this technology solve a net positive number of problems while offering more opportunities for improvement?"

With decades of empirical data showing that average humans are pretty bad at reliably piloting powerful, heavy vehicles by hand, I can't see a good reason to reject deploying autopilot technology aggressively. And that sadly means a few deaths. But those should be measured against the number who would have died were nothing done, not against zero.


Not really. It means that a human driving with autopilot enabled is far safer.

If anyone read that as the car driving autonomously being far safer, we would have fatal crashes every day.


Honestly it’s wild that so many people on this forum are making arguments about theoretical accidents that could happen when Autopilot has billions of miles driven and gets in 80% fewer accidents than the average US driver. The idea that we have to wait for Autopilot to be perfect means accepting hundreds of thousands of accidents (including thousands of preventable deaths) that could be avoided. I find that attitude terrifying.

I'm sure it will eventually be safer than human drivers once the technology is there, but for a time there are going to be deaths that could've been prevented had the driver not overestimated the autopilot's abilities.

Autopilot will get better and it will be used more, and more people will die. At least until full autonomy. Think about all the weird edge cases that are encountered in distributed systems, and now attach a human life and a 5,000lb car to them. The question is whether fewer people will die on a per-mile-adjusted basis.

Sure, Autopilot is safe 'when used correctly,' but I wouldn't trust someone to maintain 100% attention on driving with it enabled. Maybe for a few cumulative hours. Not for hundreds or maybe thousands of hours. If I'm paying attention and I prevent Autopilot from getting me into an accident, why have it on in the first place? It's supposed to protect me from inattention! (e.g. automatic braking)

As it gets better at protecting people from inattention, people will be less attentive, and more will encounter the edge cases that the machine learning models will inevitably have.

Don't get me wrong, I love SDCs and the massive impact they will have, I just believe that partial autonomy is unsafe because of the human factor.


When autopilot kills more innocent drivers than other drivers do, you can point me to a problem. Time will tell if it is better or worse, and the track record up until May/June was going pretty well.

I'd rather 10 innocent people die to freak autopilot incidents than 1,000 innocent people die because people in general are terrible at operating a 2,000-3,500lb death machine. Especially because everyone thinks of themselves as a good driver. It's fairly obvious not everyone is a good driver - and that there are more bad drivers than good.

Maybe I only see things that way because there have been four deaths in my family due to unaware drivers which could have easily been avoided by self-driving vehicles. All of them happened during the daytime, at speeds slower than 40mph, and with direct line of sight. Something sensors would have detected and braked to avoid.


"Sadly very good drivers die every day and not only because of someone else's mistake."

Indeed, and that may often be because of taking warranted risks. It bothers me that when comparing driving safety, people tend to suffer from absence blindness and discount other important things like the death-avoidance cases. Let's say you have an injured, bleeding person who needs urgent access to a medical facility. Here the driver can take some risks in order to save a life. Or any number of other causes that may warrant risk-taking. Now take away control from that driver and leave it to an autopilot that may compute the driving parameters to satisfy minimum pollution, safety (from the perspective of the manufacturer's legal liability), and whatnot. Heck, I foresee cases where the autopilot won't even approve any movement due to whatever considerations, even when there are passengers at risk of losing their lives if they don't reach somewhere soon enough. For now people can take risks, which may be both good and bad. Don't look only at the bad side.


And yet air travel is far safer than road travel and they embraced autopilot long ago.

The problem with complete automation is how accidents are perceived: being killed by a robot's mistake is perceived as far worse than being killed by human error. That scrutiny is probably a good thing, considering how low the bar for road safety is right now.

I think a middle ground like autopilot on the highway could significantly improve safety while still accommodating perception issues.


Yeah, I fully expect Autopilot to have different failure modes than human drivers, but what I’m interested in is the difference in fatality rates (deaths per hundred million miles, adjusted for different types of roads, i.e., highway vs. city streets). If Autopilot can save hundreds of lives annually from human-error mistakes like falling asleep at the wheel, at the cost of one life annually due to obscure failure modes like driving toward a train, I maintain that we should not only allow Autopilot, but probably even mandate it on new vehicles. Sacrificing hundreds or thousands of lives annually because we don’t like the specific failure modes seems absurd. Of course, if it doesn’t save lives, then we should block its deployment on those grounds (but the particular kind of failure mode shouldn’t affect the calculus).
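
To make that concrete, here's a rough sketch of the per-mile, per-road-type comparison I have in mind. Every rate and mileage below is made up purely for illustration; it's the shape of the calculation that matters, not the figures.

    # All rates and mileages are assumptions for illustration, not real data.
    # road type -> (human deaths per 100M miles, autopilot deaths per 100M miles,
    #               annual miles driven, in units of 100M miles)
    road_types = {
        "highway": (0.6, 0.3, 10_000),  # assume autopilot helps here
        "city":    (1.5, 1.5,  5_000),  # assume no autopilot benefit here
    }

    human_deaths = sum(h * m for h, a, m in road_types.values())
    autopilot_deaths = sum(a * m for h, a, m in road_types.values())
    print(f"expected annual deaths, human-only: {human_deaths:.0f}")
    print(f"expected annual deaths, autopilot:  {autopilot_deaths:.0f}")
    print(f"net lives saved per year:           {human_deaths - autopilot_deaths:.0f}")

If the net number is positive, the particular failure modes contributing to the autopilot column shouldn't change the conclusion.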

The fact about current car safety is that it's already really quite good. In modern cars and "autopilot-feasible conditions" you are talking well below 1 fatality per billion vehicle miles travelled with regular human drivers.

This means that if a model has sold 1 million cars, they each need to drive 100,000 miles with autopilot enabled before the insurance company has enough statistics to say "this is safer than a human".
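
A back-of-the-envelope version of that sample-size argument, with assumed rates (none of the numbers below are measured figures, and the 2x safety factor is purely hypothetical):

    import math

    # Assumed, illustrative numbers only.
    human_rate = 0.8e-9       # fatalities per mile for humans in "autopilot-feasible" conditions
    autopilot_rate = 0.4e-9   # hypothetical: autopilot is twice as safe
    fleet = 1_000_000         # cars sold
    miles_per_car = 100_000   # autopilot miles per car

    exposure = fleet * miles_per_car        # total autopilot miles: 1e11
    expected_human = human_rate * exposure  # deaths expected if autopilot merely matched humans
    expected_ap = autopilot_rate * exposure # deaths expected if autopilot is 2x safer

    # Fatalities are rare events, so treat the counts as Poisson: the standard
    # deviation of a count is roughly sqrt(mean). The safety gap only becomes
    # convincing once it is several standard deviations wide.
    sigma = math.sqrt(expected_human)
    print(f"total autopilot miles:             {exposure:.1e}")
    print(f"expected deaths at human rate:     {expected_human:.0f} +/- {sigma:.0f}")
    print(f"expected deaths at autopilot rate: {expected_ap:.0f}")
    print(f"gap in standard deviations:        {(expected_human - expected_ap) / sigma:.1f}")

With less exposure than that, the fatality counts are so small that their statistical noise swamps any real difference.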


In my view, the question should be whether it's more or less safe than humans driving. If use of automated driving results in fewer deaths than human driving, considered at the broadest possible scale, then that strikes me as an excellent outcome. Instead, each and every death under autopilot results in all the hens clucking as though it's proof the technology has failed. That's ridiculous.

Here's the thing about autopilot all the way up through self-driving cars:

A perfect system would never have a fatality. EVERY problem that ends in fatality can ALWAYS be traced to some flaw or inadequacy. Every. Single. One.

That is why self-driving cars will eventually be far safer than any human driver (because every fatal failure mode can be found and fixed), and why the natural human tendency will be to crucify any company that attempts to enter the market[0,1], which will mean millions more unnecessary deaths at the hands of human drivers because we'll delay deploying self-driving technology until it's perfect.

[0]Even if they accurately point out that the driver is still responsible and that overall safety is improved. Accuracy doesn't matter, emotional resonance does: "That's victim blaming!"

[1]At the same time, this intense focus, while "unfair" given the masses of car deaths every day, is also what will drive the massive improvements in safety. So the inaccurate outcry can actually be a good thing, provided the original player doesn't give up or go out of business first. This dynamic can help explain why airlines in the developed world are so ridiculously safe (...but also perhaps why airliner technology has been stagnant for half a century, with only safety and incremental efficiency/cost improvements).


While it's true people are going to die on autopilot, what you need to compare that against is the number of collisions the autopilot avoided that human drivers would otherwise have caused. Statistically, if it isn't already safer than the average driver, it's very close.

There are 1.5 million heart attacks and strokes in the US every year. Self-driving cars might end up being safer even if we were all perfect drivers.


For an individual person to be willing to adopt autopilot on safety grounds, autopilot has to be safer than that individual person's driving, not just safer than the average person's driving.

If your driving skill is above the mean (because you're never a road rager, speeder, etc.), then you're worse off using Autopilot, even if Autopilot is as safe as the average driver.

Since most people probably consider themselves above average drivers (and in fact most people may be above the mean if the bad drivers are outliers), this limits the number of people who will believe that Autopilot makes them safer.
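
That "most people above the mean" point is easy to see with a toy model: if a small minority of drivers accounts for most of the risk, the majority really can be safer than the average. The numbers below are made up for illustration only.

    import random
    random.seed(0)

    # Toy model: assume 10% of drivers are high-risk, the rest are typical.
    # Risk values are in arbitrary units and purely illustrative.
    drivers = []
    for _ in range(100_000):
        if random.random() < 0.10:
            drivers.append(random.uniform(5.0, 20.0))  # high-risk drivers
        else:
            drivers.append(random.uniform(0.5, 2.0))   # typical drivers

    mean_risk = sum(drivers) / len(drivers)
    safer_than_mean = sum(r < mean_risk for r in drivers) / len(drivers)
    print(f"mean crash risk:                {mean_risk:.2f}")
    print(f"share of drivers safer than it: {safer_than_mean:.0%}")

Under those assumptions roughly 90% of drivers come out safer than the mean, so "as safe as the average driver" genuinely isn't a selling point for most of them.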


So as a thought experiment: if Autopilot plus human intervention reduced the rate of accidents by 50% vs. just humans, after normalization, can we consider autopilot to be adding value?

The stats you give for human-driven deaths are for all conditions and all roads. Autopilot does not work in all conditions and all roads, so it’s not really a fair comparison.

Further to that, human deaths will be skewed towards drunk drivers and other “bad” drivers. Autopilot deaths will be “random”, so public perception will be worse even if the figures are exactly the same.

The other problem is complacency: the Apple software engineer had 2 seconds from his Tesla leaving the road to it hitting the gore point, and this was a bug he was aware of. If the car looks like it’s infallible, then people’s attention will naturally waver. How long does it take to react to a situation? Feel the weight of the steering again? Calculate the appropriate correction?

I think the system that replaces human drivers has to be 100 times better in terms of fatalities in order to be accepted by society.

