
> technology tends to fail in 'silly' ways

And a scenario we can easily imagine is that a buggy update goes out to the whole fleet overnight that starts killing people all over the place.

The common case of accidents being on par with manual human driving goes out the window until the software is rolled back, and for 12 hours, 24 hours, however long it takes, we get a number of deaths that far outpaces anything humans are capable of. The "worst case" would never apply to a manual/human population as a whole, at once.




> The human accident rate is about one per 500K miles, so if they were able to get in that range, then yes, they would have succeeded; drivers would be able to stop paying attention to the road without putting themselves and others in danger.

Unfortunately, I expect that automation will be held to a higher standard than human drivers, rather than the same standard. When an accident happens, people want to know who to blame, and "an unimpaired human driver" gets somewhat more latitude for a genuine accident, while a piece of software is always going to be perceived to be at fault. And conversely, people (somewhat validly) want to have more control: every driver thinks they're above average, thinks the software won't match their own accident rate, and thinks that if something happened, at least they were in control when it happened.

I don't necessarily even think those are incorrect perspectives; we should hold software to a high standard, and not accept "just" being as good as human drivers when it could be much better. But at the same time, when software does become more reliable than human drivers, we should start switching over to make people safer, even while we continue making it better.

(Personally, I wish we had enough widespread coordination to just build underground automated-vehicles-only roads.)


> The moment the first human casualty occurs due to some software/hardware failure, there will be serious ramifications for the self-driving world.

Just want to add to the discussion a reminder that this is said about literally every new piece of technology that has ever appeared: the first human casualty from riding a horse, the first human casualty from driving a car, etc.

It's interesting to note that we're expected to lose our minds over the first human death with New Technology Y, when Old Technology X is typically killing humans all the time.

Also, out of curiosity, regarding longevity: why would a driverless car have to be on the road for 15 years? Why not just update/retrofit/repair/replace it every year or so?

Spacecraft and military equipment are used in really extreme settings! Also, can't driverless cars update themselves, take themselves in for repairs, etc.?

Also I'm guessing it wouldn't make a lot of sense for regular folks to own their own driverless car. They'll just use one at the touch of a button whenever they need to. When renting becomes THAT cheap, ownership makes less sense (for most people, at least).

My $0.02


> If machines fail at complex tasks but fail safely then that's not an argument against self-driving cars being safe, it's only one about their limited usefulness.

They don't, not at the moment. The software detects that the situation is outside its capability space and says "Jesus, take the wheel!"

For all practical intents and purposes, the only reasonable way to interpret the current self-driving car statistics is to treat every single human intervention as a would-be accident. That rate is somewhere in the 1 per 10,000 km ballpark. That is, for now, orders of magnitude worse than human drivers.
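
A rough back-of-the-envelope illustration of that gap (a minimal sketch in Python; the 1 per 10,000 km intervention rate is the ballpark figure above, and the 1 accident per 500K miles human figure is the one quoted earlier in this thread, both assumptions rather than measured data):

    # Compare the assumed intervention rate to the assumed human accident rate.
    KM_PER_MILE = 1.609344

    intervention_km = 10_000                    # ~1 intervention per 10,000 km (ballpark)
    human_accident_km = 500_000 * KM_PER_MILE   # ~1 accident per 500K miles (ballpark)

    ratio = human_accident_km / intervention_km
    print(f"Humans go roughly {ratio:.0f}x farther between accidents "
          f"than these systems go between interventions.")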


>Someday, all cars from a particular brand will be made to crash during rush hour.

This is also why it's always very, very wrong to compare potential faults of automated cars to humans, as in "the automated car is X percent safer!", because it ignores the fact that mistakes in automated systems, at least as they are built now, are highly correlated.

If one bug in an ML system rolled out to an entire fleet causes an unknown weather condition to lead to fatal crashes, you may create mass carnage.

Human driver errors are not correlated like this, which makes them much more robust as an ecosystem.


>Through adoption of autonomous vehicles, many predict we will drastically cut the number of fatalities.

Who are these many people, and why should we believe their predictions?

> We’ve proven to be perennially distracted, we have terrible reaction times, we have extremely narrow vision, we panic in situations instead of remaining calm, etc. and yes, these faults do lead to the deaths of children.

We've also proven that all software has bugs, and developers keep introducing new bugs in every single release. There is no reason to think that self-driving car software will be any different. What's worse is that when software is updated, these bugs will be pushed out to tens of thousands of cars instantly.

Bit much to call someone's position nonsense when they're just skeptical of obvious stuff :)


>And the systems fail in ways that a human wouldn’t which makes it more dangerous

Not necessarily, because humans fail in ways that Tesla autopilot wouldn't.

1.5 million people are killed in road accidents every year. 54 million people are injured.


> It is a bit of a shame that a tech that could save a million lives a year globally won't be deployed because it can't be made perfectly safe

This is a bit of a generous assumption. It very well could turn out that self-driving cars are worse, or no better, than human drivers.


>>with safety statistics better than humans

The problem is that will not be the expectation. It will need to be perfect or near perfect.

Full self-driving cars can drive a million miles with no accident, but if on mile 1,000,001 there is an accident with a death, it is all over the news, with people proclaiming the technology terrible, unsafe, and not ready. In that same 1,000,001 miles, humans have caused far more damage and death, but that is just "normal", so...
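
For a rough sense of the comparison being made (a minimal sketch; the ~1 accident per 500K miles human figure is the one quoted earlier in this thread, an assumption rather than measured data):

    # Expected human-caused accidents over the same 1,000,001 miles,
    # using the ~1 accident per 500K miles figure quoted earlier in the thread.
    miles_driven = 1_000_001
    human_accident_interval_miles = 500_000

    expected_human_accidents = miles_driven / human_accident_interval_miles
    print(f"Expected human accidents over {miles_driven:,} miles: ~{expected_human_accidents:.0f}")  # ~2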


> Or it can drive over a toddler or pin someone against a wall until they die.

There might be situations where this could happen, but humans have such a bad track record when it comes to driving/parking/paying attention in general that I'd trust a good autopilot over the average person. There will be bugs, but at least we're able to work on them. Improving human driving skills seems unfeasible with any other current technology.


>Why do you assume that drivers have to learn from each other's mistakes?

Because that's what computers do: we can program them to avoid a mistake once we know about it, and then ALL cars will never make that mistake again. The same isn't true with humans: they keep making the same stupid mistakes.

>A drunk driver learning from his own mistakes is already significantly ahead of what a self driving car does which potentially just repeats the same mistake over and over again

Why do you think this? You're assuming the car's software will never be updated, which is completely nonsensical.

>10 self driving cars all doing the same mistake at the same time will cause even more damage than just a single one.

Only in the short term. As soon as they're updated to avoid that mistake, it never happens again.


> Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.

I suspect the unfortunate reality is that people die on the journey to improvement. Once we decide this technology has passed a reasonable threshold, it becomes a case of continual improvement. And as horrible as this sounds, this is not new. Consider how much safer cars are than 20 years ago and how many lives modern safety features save. Or go for a drive in a 50-year-old sports car and see how safe you feel. And in 50 years, we'll look back at today's systems in the same way we look at the days of no seat belts.


> If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead

Not dead, which I feel is important to point out. Involved in an incident, possibly a collision or loss of lane, but really it's quite hard to get dead in modern cars. A quick and dirty Google shows about 30,000 deaths and five and a half million crashes annually in the US; that's roughly half a percent.

So in your hypothetical the computer drives 99% of the time, and of the 1% fuckups, less than 1% are fatal.
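
A quick sanity check of that arithmetic (a minimal sketch; the 30,000 deaths and 5.5 million crashes are the rough figures from the quick Google above, not exact data):

    # Roughly what fraction of US crashes are fatal, per the ballpark figures above?
    deaths_per_year = 30_000
    crashes_per_year = 5_500_000

    fatal_fraction = deaths_per_year / crashes_per_year
    print(f"Share of crashes that are fatal: {fatal_fraction:.2%}")  # ~0.55%

    # Crude combination with the hypothetical: humans only drive during the 1% of
    # handoffs, and only ~0.55% of the resulting incidents would be fatal.
    print(f"1% x {fatal_fraction:.2%} = {0.01 * fatal_fraction:.4%}")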


> Would you still want one if the data showed it was more dangerous than driving yourself around? We don't yet know if it will be

While we might not know yet whether self-driving cars are /already/ safer than human drivers, it is virtually impossible that they won't become much safer very quickly. Simply because bugs in software can (and will) be fixed, each car gets an update, and the accident in question won't happen again, ever. Humans, on the other hand, cannot be updated easily, if at all.


> The real question is if society can handle the unfairness that is death by random software error vs. death by negligent driving.

Most people have a greater fear of flying than of driving, although statistically you're far more at risk in a car. One cause of that fear of flying is loss of control; you have to accept placing your life in someone else's hands.

With self-driving cars, I suspect lack of control will also be a problem. Either we need to provide passengers with some vestige of control to keep them busy, or we just wait a generation until people get used to it.


> Humans kill over 100 people per day in traffic accidents in the US alone - I don’t think no-fail (0 deaths) is a reasonable requirement, just being safer than humans is enough.

This is a fallacy. People don't just look at safety statistics. The actual manner/setting in which something can kill you matters a ton too. There's a huge difference between a computer that crashes in even the most benign situations and one that only crashes in truly difficult situations, even if the first one's crashes aren't any more frequent than the second's.

Hypothetical example: you've stopped behind a red light and your car randomly starts moving, and you get hit by another car. Turns out your car has a habit of glitching like this every 165,000 miles or so (which seems to be the average accident rate) for no apparent reason. Would you be fine with this car? I don't think most people would even come close to finding this acceptable. They want to be safe when they haven't done anything dangerous; they want to be in reasonable control of their destiny.

P.S. people are also more forgiving of each other than of computers. E.g. a 1-second reaction time can be pretty acceptable for a human, but if the computer I'd trusted with my life had that kind of reaction time, it would get defenestrated pretty quickly.
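
To put those two numbers in perspective (a minimal sketch; the ~13,500 average miles driven per year and the 70 mph highway speed are my own assumptions, not figures from the comment):

    # How often would a once-per-165,000-miles glitch hit an average driver,
    # and how far does a car travel during a 1-second reaction delay?
    glitch_interval_miles = 165_000
    avg_miles_per_year = 13_500          # assumed average annual mileage

    years_between_glitches = glitch_interval_miles / avg_miles_per_year
    print(f"One random glitch roughly every {years_between_glitches:.0f} years of driving")  # ~12 years

    speed_mph = 70                       # assumed highway speed
    feet_per_second = speed_mph * 5_280 / 3_600
    print(f"At {speed_mph} mph, a 1-second reaction delay covers ~{feet_per_second:.0f} feet")  # ~103 ft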


> When it works most of the time, it lulls you into a false sense of security, and then when it fails, you aren't prepared and you die.

That still doesn't _necessarily_ imply that 'partially self-driving cars' are worse than actually existing humans. Really, anything that's (statistically) better than humans is better, right?

I don't think it's reasonable to think that even 'perfect' self-driving cars would result in literally zero accidents (or even fatalities).


> I don't understand how something this broken is allowed to operate on public roads.

Also out in a rural area. While I was running out to pick up lunch a few minutes ago, a young man flipped their old pickup truck onto its side in an intersection, having hit the median for some reason. I too don't understand how humans are allowed to operate on public roads. Most of them are terrible at it. About 35k people a year die in motor vehicle incidents [1], and millions more are injured [2]. The total number of deaths while Tesla Autopilot was active is 7 [3].

I believe the argument is that the software will improve to eventually be as good as or better than humans, and I have a hard time not believing that, not because the software is good but because we are very bad in aggregate.

[1] https://www.iihs.org/topics/fatality-statistics/detail/state...

[2] https://www.cdc.gov/winnablebattles/report/motor.html

[3] https://www.tesladeaths.com/


> Software is only as good as the guy programming it.

I guess chess computers can never beat a human then. Because the software is only as good as the guy writing it.

> What happens when the software develops a bug? Be it in the middle of rush hour traffic, or hurtling down the highway at 70mph?

What happens when a human is inexperienced/texting/drunk/sleepy? Humans have plenty of "bugs" but you don't notice them.

> For all of the great things we know self driving cars can potentially bring to the table, how do we account for the issue of buggy software?

Trials. Testing. Every accident (which is inevitable) will be scrutinized and used to improve the system (unlike the 40k annual deaths in the US, where that unfortunate experience helps nobody, except perhaps prompting the installation of a traffic light or barrier).


> How many people are killed every day by human drivers? Would it be better if all motor vehicles switched to autonomous and the death by automobile accident rate drastically lowered, but was still greater than zero?

The problem with this is the assumption that autonomous vehicles are good enough that switching everyone over would even be a net win. Accidents are already rare per mile, so evaluating safety is hard. And the makers of the autonomous vehicles are incentivized to fudge their data, as it's potentially a massive market.

And that's to say nothing of the fact that there's no clear quality control over what counts as autonomous. As an extreme example, a shitty Arduino app hooked up to servos and a single webcam could count, and it almost certainly would not be safer than humans.

