
>That being said, I still want this tech on the road ASAP, not because it's perfect, but because human drivers are So. Fucking. Awful.

This narrative is so pervasive, and I must admit a bit strange to me. Millions and millions of people strap themselves into thousands of pounds of metal and drive 80 down the highway while texting, changing the radio, and telling their kids to please stop fighting, surrounded by other people doing the exact same thing. We do that every single day, and almost all of us are fine and will never have a 'major' accident. Humans are pretty fucking great at driving.

Besides, to say that humans are bad (or good) at driving would require some comparison: bad compared to what? As far as we know, humans are the best drivers in the universe. There's no evidence that self-driving is currently better than human drivers, no evidence that it will soon be better, and no evidence that if we allow tons of self-driving cars on the roads they will become better than humans. It's just blind hope. It really makes me want a word for whatever the opposite of a Luddite is: instead of believing we can destroy machines to benefit people, we believe we can destroy people to benefit machines. It's all twisted up.

If you want to be an Uber or Tesla crash test dummy and walk/bike across a test track while they see if their vehicle can avoid you then by all means go for it. But don't just foist that upon average people.




> Plus, the additional sensors available and the software response time mean autonomous drivers can outperform even sober, attentive, skilled human drivers in most situations.

Self-driving cars are a hard problem, a very hard problem. So are CAPTCHAs, and machines are still not that great at solving them.

The problem space of autonomous vehicles is a large one, and even with how many mistakes human drivers make, I still don't think we're anywhere near the tech to make safe autonomous vehicles. We're, at a minimum, 15 years off.

I want you to think about all the technology we use every day. Our phones, desktops, and laptops are full of bugs. Hell, I found bugs in the ticketing system I buy metro cards from just the other day. We see display kiosks crash all the time.

A Tesla with lane assistance (please stop calling it Autopilot) recently drove into a barrier. Another kept driving for hours without hands on the wheel (since fixed in an update; was that driver passed out? No way to know).

Now you might say this is different, that the engineers working on self-driving tech are at the level of the ones who build bridges, write pacemaker software (some of which, by the way, was recently found to have a Bluetooth vulnerability), or design plane avionics.

Maybe, but Uber certainly isn't. Their culture is a scummy shit show. This isn't a small thing. It's a huge, huge problem space that no one completely understands yet. Why aren't there any self-driving cars on the streets of NYC or Chicago? Because it's difficult and dangerous, and if there were, people would be dead right now.

For all the money we're investing in self-driving cars, we could build tech that already works. Singapore and London both have fully autonomous trains, some arriving at 30-second intervals and carrying millions of people a day. America needs to get its basic infrastructure in order before we head down this fantasy road of self-driving cars.

I wrote more about this a while back:

https://penguindreams.org/blog/self-driving-cars-will-not-so...


> I feel self driving cars need to do everything humans do and then 10x better to succeed

I hope that's not the case. Even if self-driving cars are only as good as the median human driver, a universal rollout of that tech would save countless lives. I really hope that when the first self-driving fatality happens the media or public fear doesn't blow it out of proportion.


> This is why I am afraid of the hype behind self-driving vehicles.

Well, they don't have to be absolutely correct. Just a lot more correct than humans across a broad range of driving scenarios to justify replacing human drivers.


> see self driving cars run into solid and stationary objects, but human drivers do that all the time too.

Distracted human drivers do that all the time. AP is supposed to avoid that; it's supposed to be alert all the time. But when it sees a stationary object, the result of their algorithm is "it must be a sign we can somehow go through."

We need to make it very clear that self-driving cars are better than driving drunk or after 24+ hours without sleep, but that if you're a driver who pays attention, you shouldn't use this tech.

And it's not that I don't want the tech to take over the world. I wish I could just put my kids in a self-driving car and have it take them to the school that is 4 miles from home. But we are nowhere close to that, even with me in the driver's seat, if I only have seconds to take over before I end up in a ditch or worse.


> That is kind of deal breaker for me, when talking about 1+ ton metal box moving me around I will take "slow to market but not blind" instead.

The way Elon Musk tells it, they put it into production once it was safer per mile than human drivers. Which seems like a legitimate point.

You use something when it's better than what you would have to use in the alternative, not when it's 100% perfect with no possibility of ever making a mistake under any circumstances. Which is probably not even actually possible to do.

The problem is that when human drivers kill more than 30,000 people in a year, it's not news, because it's been that way forever. But one autonomous car kills one person and it's the top story.
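For what it's worth, "safer per mile" is at least checkable whenever the miles and the deaths actually get published. A rough sketch of that comparison in Python, using a ballpark US human baseline and made-up placeholder numbers for the autonomous fleet (not real data):

    # Normalize fatalities to a common denominator of vehicle-miles.
    def per_100m_miles(deaths, miles):
        return deaths / miles * 100_000_000

    # Rough US baseline: ~37,000 deaths over ~3.2 trillion miles (~1.2 per 100M).
    human_rate = per_100m_miles(37_000, 3.2e12)

    # Placeholder autonomous-fleet numbers, purely illustrative.
    av_rate = per_100m_miles(1, 3_000_000)

    print(f"human: {human_rate:.1f} deaths per 100M miles")
    print(f"AV:    {av_rate:.1f} deaths per 100M miles (placeholder)")

The catch is the denominator: a fleet with only a few million miles on record can look wildly better or worse than the baseline on the strength of a single event.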


> That being said, I still want this tech on the road ASAP, not because it's perfect, but because human drivers are So. Fucking. Awful.

And yet, so far, Uber's self-driving vehicles have a higher fatality rate than the status quo.

Sure, it could be improved over time, but the whole point of TFA is that Uber is currently not doing the requisite diligence and testing with their vehicles to ensure safety.


> I find it ironic that the human driver is still at large

Ironic but completely expected. We have normalized and accepted the awfulness of human drivers to a degree that people just don't comprehend. Humans would be terrible drivers even if you ignored the fact that many of them are chemically impaired, and even if you ignored the fact that they are prone to rage and aggression.

As we consider the safety of self-driving vehicles, there is going to be a disorienting amount of cognitive dissonance as we are forced to confront the awfulness of what we already accepted and have been living with for a century. There were over 42,000 deaths due to motor vehicle accidents in 2022. That means if we created self-driving cars that were twice as safe as humans, they might save 21,000 lives per year and also kill 21,000 people per year. That sounds insane, but it would be a sane way of improving an insane situation.

Kibitzing about these individual incidents is a normal and inevitable human way to try to deal with the problem, but we need some way of measuring how deadly self-driving cars are in comparison to how deadly human drivers are. I never see that, so I wonder how we're going to know when we should loosen the reins on them? Are regulators actually doing their jobs, or are they just going to move inexorably forward while occasionally throwing bones to public outrage?

The stakes here are huge: tens of thousands of deaths and horrific injuries per year. We will unnecessarily kill a lot of people if we deploy self-driving technology either too quickly or too slowly.

The way this will probably work out in practice is regulators choosing the path of least resistance between uninformed public outrage and greed-driven industry pressure. What we should be asking for is a data-driven approach.
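A data-driven approach would at least put a number on "enough evidence". Here is a back-of-the-envelope version in Python: treat fatalities as a Poisson process and ask how many fatality-free miles a fleet would need before the 95% upper bound on its rate (roughly 3/miles, the "rule of three") drops below a ballpark human baseline. The baseline figure is an assumption, and this ignores injury severity, road mix, and so on:

    # Ballpark US baseline: ~1.2 fatalities per 100 million vehicle-miles.
    human_rate = 1.2 / 100_000_000

    # With zero fatalities observed over m miles, the 95% upper bound on the
    # true rate is roughly 3 / m ("rule of three"). We want that bound to be
    # below the human baseline, i.e. 3 / m < human_rate.
    miles_needed = 3 / human_rate
    print(f"~{miles_needed / 1e6:.0f} million fatality-free miles")  # ~250M

That is hundreds of millions of miles before the statistics alone favor the fleet, which is exactly why kibitzing over individual incidents tells us so little either way.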


> My gut feeling is we're perfectly happy with self-driving cars that are even substantially worse than human-piloted ones, so long as they crash the way humans do. Rear ending in the dark, okay. Barreling full speed into a highway divider [1] or dragging pedestrians to get out of traffic [2], not.

No. I’m very much not OK with this. Many human drivers are terrible. Most collisions are entirely avoidable, and I certainly don’t want more of the same from computers. If my life gets put at risk even more often by incompetent driverless cars, I’m going to be pissed! Maybe people who spend their time in cars agree with you, but as a cyclist and pedestrian, the common accident modes are the ones that pose the greatest risk to me. Pinball off the jersey barriers all you want, but stop passing bikes too close and plowing through crosswalks just because you can get away with it.

I’ve been hit twice on my bike, both times 100% the driver’s fault. The stakes are life and limb for me, so I absolutely do not accept a higher accident rate.


>Most of us are going to expect nothing short of perfection from these machines to really trust them.

Now, expecting perfection, and especially giving up 'better than what we have' while holding out for perfection, is a bad thing. But I do think it's completely reasonable to expect self-driving cars to be way better than the average human, even better than the best humans, just because we lose so many people to auto accidents every year.

I think it's completely reasonable to want to reduce the risks associated with transport, and I think the only politically possible way to do this is with self-driving vehicles, because I don't think it's politically possible to remove the worst half of drivers from the road without offering them similar levels of mobility.


> Has anyone considered the possibility that people won't want to trust their lives to self driving cars?

Has anyone considered that the average driver sucks? And that 50% of the drivers are worse than that?

Self-driving cars have a very low bar to clear (better than average human), and they will probably clear that fairly soon given just how bad normal drivers are.

Once self-driving cars are deployed, it's simply a matter of crunching the data on more and more unusual situations.


>I think it's safe to assume that this will drastically reduce driving related injuries and deaths.

This assumes that self-driving tech will continue to increase in competence and will at some point surpass humans. I find that extremely optimistic, bordering on naive.

Consider something like OCR or object recognition alone, where similar tech is applied. Even with decades of research behind it, it really cannot come anywhere close to a human in terms of reliability. I am talking about stuff that can be trained endlessly without any sort of risk. Still, it does not show ever-increasing capability.

Now, machine learning and AI are only part of the picture. The other part is the sensors, which again are nowhere near the sensors a human is equipped with.

What we have seen in the tech industry in recent years is that people's trust in a technology, even among intelligent people such as those investing in it, is not based on logic (Theranos, uBeam, etc.). I think such a climate is exactly what is enabling tests like these. But unlike the others, these tests are putting unsuspecting lives on the line. And that should not be allowed.


> In their current state self driving cars would kill far more people than humans if all cars used self driving 100% of the time, and no one knows if we will ever move past that.

Really? No one knows if we will ever improve on the current self-driving technology? I'm going to stick my neck out and say, "Yes, I know we will move past that."


> we unconsciously demand AI should be better than average human performance.

Yeah I agree it's an unfair demand.

Especially given how much more powerful human brains are than computers, we should perhaps be having a go at humans for not trying hard enough.

Computer wins at things like Go and chess have been downplayed because humans 'only' learned that stuff 100,000 years ago.

Personally, I think that for the moment driverless tech works better as a passive system that augments humans, rather than the dodgy crossover that is Autopilot. Car AIs can be trained to deal with extreme circumstances by running crash simulations millions of times over, and then they're capable of taking over if the driver ever becomes unwell or hits black ice.

But this is all temporary; as soon as their vision systems match humans', they will only ever improve over what we have. This Stanford self-driving car sliding through four perfect donuts is amazing [0].

[0]: https://youtu.be/LDprUza7yT4?t=31m38s


> When it works most of the time, it lulls you into a false sense of security, and then when it fails, you aren't prepared and you die.

That still doesn't _necessarily_ imply that 'partially self-driving cars' are worse than actually existing humans. Really, anything that's (statistically) better than humans is better, right?

I don't think it's reasonable to think that even 'perfect' self-driving cars would result in literally zero accidents (or even fatalities).


> The thing self driving cars need to avoid is killing people in broad daylight for no discernable reason

This, I think, is the thing that people miss when they say "self-driving cars don't need to be perfect, they just need to be better than human-drivers, who aren't actually all that great".

From a public confidence perspective, it doesn't matter if a self-driving car crashes one-tenth or even one-hundredth as often as human drivers; as soon as you see a self-driving car kill someone in a situation that a human driver obviously would have avoided (like the adversarial-image kind of scenario), you've totally destroyed any and all confidence in this car's driving ability, because "I would never, ever have crashed there."


> I think I get less interested in self-driving cars the closer we seem to get to them. It honestly just stresses me out.

That's probably a reasonable response. These things have the nasty feature that the more sophisticated they get, the less input they require from the driver, but also the more _attention_ they demand (a boring old cruise control system won't abruptly swerve into a concrete road divider, but a high Level 2 system will). And humans just don't work like this. "Don't do anything, but be ready to take control within a second or so when it does something insane" is not something we're good at.


> It is a bit of a shame that a tech that could save a million lives a year globally won't be deployed because it can't be made perfectly safe

This is a bit of a generous assumption. It very well could turn out that self-driving cars are worse, or no better, than human drivers.


> The Machine acted in a way that was not expected by the human driver.

This is why I've been saying it's not enough for a self-driving car to just be better than the "average human driver". It needs to be way better, maybe 10x better. These cars are likely not going to drive the way we expect a human to drive in many situations, which could end up causing more accidents due to this conflict with how humans actually drive, even if the car is "technically" as safe as an average human driver.


>Sorry, what is this based on?

Conversations I've had where people have told me that self-driving cars will need to be 100% perfect before they should be used. Ironically, one of those people was an ex-gf of mine who caused two car accidents because she was putting on makeup while driving.

Anyway, based on Google's extensive test results, I'm pretty sure self-driving cars are already advanced enough to be less of a risk than humans. Right now, the sensors seem to be the limiting factor.

