> When Tesla or any other company can prove to me that their cars are safer than human-driven vehicles, I won't oppose them on my roads.
How can they be expected to prove their safety record to you without driving hundreds of millions of miles on public roads? They're already provably safer than humans in artificial "test course" scenarios.
> they'll be objectively better at X than humans, without any evidence to backup the assertion.
Google's road-tested self-driving car is already safer[1] than a human driver. Suggesting that a computer will be a more reliable processor of data and mathematics than a human is not something that needs data to back it up. The ability of drivers is so variable, and in the aggregate there are so many of them, that it's almost self-evident that a self-driving car which crosses a very low threshold for entry ("being road legal") will be better than a human.
> They can't possibly account for every situation or scenario - but people hand-wave and say it magically will.
Nobody is saying that they will, any more than people argue that autopilot on a plane will. It's very plain to see that right now, as of this second, there is a self-driving car which is safer than a human driver. It is not yet legal to buy, but that doesn't change the fact that it's safer. It may be that a bug crops up which kills a few people. But that doesn't make it less safe; it makes the cause of death for some of the users different from "human error".
> But what if self-driving cars were 10x safer than human-driven cars?
But there is no evidence of this, or that they're even as safe as human drivers. In fact, Teslas are involved in more accidents than other luxury cars.
I've had conversations where people told me that self-driving cars will need to be 100% perfect before they should be used. Ironically, one of those people was an ex-girlfriend of mine who caused two car accidents because she was putting on makeup while driving.
Anyway, based on Google's extensive test results, I'm pretty sure self-driving cars are already advanced enough to be less of a risk than humans. Right now, the sensors seem to be the limiting factor.
> There is no data to make this assertion about computer driven vehicles.
Today that is true. I am speculating when I say that autonomous vehicles will be safer than human drivers. There is enough effort going into developing them that we will soon have that data, though.
Google's data is already showing promise that the autonomous car is likely to be safer, but they don't have enough miles for the result to be statistically significant.
> There is little basis to declare human drivers 'fundamentally unsafe' given the proportion of accidents pales in comparison to the gigantic volume of vehicles on roads worldwide over varied road and traffic conditions.
10.6 deaths per 100,000 people per year is enough for me to call human drivers fundamentally unsafe (though cancer and other medical conditions are bigger killers).
Since humans cannot be equipped with the sensors a computer can (e.g. eyes in the back of your head), it isn't fair to compare the two as if they were operating with the same information.
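To put that 10.6-per-100k figure in perspective, here's a back-of-the-envelope scale check (rough numbers; the population figure is approximate):

```python
# Rough scale check on "10.6 deaths per 100,000 people per year" (US numbers)
deaths_per_100k = 10.6
us_population = 325e6  # approximate
annual_deaths = deaths_per_100k / 100_000 * us_population
print(f"{annual_deaths:,.0f} road deaths per year")  # ~34,000
```

That's tens of thousands of deaths per year in the US alone, which is what "fundamentally unsafe" means to me.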
> I think the problem we have with self-driving cars is more social than technological at this point.
> it's a hypothetical so give me some leeway on this!
IMO you should not base (and broadcast) your opinions about safety on hypothetical statistics. I don't even believe it's true that overall statistics show self-driving is safer than humans; IIRC prior reports showed that companies were cherry-picking their safety statistics.
> And for the entire fleet of autonomous cars, the system does not need to be safe. It just needs to be better than humans driving.
I really hope this is true. I really want self-driving cars everywhere, now. Road traffic accidents kill an enormous number of people, and this is an example of Google using money and smarts to do good.
But looking at the way people deal with risk makes me wary.
People drive cars all the time even though driving is a bit risky. People don't really understand how good or bad their own driving is. There's a bunch of cognitive biases and rationalisations.
I hope people who are experts in communications are ready to dispel the FUD backlash against self driving cars.
>This is why self-driving cars still elude us, but will be so much safer than human drivers when they finally do arrive.
Isn't that just true by definition?
My Tesla can already drive itself, but I don't trust it and I don't think it's safe. When I get one that I do trust and that is safe, it will be safer than me, and by extension other humans.
The problem is that the current generation of algorithms are brittle.
>> At some point data will win, right? Have you worked out in your head when that will be?
According to a RAND Corporation study I quote in another comment, it will take hundreds of millions or even billions of miles, and decades or centuries, before we can know self-driving cars are safe simply from observation:
"Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use."
That's statistics, btw, not fear. Data, like you say, will "win" at some point, but that point is far in the future. Until then we have a few thousand systems whose safety we can't measure, yet which are available commercially (and sold under the pretense of improving safety).
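To make that concrete, here's a rough sketch of the kind of calculation behind those numbers, assuming a simple Poisson model and the statistical "rule of three" (the baseline rate and the fleet parameters are illustrative assumptions, not the study's exact inputs):

```python
# "Rule of three": after N event-free miles, the 95% upper confidence
# bound on the event rate is roughly 3 / N (assuming a Poisson model).
human_fatality_rate = 1.09e-8  # assumed: ~1.09 deaths per 100 million vehicle miles

# Fatality-free miles needed just to bound the rate at the human level:
miles_needed = 3 / human_fatality_rate
print(f"{miles_needed:.3g} miles")  # ~2.75e+08, i.e. ~275 million miles

# With a hypothetical 100-car test fleet driving 24/7 at an average 25 mph:
fleet_miles_per_year = 100 * 25 * 24 * 365
print(f"{miles_needed / fleet_miles_per_year:.1f} years")  # ~12.6 years
```

And that only bounds the rate at parity with humans; demonstrating that the cars are meaningfully better pushes the requirement into the billions of miles, which is where the "decades or centuries" comes from.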
Note also that before Tesla started selling cars with "autopilot" there was no way to know how safe it would be, in advance. That is not the behaviour of a company that cares at all about safety, other than as a marketing term.
> On the subject of danger. Literally everything we do is dangerous to some degree.
I don't disagree with your take and I'm a self driving car proponent, but I'm worried about what process we take to get there.
One thing I've taken away from the pandemic is that people seem to have no problem imposing their tolerance for risk on others. Seems like we are on a path to play this dynamic out again in how self-driving cars come to market unless that safety profile is really well controlled and understandable.
Even if, at the population level, self-driving is statistically slightly safer than human driving, there are enough edge cases to give me pause right now, and at the individual level it may raise my risk as either a pedestrian or a driver, and it certainly changes what counts as predictable behavior [1].
>> There are already videos of assisted-driving cars avoiding accidents
I'm not concerned about assisted driving. I'm concerned about fully autonomous cars being handed complete control in real-world situations.
I'm not just "concerned" about them: they scare the hell out of me.
>> Many being scenarios that Google's self-driving cars have the technology to avoid.
What Google says it programs its cars to do, what it really programs its cars to do and what its cars can really do are all separate things.
The problem is that the current technology level is nowhere near advanced enough to allow fully autonomous vehicles to operate safely. There is a huge number of situations that those cars aren't programmed for, and that they can't be programmed for, because those situations are completely unforeseeable.
Machine learning has a huge problem with data sparseness. You may train a learner on petabytes and petabytes of data collected from the real world and still miss the vast majority of events that may occur.
That is why, like I say elsewhere, machine learning-based AI makes utterly ludicrous mistakes that humans would never make, even in difficult situations where they can't be expected to perform with 0% error. I've used a few metaphors; here's another one: a human would never mistake a truck for a cucumber. A machine learning algorithm might.
And what's worse, there's no way to prevent this sort of mistake, or even correct it, because most of the time the models built by such algorithms are simply too complex to be processed by humans in the way a hand-crafted system would be (and goddess knows how hard those can be to process).
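Here's a deliberately contrived toy (nothing like a real perception stack, and the features are made up) that shows the failure mode: a model trained on sparse data will still answer confidently far outside anything it has ever seen.

```python
import math

# Hypothetical features: (length in metres, height in metres)
training_data = [
    ((0.25, 0.05), "cucumber"),
    ((0.30, 0.06), "cucumber"),
    ((12.0, 3.8),  "truck"),
    ((14.5, 4.0),  "truck"),
]

def classify(x):
    # 1-nearest-neighbour: always returns *some* label, with no notion
    # of how far x is from anything it was trained on.
    return min(training_data, key=lambda t: math.dist(x, t[0]))[1]

# A 4-metre box trailer: nothing remotely like it in the training set,
# yet the classifier answers anyway.
print(classify((4.0, 2.5)))  # -> "cucumber"
```

The point isn't the toy model; it's that nothing in the learned decision rule knows it is extrapolating.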
> For self-driving cars to be safer than human drivers, there is no requirement that the self-driving cars should be better/safer than the best human driver... the self-driving car simply needs to be safer than the majority of humans.
That is true on the whole, but not true for ME. It needs to be safer than ME, not some hypothetical average person.
Further compounding it:
> For driving skills, 93% of the U.S. sample and 69% of the Swedish sample put themselves in the top 50% [0]
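There's also a statistical wrinkle here: "safer than average" is not the same as "safer than most drivers". A toy simulation, assuming (purely for illustration) that per-driver crash rates follow a skewed lognormal distribution, i.e. a small minority of bad drivers causes a disproportionate share of crashes:

```python
import random

random.seed(0)
# Assumed, illustrative distribution of per-driver crash rates
rates = sorted(random.lognormvariate(0.0, 1.0) for _ in range(100_000))

mean = sum(rates) / len(rates)
median = rates[len(rates) // 2]
share_safer_than_mean = sum(r < mean for r in rates) / len(rates)

print(f"mean {mean:.2f}, median {median:.2f}")            # mean ~1.65, median ~1.00
print(f"{share_safer_than_mean:.0%} beat the 'average'")  # ~69%
```

Under that assumption, a car that merely matches the average driver would still be riskier than roughly two thirds of individual drivers, which is exactly the "safer than ME" objection.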
> all self-driving cars have to do is be a little bit better for net safety to improve.
That's true abstractly but ignores several important real-world factors about the adoption of self-driving cars.
On the one hand, autonomous cars have to be a lot better than humans to prevent these sorts of PR trash-can fires, or they won't be given the opportunity to improve net safety.
On the other hand, people are so bad that we're liable to soon live in a world of autonomous cars, regardless of the effect on net safety.
I hope they can be made safe, because it's vital for the future of our car-obsessed culture. But I don't have as much faith as you.
> When it works most of the time, it lulls you into a false sense of security, and then when it fails, you aren't prepared and you die.
That still doesn't _necessarily_ imply that 'partially self-driving cars' are worse than actually existing humans. Really, anything that's (statistically) better than humans is better, right?
I don't think it's reasonable to think that even 'perfect' self-driving cars would result in literally zero accidents (or even fatalities).
>I'm personally more towards consequentialism and therefore are pro Tesla self driving, if per-mile it is statistically safer than a human driver.
For me to accept Tesla FSD, it should be several orders of magnitude better than a human driver. I don't want an 'average driver' driving me to work. In the US alone, drivers are involved in some 6 million car accidents per year.
Humans just don't believe in self-driving yet, and there's still the problem of liability assignment. Neither is a technology problem, imho, but both are human impediments to adoption.
>> If we can have self-driving vehicles that are twice as safe as the average human, then they should be allowed on the streets.
Absolutely disagree with this approach. Self driving cars can't be merely slightly better than your average human - they need to be completely flawless. Just like an airplane autopilot can't be just a bit better than a human pilot, it needs to be 100% reliable.
> E.g. a Tesla driving under a semi because it thought it was a billboard.
People don’t like what they can’t understand. Most of the accidents caused by autonomous vehicles are weird and probably would have been avoided by human drivers. However, this isn’t the whole picture, because those same vehicles also avoided a bunch of accidents that humans would have caused.
My interpretation of the data is that autonomous vehicles are already statistically safer than human drivers.
Problem is, humans are emotional beings who aren't always capable of pure rationality. If you tell someone they're statistically safer in a Tesla, but it might accelerate into a wall at 60 mph, many people won't take that risk.
You can define it however you want. By any common-sense definition of safety, Tesla has not proven that their autopilot is 'safer' than a human driver.
>10x or 100x the number of deaths would be small price to pay to get everyone in autonomous vehicles.
This supposes a couple of things: mainly that autonomous vehicles will become safer than human drivers, that you know roughly how many humans will have to die to achieve that, and that those humans have to die to achieve it. All three are unknown at this point, and even if you disagree about the first one (which I expect you might), you still have to grant me the second and third.
All data that I've seen so far supports the conclusion that self-driving cars are already much, much safer than humans.
http://bigthink.com/ideafeed/googles-self-driving-car-is-ridiculously-safe