That's a big if. And as discussed recently on HN, it can't just be safer than the average human driver; it has to be safer than good human drivers. That is a pretty tall order.
As long as it's statistically better than a human driver, it's all good. Human drivers fail all the time, resulting in mass casualties in every single country.
I'm sure it will eventually be safer than human drivers once the technology is there, but for a time there are going to be deaths that could've been prevented had the driver not overestimated the autopilot's abilities.
But that surely describes only the worst 1% of drivers. It shouldn't just be better than that worst 1%; it should be demonstrably safer than the average human, and by a good margin.
It still could be safer than most human drivers. We only have a few examples of their AI misbehaving. If it consistently crashed into the barriers, we would have thousands of deaths per day.
But I think the model has already been proven out with many millions of miles driven and very few accidents. The tech is here. It isn't mainstream, but it is here and it is better than humans in most conditions.
I mean this is what I'm getting at: if, by important measures like preventing deaths, injuries, and damage, it's statistically better than humans (and there's a whole discussion on how you collect and evaluate that data), even if the behavior is weirdly unlike a human driver, is that success? Is that good enough?
I'm not asserting it even, just spitballing because this is fascinating to me.
So, it just needs to be (significantly) better than the average human. I think it's even better than that, because each time an unexpected crash happens, every car can learn from it. We can't even teach people not to drive drunk.
Is it that scary to not know how and when it can crash? You don't think about being blindsided at each intersection...
You're right. It doesn't need to be perfect. It only needs to be better than humans on average. (How much better it needs to be is debatable, people react differently to accidents caused by machines than accidents caused by humans, but that's a different discussion.)
So this line in the report should probably concern you:
> FSD Beta v10 committed one likely collision every 36:25, or 475 per year. This is 8,506 times higher than the average accident rate for human drivers used by the auto insurance industry, which is one accident every 17.9 years.
I don't know if it's fair to compare what they consider a "likely collision" to what the insurance industry considers an "accident". Maybe the analysis is bogus on those grounds. But your statement isn't an argument, since the analysis itself doesn't expect self-driving to be perfect.
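For what it's worth, the 8,506x figure does roughly follow from the two numbers in that quote, assuming both are annualized the same way. A quick back-of-the-envelope check (not an endorsement of the underlying data; the small gap is presumably rounding in the report):

    # Sanity check of the report's 8,506x claim, taking its figures at face value:
    # 475 likely collisions per year for FSD Beta v10, vs. one insured accident
    # every 17.9 years for the average human driver.
    fsd_collisions_per_year = 475
    human_accidents_per_year = 1 / 17.9  # ~0.056 accidents per year

    ratio = fsd_collisions_per_year / human_accidents_per_year
    print(round(ratio))  # ~8502, close to the ~8,506x the report claims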
Although it also needs to account for when it decides to hand back control. It's all well and good saying it's 'safer' because it makes fewer mistakes than a human driver would over the same number of miles, but if it can only be activated in locations/conditions that are considerably easier to handle, then the two are somewhat incomparable.
I absolutely disagree. It can't just be better than humans; it needs to be flawless. Any incident where people get killed will delay public acceptance of the technology by decades, even if "statistically it's safer than humans". People don't give a damn about statistics; they give a damn about tabloids shouting "self driving cars kill another innocent person!!!". We literally can't afford that.
It works most of the time, but the issue is that "most of the time" is not good enough for these systems. Even if the failure rate is <1%, that may still add up to a lot of accidents and deaths at scale.
People often argue that "oh, it will still be fewer accidents than human drivers", which is true, but the problem is that human accuracy is a very poor benchmark for autonomous systems. Autonomous systems need to be held to a higher bar, and it's better if that accountability and expectation is established from the beginning.
It could be safer than human drivers and still cause quite a few deaths. We can't evaluate such statistics from one accident. We don't even know if this particular accident would have been avoided by a typical human driver (there was apparently one behind the wheel of the vehicle).
well, that’s not really the right question; given enough time, they’re always going to be safer than human drivers
the question is at what point we decide that it’s “good enough” to be a life saving compromise
humans are really terrible drivers compared to sensors. the number of inputs, the precision, and the speed of decision making are all far beyond what a human can manage