Analysis shows Tesla FSD makes critical driving errors every 8 minutes (dawnproject.com)
2 points by d3mon | 2022-01-18 07:05:38 | 25 comments




I like how the article casually skips over the fact humans do a lot of the same stuff they're pointing out as failures.

Seen plenty of people turning when they shouldn't, using the shoulder to get to a stop sign to turn, going down one way roads the wrong way, etc.

It doesn't need to be perfect. It only needs to be better than humans on average.


You're right. It doesn't need to be perfect. It only needs to be better than humans on average. (How much better it needs to be is debatable; people react differently to accidents caused by machines than to accidents caused by humans, but that's a different discussion.)

So this line in the report should probably concern you:

> FSD Beta v10 committed one likely collision every 36:25, or 475 per year. This is 8,506 times higher than the average accident rate for human drivers used by the auto insurance industry, which is one accident every 17.9 years.

I don't know if it's fair to compare what they consider a "likely collision" to what the insurance industry considers an "accident". Maybe the analysis is bogus on those grounds. But your statement isn't an argument, since the analysis itself doesn't expect self-driving to be perfect.
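
For what it's worth, the headline multiplier follows directly from the two quoted figures. Here's a quick back-of-envelope check; the ~290 hours/year of driving it implies is my inference, not something stated in the excerpt:

    # Quick check of the quoted figures (my back-of-envelope, not the report's stated method).

    minutes_per_likely_collision = 36 + 25 / 60   # "one likely collision every 36:25"
    collisions_per_year = 475                     # quoted annual figure
    years_per_human_accident = 17.9               # quoted insurance-industry figure

    # Annual driving time implied by turning one collision per 36:25 into 475 per year:
    implied_hours_per_year = collisions_per_year * minutes_per_likely_collision / 60
    print(f"implied driving time: ~{implied_hours_per_year:.0f} hours/year")  # ~288

    # Multiplier vs. the human rate of one accident per 17.9 years:
    multiplier = collisions_per_year * years_per_human_accident
    print(f"multiplier: ~{multiplier:,.0f}x")  # ~8,500x, close to the quoted 8,506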


17.9 years seems like a long time. Is this only major two-vehicle collisions?

One year during a bad ice storm and stuck with a rental with no 4WD, I went into the ditch twice in one weekend. That's like half my lifetime allowance before I'm considered a bad driver, I guess.


Well, since this is what the insurance industry uses, presumably "accidents" causing no damage (or minor damage not reported to insurance) don't count.

> It doesn't need to be perfect. It only needs to be better than humans on average.

Sadly I don't think this will be true. The problem is that if a person causes an accident only that driver is at fault. If a self driving car causes an accident it follows that all similar cars would have behaved the same, so they are all considered to be at fault. That will only be mitigated if they drive better than humans.


The problem is that people aren't interested in saving lives, reducing the number of accidents, or reducing the severity of accidents.

They're interested in feeling comfortable with automated driving on the street next to them.

Statistical improvements over human averages in deaths, injuries, and frequency of accidents won't sway people generally. People aren't interested in rational reality, they go with their gut and their understanding of narratives.

Tesla needs to release thousands of boring videos demonstrating cars driving well to combat the saturation of error compilations. They need to establish the border between "scary robot driver" and "not perfect, but better than humans, preventing X deaths, Y severe injuries, and Z accidents overall for every million miles driven", and then create a McDonald's-style "N lives/limbs/dollars saved by driving assist and FSD."

Advertising gallons of fuel saved per Tesla mile driven and the improvement to air quality would help as well.

People keep moving the goalposts because their gut and the information they perceive tells them driving software is dangerous enough that humans are preferable.

They see media coverage pumped out by a system that follows the "if it bleeds, it leads" principle. People will watch criticisms and accident compilations at a ratio of hundreds or even thousands to one. The only way to combat that narrative is by communicating a better one.

They should provide sufficient information, honestly, so that arguing FSD or driving assist is less safe than humans driving is like arguing that McDonald's doesn't actually sell that many burgers.

Otherwise it's the sensational and negative coverage shaping public perception and discourse.

RTFS, or "Read The Fucking Statistics", should be the final say in these criticisms of AI driving systems. Otherwise actual improvements saving actual lives and limbs will be held back, because regulators will pander to the politically safe narrative and obstruct progress in the name of safety, because people are stupid: they'd rather feel comfortable than be safe.

This all, of course, depends on the numbers being in Tesla's favor, but I believe they are. Here's the Q4 2021 accident report:

>>In the 4th quarter, we recorded one crash for every 4.31 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.59 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.

That's better than average human performance. On highways, at speeds greater than 50 mph, a reduction in the number of accidents by 8 out of 9 would have a huge impact. 20k people died on the highways in 2021; up to 17,000 lives might have been saved by Autopilot or driving systems. Reducing the number of crashes and accidents by 8 out of 9 is phenomenal.
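
Rough arithmetic behind those two claims, in case anyone wants to check it. This is a naive sketch that deliberately ignores the confounders raised elsewhere in this thread (e.g. Autopilot being engaged mostly in easy highway conditions, and Teslas being newer cars):

    # Naive sketch of the "8 out of 9" and "up to 17,000 lives" figures above.

    miles_per_crash_autopilot = 4.31e6   # Tesla Q4 2021 figure quoted above
    miles_per_crash_nhtsa = 484_000      # NHTSA figure quoted above

    ratio = miles_per_crash_autopilot / miles_per_crash_nhtsa   # ~8.9x
    fraction_avoided = 1 - 1 / ratio                            # ~0.89, i.e. roughly 8 out of 9
    print(f"crash-rate ratio: ~{ratio:.1f}x, fraction avoided: ~{fraction_avoided:.0%}")

    # Applying that fraction to ~20k highway deaths gives the rough "up to 17,000" figure:
    highway_deaths_2021 = 20_000
    print(f"implied lives saved: ~{highway_deaths_2021 * fraction_avoided:,.0f}")  # ~17,800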


According to my analysis of a similar YouTube corpus, dashcam owners end up in accidents once every 30 seconds, so this is a vast improvement. Not to mention the meteor strikes.

Uh, but dashcam videos you see on YouTube are there specifically because something interesting happened in the video. From my reading of the article, it's not like these analysts or whatever just watched a bunch of "Tesla self driving fails compilation" videos. Maybe their corpus isn't entirely representative of self-driving, I can't tell from the article, but I don't see what you're basing your conclusions on.

Maybe I'm missing something, I didn't read the article thoroughly and I'm not familiar with the work. If I am missing something, please enlighten me.

EDIT: I took a look at the report, and, well:

> 21 YouTube videos, totaling over 7 hours of drive time, from customers test driving Tesla’s FSD Beta program were analyzed for driving quality and safety. The videos analyzed in this study included FSD Beta major versions v8 (released December 2020) and v10 (released September 2021). Videos with significantly positive or negative titles were avoided in an attempt to reduce bias. There was an effort to analyze videos from as many unique YouTube channels as possible. As a result, the analyzed videos contain a variety of driving conditions, with varying times of day, weather conditions, traffic patterns, and locations.

I... don't think your "corpus" of dashcam videos is similar to the corpus used by this analysis.
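
For scale, here's what the headline rates imply about that corpus. This is a rough sketch; I'm treating "over 7 hours" as ~7 hours and assuming the rates were computed straight from that footage:

    # Rough scale check on the report's corpus (assumptions mine, see above).

    total_minutes = 7 * 60                        # "over 7 hours of drive time" across 21 videos
    minutes_per_critical_error = 8                # the headline claim
    minutes_per_likely_collision = 36 + 25 / 60   # figure quoted earlier in the thread

    print(f"critical errors observed: ~{total_minutes / minutes_per_critical_error:.0f}")      # ~52
    print(f"likely collisions observed: ~{total_minutes / minutes_per_likely_collision:.0f}")  # ~12

So the headline rates rest on roughly a dozen judged "likely collisions" across 21 videos, which is a pretty small sample.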


At best it's a valid opening gambit in a discussion of the merits of driving automation. It's not a scientific review; it's one guy's quick filtering of hyperbolic titles plus whatever bias is imposed by his personal YouTube bubble.

Dashcam videos have the same self-selection bias. People don't upload boring successes at the same rate they upload exciting failures. The ratio probably exceeds thousands to one. For every hour of Tesla autopilot video showing a failure, there are probably thousands or tens of thousands of hours of boring, safe, uneventful driving that isn't recorded, uploaded, or even noticed.

Getting the statistics from regulators and public reports paints a starkly different picture from what the media portrays, because it's boring and doesn't result in clicks.

"Yay, nothing bad happened" is the ultimate goal, but along the way "less bad stuff happened" is the story to pay attention to.


This is a stunt by Green Hills Software, a maker of embedded operating systems and programming tools that supplies directly to Tesla's competitors (and stands to lose massively as Tesla grows). You might be interested to know that they initiated a public relations campaign decrying the use of Linux as insecure back in 2004:

https://lwn.net/Articles/83242/

Here's an article by the very same Dan O’Dowd: "Linux: unfit for national security?"

https://web.archive.org/web/20040912190752/http://www.ghs.co...

https://web.archive.org/web/20040916074333/https://www.ghs.c...

https://en.wikipedia.org/wiki/Green_Hills_Software


So he has a motive, but how is he wrong? By any measure, FSD's performance is poor enough to be seriously dangerous. That regulators allow "beta" software like this to be used by end users on public roads is scandalous.

>>In the 4th quarter, we recorded one crash for every 4.31 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.59 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.

The ratio of accidents per mile driven works out to roughly 8 human accidents per software accident. It doesn't account for reduced injury and damage in the accidents that do happen, or for other indirect benefits (or costs).

Your assertion that the use of driving automation is scandalous is just wrong.


FSD is different. It's obviously having a would-be accident every few miles from watching YouTube videos (by the biggest fans). And those statistics are ridiculous advertising material, comparing the extremely easy fair-weather highway driving where drivers felt safe enough to use Autopilot against all miles driven.

Electricity is different. It's obviously having a would-be electrocution every few homes from reading too many novels (by the biggest fans).

https://www.reddit.com/r/pics/comments/8qcy9y/antielectricit...


I can think FSD is inevitable while also thinking Tesla's current version is garbage and shouldn't be allowed on public roads, much less sold for $12k.

This is like defending an early electric company that is deploying lines all over the place and risks shocking people while not working at all.


New things are inherently riskier than old things. How do you expect to see progress without experimentation and short feedback cycles?

There are ways to make new things safely without selling a technology that absolutely doesn't work to the public. Waymo and Cruise, for example, are doing exactly that in this case.

> There are ways to make new things safely without selling a technology that absolutely doesn't work to the public.

Waymo and Cruise are bad examples because they are both far, far behind Tesla...


Are you claiming that your personal analysis of a biased selection of YouTube videos is more accurate than the statistics provided to and validated by regulators under weight of law?

The author of the article is also a known bad faith operator. Isn't it more reasonable to base your understanding of driving automation (by Tesla and other companies) on actual numbers and not a laughably biased hit piece by a nonsense person?


Yes, in this case my perusal of YouTube videos is more accurate than the statistics Tesla is touting. Almost every trip I see has them making bad mistakes.

I own two older Teslas, so I don't have FSD, only the previous gen autopilot. I am still a big fan of Tesla in general and hope they continue to push forward with FSD, but I will point out one very big hole in the statistics quoted above.

The 1 in 4.31m statistic counts only actual crashes. To get a better sense of the actual risk, I believe you need to consider how many incidents would have resulted in a crash if the driver had not taken over and corrected FSD's mistake.
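
To illustrate the kind of adjustment I mean, here's a purely hypothetical sketch. The intervention count below is invented, since as far as I know Tesla doesn't publish how many driver takeovers would otherwise have ended in a crash:

    # Hypothetical adjustment: count driver takeovers that prevented a crash as crashes.
    # The numbers below are invented purely for illustration.

    def adjusted_miles_per_crash(miles_driven, actual_crashes, crashes_prevented_by_driver):
        """Miles per crash if crashes prevented by a driver takeover are counted as crashes."""
        return miles_driven / (actual_crashes + crashes_prevented_by_driver)

    # If, say, 9 takeovers prevented crashes for every crash that actually happened,
    # the 4.31M-miles-per-crash figure would shrink to 431,000 miles per crash:
    print(adjusted_miles_per_crash(4.31e6, actual_crashes=1, crashes_prevented_by_driver=9))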


Are those numbers directly comparable? Autosteer, etc. are only used in clear weather conditions with good lane markings. Obviously automated systems will work better in those conditions, but so do humans, and humans drive in all conditions.

I think it's strange how realized accidents (i.e. accidents that actually happened) are not factored in at all when people discuss FSD beta. Though FSD beta has been out for over a year, I can't find much about, for example, people dying as a result of FSD beta.

For the most part, FSD beta is limited to low-speed roads (sub-50 mph), so there really should be fewer possibilities for actual death (especially considering how safe Teslas are in crashes in the first place). I suspect if there are accidents, they'd be mostly fender-benders. Not sure dying should be the bar here, though; accidents in general seem like the better measure.
