Tesla keeps saying this, but as far as I know, they can't actually detect that (it's a torque sensor, so it can only tell whether the driver is applying turning force to the wheel, not whether hands are resting on it).
Does anyone have different info?
You can find plenty of reports of the car issuing "hands are not on the wheel" warnings even while the driver is lightly holding the wheel.
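For illustration, here's a minimal sketch of why a purely torque-based check can report "hands off" for a light, steady grip. This is my assumption about how a torque-threshold detector might behave, not Tesla's actual algorithm, and the threshold value is invented:

```python
# Hypothetical torque-threshold hands-on check (not Tesla's code).
# The sensor only reports how much torque the driver applies to the wheel,
# so a hand resting lightly can sit below the threshold and read as "hands off".
HANDS_ON_TORQUE_NM = 0.5  # invented detection threshold, newton-metres

def hands_detected(measured_torque_nm: float) -> bool:
    """Return True if the applied steering torque exceeds the threshold."""
    return abs(measured_torque_nm) >= HANDS_ON_TORQUE_NM

print(hands_detected(0.05))  # False -> a light grip would still trigger a warning
print(hands_detected(1.2))   # True  -> an occasional firm tug reads as hands-on
```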
As far as I know, they have never let one of these lawsuits get past discovery before settling, and this would almost certainly come out there if it were true.
Sounds to me like the Tesla tried to keep going after it hit you. Those cars really should not be allowed to use that tech, and there is more than enough evidence by now to admit that.
I’ve experienced this in a Tesla. It slammed on the brakes without warning in the middle of a freeway and dropped our speed from 70 to 30 in seconds. Luckily there wasn’t a truck behind us, otherwise we would all have died. Tesla fans tend to wave stories like that away with “yeah, it happens sometimes”, which I think is a pretty astonishing response to what really felt like attempted murder by a car.
Now compare this with Tesla's PR statement, which makes it sound as though they have proof that the guy who was fatally steered into a barrier by their Autopilot had taken his hands off the wheel. They resort to this kind of sleazy, dishonest misrepresentation every time their software screws up.
The lesson here is that Tesla does not take complaints seriously and someone died as a result. It sounds just as likely to me that another person using autopilot would have had the same issue at that location.
The steering knuckle failure shown in the article is insane. I've never heard of such a thing happening, nor heard of anybody who has, on a vehicle from what I will risk calling a "real manufacturer". I am always flabbergasted that there are people willing to just be in a Tesla while it's moving, let alone buy one, but then again, the general public has no real idea of how automotive safety really works. Crazy stuff.
> Another theory I've heard is that the driver was holding down the accelerator to prevent phantom braking.
It would be interesting to know how common this workaround is among Tesla owners. If it's common enough, it seems like a case where a feature that's merely unreliable becomes a safety issue through second-order effects. Echoes of the Therac-25[1].
How much can we trust this data, provided by Tesla? Maybe the sensors that tell whether the hands were on the wheel malfunctioned? Maybe he tried to wrestle the car back under control? It would be more helpful to have a video recording of the driver instead.
The idea that the driver didn't know he was supposed to have his hands on the wheel while using autopilot seems ridiculous. Anyone who has used autopilot knows that failure to keep the hands on the wheel results in a warning.
We do know he received warnings. And we can strongly infer that he knew how to actively avoid receiving warnings. Therefore I can't possibly fathom how Tesla could be responsible for a "failure to warn".
> The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.
This would be hilarious if it wasn't so tragic. So did Tesla's autopilot!
Yes, but the display freezing and then him putting it in reverse isn't what he described happening.
There's a big difference, and it's obvious from watching the video once which one happened, even to someone who didn't know how to put a Tesla in reverse (me).
When someone says something incorrect, I always wonder if it's accidental or intentional. This kind of misrepresentation seems intentional, making me question the source's future credibility.
I hate Teslas; their complexity is a design flaw. Can't believe I'm defending them.
While I love the trailblazing Tesla has done for EVs and smart cars, it's moments like these that make me cringe.
Obviously, we don't know right now if the driver was at fault or not - but I'm a capable coder and have some exposure to control theory as well as microcontrollers - and would be more than happy to audit Tesla's firmware.
Even more, I think there's a case to be made here for formal verification and high-assurance languages such as SPARK.
It would be nice if you could pass the Tesla codebase to a formal solver and ask the solver "prove that pressing the brake never accelerates", and then have the resulting code and proof be made public.
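SPARK would express that property as a contract checked by its prover, but here is a toy sketch of the same idea using the Z3 SMT solver from Python. The pedal-arbitration model and all names are invented for the example; it is not Tesla's code, just the shape of the query "prove that pressing the brake never accelerates":

```python
# Toy property proof with the Z3 SMT solver (pip install z3-solver).
# The controller model below is a made-up simplification, not real firmware.
from z3 import Int, If, Solver, And, unsat

brake = Int("brake_pedal_pct")   # driver brake pedal position, 0..100
accel = Int("accel_pedal_pct")   # driver accelerator position, 0..100

# Simplified controller under verification: any brake input overrides the accelerator.
commanded_torque = If(brake > 0, 0, accel)

s = Solver()
s.add(And(brake >= 0, brake <= 100, accel >= 0, accel <= 100))
# Ask for a counterexample: brake pressed AND positive torque still commanded.
s.add(brake > 0, commanded_torque > 0)

if s.check() == unsat:
    print("Proved: the model never commands torque while the brake is pressed.")
else:
    print("Counterexample found:", s.model())
```

A real effort would of course have to verify the actual firmware (or a faithful model of it) rather than a ten-line stand-in, which is exactly why publishing the code and the proof together matters.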
Drawing a conclusion about the investigation before it's finished seems like pretty obvious spin.
> TESLA: "The only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, ..."
Nobody knows what happened. Huang may have had his hands on the wheel, which the system failed to register, and the car made a turn that he wasn't able to correct in time without hitting other vehicles.
Tesla is a fucking computer. The dude who took his hands off the wheel is a human. One of those has actual intelligence. You don't get to blame the computer for user error just because someone died.
Sounds plausible, I guess, but I wonder how the Tesla in this crash description circumvented Sir Isaac Newton’s laws of motion and spared the driver the bodily damage those negative G's should have caused.
I can really see why Tesla would try to deny this so hard, seeing it happen yourself is frightening.