So that's a good hunch. Where is the data showing that Tesla software causes more crashes than, say, people driving themselves? Without data, why do you think something should not be allowed on the roads? Opinions without data are just that: opinions. That would be a pretty terrible way to regulate.
But HOW do we know it is safer? We should not assume it is safer by default, and I see many problems with the numbers Tesla fanboys cite:
1. The counts of times drivers intervene to prevent a crash are secret, or are simply not counted as crashes.
2. I don't want, in my village or city, some American car that is merely safer than the average American driver, a population that includes 16-year-old drunk teens and people who never had to pass driving exams or medical tests. A self-driving car should be compared against healthy, competent drivers; I don't care that some state or country is incapable of stopping speeding or drunk driving.
3. Updating these cars should invalidate previous approvals; as a software developer, I know well that updates bring bugs, not only features.
Anyway, I am against companies deciding for themselves who counts as a capable alpha tester and how alpha tests should be run.
If we want real metrics, then incidents like the ones in the video should be counted against Tesla. It is MALICIOUS to show stats of an average driver in an average car versus an expensive car with a human+computer combo, assign all the good to the computer, and ignore the many times the human had to save the computer's ass. Fuck shit statistics and the shit companies and fanboys using them.
At best, you can show the computer might be a decent copilot compared with no copilot.
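To make that concrete, here is a toy calculation (all numbers invented, purely to show the mechanics) of how ignoring driver interventions flatters the computer:

    # Toy numbers, purely illustrative: how crash-rate stats look when
    # driver interventions (the human saving the computer) are not counted
    # against the system.
    miles = 1_000_000
    ap_crashes = 2       # crashes that actually happened on Autopilot
    driver_saves = 8     # near-crashes where the human had to take over
    human_baseline = 5   # hypothetical crashes per million miles, average driver

    reported = ap_crashes / (miles / 1e6)                 # what gets published
    honest = (ap_crashes + driver_saves) / (miles / 1e6)  # counting the saves

    print(f"reported: {reported}/M mi vs human {human_baseline}/M mi")  # looks great
    print(f"honest:   {honest}/M mi vs human {human_baseline}/M mi")    # looks bad

Same fleet, same miles; the only difference is whether the saves get counted.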
If any autonomous driving system could demonstrate safety at or above human drivers, supported by a robust dataset generated by a rigorous investigative process and made available to credible, unbiased third-party watchdogs or regulatory agencies that confirm the findings, then I would support the further testing and deployment of such a system.
Now your turn: how many people has Tesla's ADAS software killed, and what is the acceptable rate?
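To see why raw counts alone can't settle that question, here is a rough sketch of bounding a fatality rate (placeholder figures, NOT actual Tesla data): with only a handful of events, the uncertainty band is enormous, which is exactly why raw data and independent verification matter.

    import math

    # Placeholder figures, NOT actual Tesla data.
    deaths = 5            # hypothetical ADAS-involved fatalities
    fleet_miles = 1.5e9   # hypothetical miles driven under ADAS

    rate = deaths / fleet_miles * 1e8  # deaths per 100M miles

    # Crude 95% interval for a Poisson count: k +/- 1.96*sqrt(k)
    lo = max(0.0, deaths - 1.96 * math.sqrt(deaths)) / fleet_miles * 1e8
    hi = (deaths + 1.96 * math.sqrt(deaths)) / fleet_miles * 1e8

    print(f"point estimate: {rate:.2f} deaths per 100M miles")
    print(f"rough 95% interval: [{lo:.2f}, {hi:.2f}]")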
As to the data reporting I demand, which is a longstanding standard in safety-critical system deployments, Tesla is literally the furthest from it. They deny all third-party access, voluntarily publish no raw data, demand maximum redaction in all mandatory government reporting, and hide data except when selectively releasing it, out of context, at press conferences when it portrays them positively, burying the rest. They choose to do no investigations so that they will not be required to report negative outcomes; they sue reporters; they threaten news organizations and employers with lawsuits to silence their employees; and they direct abuse at regulators to force them to recuse themselves from investigations. Tesla uses every trick in the book to prevent a critical look at their numbers.
There is exactly zero reason to take even a completely transparent company at its word about the safety of its own products. A company as dishonest and cagey as Tesla, providing numbers with zero supporting evidence and with a history of cherry-picking misleading, good-sounding data, should have its numbers resoundingly ignored.
There is always the argument that Tesla is safer than drunk or otherwise impaired drivers.
First, I would not be too sure about that. Second, I would rather governments implement a DUI check, as well as some sort of alertness test, before an engine can be started than trust the current FSD software.
Put location-aware speed limiters and the DUI check in all cars, then let's compare the numbers again.
And how can a Tesla save any driver when the driver is supposed to be in full control of the vehicle at all times?
If a driver needs Tesla as a saviour, then they have no business being behind the steering wheel.
Wait, what? Tesla should be the one providing the data showing that it is safe for their software to be on the road, not the other way around. Vehicles have always needed regulatory approval to be on the road, rather than a retroactive ban after they kill a bunch of people.
If Tesla can prove that riding in their autonomous cars is an order of magnitude safer than human-operated cars, then it would be unethical not to allow them to switch on full autonomy.
I hope Tesla pays lots in damages and then keeps improving their systems. If they're truly below the rate of human error when segmented by scenario, there's no reason for a ban.
I also wish they'd knock it off with the purposefully misleading marketing. They have a decent product, but for whatever reason their CEO compulsively lies about it.
The NHTSA faulted Tesla for the last fatal crash, and it's likely they will be faulted for the current one. I am just shocked that, after this one, they haven't suspended its use. The level of chutzpah this company operates with is very dangerous, not only to its future but to the public as well.
The production issues are a whole different can of worms, but the autonomous driving accidents are what the public sees.
People need to understand that Tesla is not the only company that can deliver self-driving hardware, nor are they the only manufacturer of EVs. They currently make the most desired EV, but with some of the old-school automakers joining the fray, they will see their high-end model profits vanish.
Finally, I am still not convinced they ever plan to actually sell a base model.
Please see an owner's recreation using the same AP version under the same conditions: https://youtu.be/6QCF8tVqM3I
What is the rational argument for permitting that on the road?
Depends on what's in front of the car. If it's a large, broad, stationary object like a firetruck or a tractor trailer, the sensors on the car aren't able to see those obstacles. Teslas have been known to frequently run into such things at top speed, completely blind to the fact that they were there at all. It's so bad that multiple people have literally been decapitated at this point, and Tesla has to date refused to fix the flaw (inadequate sensing) that caused those deaths.
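To illustrate why that happens, here is a generic sketch of how radar-centric cruise systems commonly filter targets (not Tesla's actual code, just the widely discussed failure mode): returns with near-zero absolute speed are indistinguishable from overhead signs and road clutter, so a naive filter drops them.

    EGO_SPEED_MPS = 31.0  # ~70 mph

    def relevant_targets(radar_returns):
        """Keep only returns that look like moving vehicles.

        Each return is (range_m, closing_speed_mps). A stopped truck closes
        at exactly ego speed, same as a stationary sign, so a naive
        'moving objects only' filter discards both.
        """
        targets = []
        for rng, closing in radar_returns:
            absolute_speed = EGO_SPEED_MPS - closing  # 0 => object is stationary
            if abs(absolute_speed) > 1.0:  # heuristic: drop stationary clutter
                targets.append((rng, closing))
        return targets

    returns = [
        (120.0, 31.0),  # stopped firetruck: closing at full ego speed
        (90.0, 5.0),    # slower-moving car ahead
    ]
    print(relevant_targets(returns))  # firetruck is filtered out -> no braking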
People say Teslas are safer than humans, but a human who can't see large, broad, stationary objects would never be granted a license. If you are a robot driver, though, apparently no problem. In fact, if you have proven to be recklessly homicidal while driving, your license is taken away and you are likely put on trial. Not if you're a robot! They will clone your flawed sensors and robot brain and ship them to thousands of people, to be deployed on roads across the world.
When your web service goes down, you probably find the root cause, fix it, and issue a report about how it's not going to happen again thanks to your fix. Not Tesla! They are aware of the problem, know exactly how to fix it, and it seems one man's ego is getting in the way of shipping actually safe autonomous cars. Tesla cars are not safe because Tesla as a company does not treat safety as a priority. They value design, whizbang technologies, branding, and image, while safety is a distant concern. Elon Musk put more effort into naming the line of cars SEXY than into producing reliably and predictably safe autonomous vehicles. Musk spent far more time lamenting that he was forced to name his car the Model 3 than the fact that it decapitated a human being due to his poor engineering choices.
Teslas on AP are death machines (read: machines that have killed people), and we in the public are the unwitting beta testers for Tesla Inc.
For those who don't have the time, the video this article summarizes (https://youtu.be/9KR2N_Q8ep8) shows a very cool approach to safety. Tesla takes real-life crash data from their cars, charts the various ways in which collisions take place, tunes crash responses (like airbag deployment and seatbelt restraint) for each situation based on simulations, and produces a more optimally safe car as a result. They are also able to validate passenger safety virtually using this method. The video includes a crash test they did of a non-standard crash that isn't covered by regulatory bodies but isn't uncommon in real-life driving, and they use it to show that their simulation approach works.
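For the curious, the general shape of that tuning loop might look something like this. This is a toy sketch with a made-up injury model; nothing here is Tesla's actual method or data.

    # For each crash mode seen in fleet data, sweep a restraint parameter in
    # simulation and pick the setting that minimizes a predicted injury score.
    def predicted_injury(crash_severity, airbag_delay_ms):
        """Hypothetical injury score: firing too early or too late both hurt."""
        ideal_delay = 50.0 - crash_severity * 2.0  # harsher crash -> fire sooner
        return (airbag_delay_ms - ideal_delay) ** 2

    crash_modes = {"frontal_offset": 12.0, "underride": 18.0, "side_pole": 15.0}

    for mode, severity in crash_modes.items():
        best = min(range(0, 101, 5), key=lambda d: predicted_injury(severity, d))
        print(f"{mode}: fire airbag at ~{best} ms")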
My only concern is that Tesla builds all this on a vast body of data they collect from their drivers. This feels like a privacy problem, and I wish they would make this type of program opt-in.
Tesla, I'm disappointed. You say your cars are the safest on the road, that your self-driving AI is safer than humans. You spar with regulatory bodies along those lines, arguing that you're protecting consumers. And then you roll out a fluff feature that causes accidents.
I find the fact that the self-driving feature results in much higher fatality rates to be a reason to strongly avoid the car. Its high safety ratings and driver aids were a big draw, but the fact that more people die in Teslas than in similar luxury vehicles is a huge turn-off to me and not at all what I would expect.
So while it's possible, everyone here is acting like it's certain, with absolutely no scientific evidence. Tesla is making changes that bring accident rates down, clearly by significantly more than they go up. It's stupid to come down on them like they're murdering people. That IS how you stop all progress.
Furthermore, no one is forcing anyone to buy Teslas. People should be free to do as they choose. I have no doubt, for example, that a Corvette, a Lambo, an M3, or a convertible all increase certain risks for their drivers by far more than Autopilot, which probably decreases their risks.
If Tesla are not careful with this, drivers of other vehicles will have serious reservations about being anywhere around a Tesla. I have to say, I already do.
I will not stay behind or next to a Tesla if I can avoid it. I'll avoid being in front of one if the distance is such that I cannot react if the thing decides to suddenly accelerate or, while stopping, not brake enough or at all.
In other words, I have no interest in risking my life and that of my family on decisions made by either Tesla drivers (engaging drive-assist while not paying attention, sleeping, etc.) or Tesla engineering.
Will this sentiment change? Over time, sure, if we do the right things. My gut feeling is that a program similar to crash-safety testing will need to be instituted at some point.
A qualified government agency needs to come up with a serious "torture" test for self-driving cars. Cars must pass a range of required scenario-response tests and will need to be graded based on the results of running the test suite. And, of course, the suite needs to evaluate scenario response under various failure modes (sensor damage, impairment, disablement, and computing-system issues).
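As a sketch of what grading against such a suite could look like (scenario names, weights, and failure modes are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        name: str
        weight: float               # contribution to the overall grade
        injected_failure: str = ""  # e.g. "front_camera_degraded"; "" = none

    SUITE = [
        Scenario("stopped_vehicle_in_lane", 3.0),
        Scenario("stopped_vehicle_in_lane", 3.0, "front_camera_degraded"),
        Scenario("pedestrian_crossing_at_night", 3.0),
        Scenario("construction_zone_reroute", 2.0),
    ]

    def grade(passed):
        """Weighted pass rate across the suite, 0.0 to 1.0."""
        total = sum(s.weight for s in SUITE)
        earned = sum(s.weight for s, ok in zip(SUITE, passed) if ok)
        return earned / total

    # Example: the car handles everything except the degraded-sensor rerun.
    print(grade([True, False, True, True]))  # 8/11 ~= 0.73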
I am not for greatly expanded government regulation over everything in our lives. However, something like this would, in my opinion, be more than justified. It isn't much different from aircraft and aircraft-system certification, or medical-device testing and licensing.
I can guarantee you that if Tesla's auto-driving starts killing people, there will be near-instant regulation. For that reason alone, supporters of auto-driving technology should be aghast at what Tesla is doing.
Even if the data showed zero accidents, I just cannot trust cameras and software with my life and the lives of others.