
But we already let learner drivers try things out on the road, and their supervisors can’t even take control. Human drivers crash and make mistakes too… where is the uproar for allowing them to drive?

Whereas Tesla only allows fully licensed drivers, and until recently only pretty good ones at that.

I think you’re just regurgitating the standard anti-Tesla talking points… and you avoided the question about the fact that Mercedes hasn’t tested this on actual streets and has just rolled it out. While it’s admitting liability, isn’t that just as bad as what Tesla is doing? Worse, in fact, as there’s no bar for getting this feature, and it doesn’t get shut off for misuse (which FSD Beta does).

Given that all the approaches with any hope of working use AI, how do you get the data to train the AI? This isn’t a Tesla problem, this is an AI training problem. Tesla already makes heavy use of synthetic data and uses real-world human drivers to collect training material. How would you do it?
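A rough way to picture the data question, purely as an illustration (the clip names, mixing ratio, and function below are made up by me, not anything Tesla has published):

    # Sketch: blending real-world fleet clips with synthetic scenes for training.
    # Every name and number here is hypothetical, just to show the idea of mixing sources.
    import random

    real_clips = ["fleet_clip_001", "fleet_clip_002", "fleet_clip_003"]  # gathered from human drivers
    synthetic_scenes = ["sim_rain_merge", "sim_night_pedestrian"]        # generated in simulation

    def sample_training_batch(n, synthetic_fraction=0.3):
        """Draw a mixed batch: mostly real driving, topped up with rarer synthetic edge cases."""
        return [
            random.choice(synthetic_scenes) if random.random() < synthetic_fraction
            else random.choice(real_clips)
            for _ in range(n)
        ]

    print(sample_training_batch(8))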




So what you mean is that Tesla should be using trained safety drivers like real AV companies do? Also, to be clear, you are either lying or misinformed about the behavior of Autopilot as far as detecting driver engagement goes.

No, but they actually should. And every company either already has or is near an equivalent, so it makes sense to train drivers on what will soon be ubiquitous car software. Remember, the goal here is to make driving safer, and there’s no evidence Tesla’s system isn’t doing that. People crash and die without Autopilot every single day.

Yeah, and what about it? There’s no excuse for bad human driving either, but that doesn’t mean putting Tesla’s beta on the road is OK.

No way. Even in beta, I’d much rather put up with roads full of autopilot Teslas than a road full of human drivers. Have you seen how people drive?

That actually is a part of v9 autopilot but Tesla is never going to put that in writing.

The fixing of bugs like this without admission of guilt isn’t unique to Tesla. Apple, Google, Samsung, and many others do the same.

The reason they haven’t been buried in a lawsuit is because these are driver assistance features, not self driving features. It’s the responsibility of the driver to keep control of their car at all times.

Arguing that “people think they can drive with their hands off the wheel” isn’t a real legal argument. This is why Tesla warns drivers in documentation that they need to keep control of the car at all times.

This is the same legal argument that has caused warnings in microwave manuals about not putting animals inside them to dry them out. Just because someone misinterprets the capabilities of technology they purchase doesn’t mean the technology isn’t doing what it’s designed to do.

All self driving cars and driver assistance tools will be imperfect. The question is whether or not they are statistically safer than the average human. If you are an above average driver (by your record, not your impression of yourself), these tools may be unnecessary or may actually be worse for you. It’s the average and below average people that they do wonders for.

Please stop using one or two data points to form an opinion about something that has millions and millions of data points.
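One way to make the “statistically safer than the average human” comparison concrete, as a minimal sketch with invented numbers (these are not real crash statistics):

    # Compare crash rates per million miles for two populations.
    # All figures below are placeholders for illustration, not measured data.
    def crashes_per_million_miles(crashes, miles_driven):
        return crashes / (miles_driven / 1_000_000)

    human_rate = crashes_per_million_miles(crashes=2_000, miles_driven=500_000_000)
    assisted_rate = crashes_per_million_miles(crashes=1_500, miles_driven=500_000_000)

    print(f"human baseline:  {human_rate:.1f} crashes per million miles")    # 4.0
    print(f"with assistance: {assisted_rate:.1f} crashes per million miles")  # 3.0
    # The point: judge on rates over large mileage, not on a handful of viral incidents.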


The problem is you can NOT be "kind of" driving. The Uber video of the safety driver is so telling: you can see they are looking at their phone when the car hits the woman.

Google has indicated this for years: there really should NOT be safety drivers behind the wheel.

What Tesla is doing is almost criminal. Having an Autopilot button when the car really cannot do autopilot is about as wrong as it gets.

I do not believe you can sell self-driving cars (SDC) directly to consumers while the software is really still in beta.


I've seen enough failures from Tesla's Autopilot and FSD Beta (typing those names out feels so laughable given how far they are from describing what they're meant to, other than "Beta") that I actively avoid any Tesla I see on the road. I never know if the car is being driven by a person or if one of these tools is engaged. I almost feel there should be regulation requiring the car to communicate to other drivers that a human is not controlling the vehicle.

It really bothers me that we have been opted in, without any choice, to Tesla's marketing and hype bubble and its live, on-the-streets research and development. These are people's lives. The road isn't a place to move fast and break things.

If one gets hit by a Tesla car under one of these "automated" systems, what prevents a person from holding Tesla responsible?


I've been in a Tesla running the self-driving beta in Manhattan and it is NOT fun or relaxing at all. The car is overly "jumpy", and by jumpy I mean "brakey." The way a lot of pedestrians cross streets (walking to the edge of the road and getting within two feet of a passing car before crossing right behind it) causes it to slam on the brakes constantly while people "stage" their crossing. It's also terrible at left turns without a light. Both of these flaws are completely understandable because they're complex safety situations, but people who think human drivers are replaceable within 5 years are naive.

Nobody is doing that. In the locales where truly autonomous vehicles are being tested, it’s happening under specific legislation and regulations put in place by the states. For Tesla, behind all their bluster, the terms and conditions you accept to use their “self-driving” features make it clear that the human driver is always responsible for safe operation of the car, and all of these features are just to assist the human driver.

> "Drivers can activate Mercedes's technology, called Drive Pilot, when certain conditions are met, including in heavy traffic jams, during the daytime, on specific California and Nevada freeways, and when the car is traveling less than 40 mph. Drivers can focus on other activities until the vehicle alerts them to resume control. The technology does not work on roads that haven't been pre-approved by Mercedes, including on freeways in other states."

It's remarkable how often these significant limitations are ignored.

The difference between SAE Level 3 and Level 2 is liability, not functionality. Conceptually, it would be relatively simple to create a Level 3 system that only worked in parking lots and never drove over 5 MPH. And yes, such a system would be "Level 3," which naively sounds better than "Level 2" because the number is higher.

But you could compare such a system that works only in parking lots against Tesla's Supervised FSD Level 2 system which controls the vehicle at prevailing speeds on all city streets and highways, executes left and right turns, stops at traffic controls, executes U-turns, parks, and everything else. Tesla doesn't want to assume liability yet because they are still iterating fast. The last few builds have been released about two weeks apart.

The functional domain of a Level 2 system can be significantly greater than that of a Level 3 system. And that is the case when comparing Tesla Supervised FSD versus Mercedes Drive Pilot. It remains commendable for Mercedes to take on liability, but we should not kid ourselves. Doing so is a play for cheap media wins and punchy-sounding milestones like what we see in this article. It's not actually moving autonomy forward in any meaningful way when compared alongside Waymo and Tesla FSD. The scope of the Mercedes system is simply too narrow.


I agree. Enhanced Autopilot and FSD are big mistakes and it’s a shame they are allowed. Judging by articles and by comments I read on forums, though, Autopilot is a significant attraction for many owners (I own a Model Y and I drive in LA with a kid in the back, and I’ve never used Autopilot and have no interest). And tech blogs will give a Tesla a poor review entirely because the autopilot is wanting.

If such enhanced autopilot systems are inevitable, what is the solution here? I’d assume more intense monitoring and nagging of the drivers.


I've also filed an NHTSA report, now some 12 months old, that has been greeted with ... crickets. Something very basic that any car maker should be able to do is write a manual that teaches people how to use the (poorly written) software.

The manual explains that you can do a 3-step invocation of a voice command: 1) push the 'voice' button; 2) utter the command; 3) push the 'voice' button again to complete. It hasn't worked that way since 2019. See, e.g., Software version 2021.32 [1], p. 161.

Good luck teaching people how to use 'Full Self Driving' with these kinds of error checks.

[1] https://www.tesla.com/sites/default/files/model_s_owners_man...


CA DMV's position is basically "let's try it with human backup for 3 years, then re-evaluate". That's reasonable enough.

Tesla may have set the field back. They released their "autopilot" as a beta, didn't provide sensors to enforce driver hands-on-wheel, shipped a system that works well only on freeways but will engage on other roads, and claimed that if it crashes, it's the driver's fault. Everybody else shipping similar capabilities (BMW, Mercedes, Cadillac, etc.) has put more restrictions on them (such as a hands-on-wheel sensor) to avoid over-reliance on the automation. Now Tesla has had to remove some features from their "autopilot".[1]

Google may be able to get approval for their 25MPH max speed mini-car operating without driver hands-on-wheel, on the basis of the slow speed. Google's system is much more advanced, and has a detailed model of what everything else moving around it is doing.

Expecting the human driver to take over if the automation fails will not work. The limits of human monitoring are well known from aircraft automation. It takes seconds to tens of seconds for a pilot to grasp the situation and recover properly if the autopilot disconnects in an upset situation. That's for pilots, who are selected and trained much better than drivers, and who go through elaborate simulator training in which failures are simulated. Watch "Children of the Magenta"[2], where a chief pilot at American Airlines talks to his pilots about this.

[1] http://www.bizjournals.com/sanjose/news/2015/12/16/tesla-to-... [2] https://www.youtube.com/watch?v=pN41LvuSz10


The whole premise of "almost working" autopilot is the logical thing to do from Tesla's business perspective but deeply flawed from a safety one.

The allure of having a car that drives itself is to relax the mind from paying attention to the road, handling 99% of the situations correctly through the software. But the 1% that require human input are made much more dangerous because the driver not only needs to handle the situation, they need to instantly catch up and familiarize themselves with the situation they've found themselves in. What was the car trying to do? Why is it suddenly stopping? What do my blind spots look like? Is there anybody behind me? Was anybody in the process of passing me?

Unfortunately the time it takes to refamiliarize with the situation could exceed the total time budget to respond.
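To put rough numbers on that time budget, here is an illustrative back-of-envelope using the "seconds to tens of seconds" takeover figure from the aviation comparison elsewhere in this thread (the speed and reorientation times are my assumptions):

    # Distance covered while a disengaged driver rebuilds situational awareness.
    # Assumed values for illustration, not measurements.
    speed_mph = 65
    speed_mps = speed_mph * 0.44704          # ~29 m/s
    reorientation_times_s = (5, 10)          # "seconds to tens of seconds" to catch up

    for t in reorientation_times_s:
        print(f"{t} s to re-engage -> car travels {speed_mps * t:.0f} m first")
    # Roughly 145-290 m at 65 mph, which can easily exceed the distance available
    # to react to whatever made the software hand back control.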

It's similar to not paying attention to the road while sitting in the passenger seat when a junior driver is driving. But there are two differences:

- Most people increase the attention they pay to the road when a learner is behind the wheel, ready to respond and offer corrective guidance.

- Self-driving software is a lot less deterministic than a new driver, and software updates can easily change its behavior in non-trivial ways. It's as if the driving student had Multiple Personality Disorder, overcautious in certain situations one day and overconfident on other days. Though I contend that MPD is actually more predictable than an opaque algorithm, as there are other human clues about the behavior mode of the moment that could help guide the trainer's response.


I don’t think many people who tried full self driving believe the “better than a human driver” thing. The ridiculous idea seems to be mostly in fan circles, not among Tesla owners.

Tesla themselves say that the tech is meant to assist the driver, not be FSD in the true sense of the words. And even with assistance, it only makes driving somewhat safer in some areas, and somewhat more dangerous in others.

I wouldn't want to use FSD in the city with my kids in the car, let's say it this way. It's an amazing technology to my SWE brain. And from a business perspective, it disrupted the automotive industry. We should appreciate it for what it is, not try to make it what it is not. And full self driving it is not. Reasonable self driving, maybe it is. It would be more accurate to call it full driving assist.


Tesla is recklessly using its own customers to train their fleet for level 5. AT BEST their response is, drivers need to be 100% as attentive as they would without autopilot. What then is the point of autopilot, except to train the fleet?

They are in complete denial that autosteering can seduce reasonable people into lowering their guard and mentally relaxing. Again, what other immediate user benefit would it provide? What would a reasonable person expect autopilot to do?

Isn’t it contradictory to offer an automation feature while claiming that legally it does ZERO automation for you?

Note: collision avoidance and emergency braking are on by default even without autopilot. “Autopilot” here really means the autosteering feature.


It is a bit scary to think we'd have to lobby the government to allow any new invention in order to prove "safety". Tesla has indicated why it needs to enable these features in order to collect data to improve them, and has shown that driving under Autopilot is already reducing accidents (compared both to driving without it and to national stats for all vehicles), and yet you call for an entire ban instead of thinking constructively. With the vision stack there will be improvements to the driver-attentiveness checks; that would seem to mitigate abuse of features which are clearly meant to be supervised at all times by the operator.

You're missing the point. The current Tesla "autopilot" is not ready, and is not intended, for full time driving. Not yet. However, it might appear to work well, and some Tesla drivers will forget that it's not ready. That's the problem with half-finished "autopilots".

I think there's some merit to it until the industry (or government) determines some way of establishing whether the driver is actually paying attention. Think of it like a preemptive seatbelt. A brain-computer interface, maybe? Sounds like an actual use for machine learning (I know nothing about ML beyond toy neural networks, so maybe not).

It's all well and good saying the driver should pay more attention - almost treating it like a normal tech product, I suppose - but most "tech" products don't control almost a megajoule (my mental maths might be wrong, but you get the idea) of kinetic energy, along with the lives of the occupants of both the car and anyone else it hits.
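The "almost a megajoule" mental maths checks out; a quick sanity check, assuming a roughly 1,800 kg car at highway speed (both numbers are my assumptions):

    # Kinetic energy of a moving car: KE = 1/2 * m * v^2
    mass_kg = 1800        # assumed curb weight of a mid-size EV
    speed_mps = 30        # ~108 km/h, an assumed highway speed
    kinetic_energy_j = 0.5 * mass_kg * speed_mps ** 2
    print(f"{kinetic_energy_j / 1e6:.2f} MJ")  # ~0.81 MJ, close to a megajoule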

I'm not saying it's impossible to get it right, just that it needs to be held to even higher standards than the existing engineering in the car (and in planes, if I'm being honest), not Silicon Valley standards. Luckily this doesn't seem to be too much the case, but it could happen - the few people I know who own Teslas are all non-car people ("What's suspension?"), i.e. the gadget factor of the car is definitely attractive. Tesla do seem to be doing a pretty good job, so hats off to them so far.

