Incorrect. Autopilot offers by far the most important training data: driver overrides. That is the dangerous part of the learning process, and Tesla makes its customers do it.
Tesla’s autopilot is very comparable to autopilot in aircraft (read a bit before downvoting). It is something you engage to take the load off the vehicle operator but does not eliminate the need for the operator.
Autopilot (both in aircraft and in the Tesla) is really good at what it does. But if you start thinking of it as anything other than something that helps reduce the vehicle operator's workload, it is dangerous.
Pilots understand this and get pretty extensive training. Some Tesla owners clearly do not. Some of the blame lies with Tesla due to its marketing. Perhaps these hybrid machine/human driving systems should require training.
So what you mean is that Tesla should be using trained safety drivers like real AV companies do? Also, to be clear, you are either lying or misinformed about the behavior of Autopilot as far as detecting engaged driving goes.
The whole premise of an "almost working" autopilot makes sense from Tesla's business perspective but is deeply flawed from a safety one.
The allure of having a car that drives itself is being able to relax the mind from paying attention to the road, with the software handling 99% of situations correctly. But the 1% of situations that require human input are made much more dangerous, because the driver not only needs to handle the situation, they also need to instantly catch up and familiarize themselves with the situation they've found themselves in. What was the car trying to do? Why is it suddenly stopping? What do my blind spots look like? Is there anybody behind me? Was anybody in the process of passing me?
Unfortunately, the time it takes to refamiliarize yourself with the situation can exceed the total time budget to respond.
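A rough back-of-the-envelope sketch of that mismatch; every number below (speed, sight distance, takeover time) is an assumption for illustration, not a measurement:

    # Illustrative numbers only -- not measurements of any real incident.
    speed_mps = 70 * 0.44704        # 70 mph in metres per second (~31.3 m/s)
    hazard_distance_m = 80          # assumed distance at which the hazard becomes apparent
    takeover_time_s = 3.0           # assumed time to rebuild situational awareness and act

    time_budget_s = hazard_distance_m / speed_mps    # roughly 2.6 seconds available
    print(f"budget {time_budget_s:.1f}s vs needed {takeover_time_s:.1f}s "
          f"-> deficit {takeover_time_s - time_budget_s:.1f}s")

With those assumed numbers the driver comes up short by a fraction of a second before they have even touched the controls.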
It's similar to not paying attention to the road while in the passenger seat when a junior driver is driving instead. But there are two important differences:
- Most people increase the attention they pay to the road when a learner is behind the wheel, preparing to respond and offer corrective guidance.
- Self-driving software is far less deterministic than a new driver, and software updates can easily change its behavior in non-trivial ways. It's as if the driving student had Multiple Personality Disorder, over-cautious in certain situations one day and over-confident on others. Though I contend that MPD is actually more predictable than an opaque algorithm, as there are other human cues about the behavior mode of the moment that could help guide the trainer's response.
Using Tesla's "autopilot" isn't very different from working as an unpaid driving instructor who keeps an eye on the "autopilot" trainee. To make things worse, the trainee is tripping on mushrooms and sees nonexistent cars appearing from nowhere, trees morphing into pedestrians and pedestrians morphing into traffic lights. The trainee also has a problem with epilepsy, and you're expected to take control at a second's notice. Edit: also, the trainee has the cognitive ability of a mentally challenged frog.
Actually Tesla has access to two sources of large-volume real-world data:
1. When the driver has full control of the vehicle
2. When the 'autopilot' is engaged and the driver is ready to intervene if necessary
So if the AI passes the safety test on Type 1 data, Tesla can promote it to being tested on Type 2. And if it passes that safety test it can be promoted to full autonomous control.
The 'autopilot' mode effectively does for Tesla what Google's test drivers do, but for free and on a much larger scale. Seems to me Tesla has a very strong hand here.
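A minimal sketch of what such a promotion scheme could look like; the names and thresholds here are hypothetical, not anything Tesla has published:

    # Hypothetical gating logic; thresholds are illustrative only.
    TYPE1_THRESHOLD = 0.99    # agreement with human drivers while shadowing (Type 1 data)
    TYPE2_THRESHOLD = 0.999   # intervention-free rate while supervised (Type 2 data)

    def promotion_stage(type1_score, type2_score=None):
        """Decide how far a candidate model may be promoted."""
        if type1_score < TYPE1_THRESHOLD:
            return "keep testing on Type 1 (shadow) data"
        if type2_score is None or type2_score < TYPE2_THRESHOLD:
            return "test on Type 2 data (driver ready to intervene)"
        return "candidate for full autonomous control"

    print(promotion_stage(0.995, 0.9995))  # -> candidate for full autonomous control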
How much training does Tesla require you to complete before using FSD? I don’t think there’d be much criticism of autopilot if users were trained like pilots.
None, but they actually should. And every company either already has an equivalent or is close to one, so it makes sense to train drivers on what will soon be ubiquitous car software. Remember, the goal here is to make driving safer, and there’s no evidence Tesla’s system isn’t doing that. People crash and die without Autopilot every single day.
At least Tesla has demonstrated that their data collection is for training their full self-driving capability. That represents tremendous end-user value, in my opinion.
Musk has claimed that. Whether it's doing any good remains to be seen. Large amounts of data from normal driving are probably good for training lane keeping, but not that useful for handling emergency situations. Teslas are still running into crossing semitrailers after three years of "data collection".
Teslas upload about a gigabyte a month.[1] Here's some trip logging data, from 2017.[2] Trips are logged in straightforward XML. Other stuff, not clear.
Tesla’s Autopilot or FSD has all the anxiety-inducing aspects of supervising a teenager driving your car. You’re constantly worried it’s going to do something wrong and you have to take over. It’s FAR more stressful than just doing it yourself.
But hats off to Elon for creating a mechanism where people pay money just so they can provide human input to train a machine learning model. And it’s on such a large scale too, and it’s been going on for several years.
That’s the true genius behind Autopilot. Nothing to do with driving or cars even.
Tesla isn't hiding data; they simply only have autopilot data from highways, and that data suggests it is safer than human drivers on highways. Which to me suggests we should be putting social effort behind making autopilot (on the highway, at least) the norm.
I don't own a Tesla, but my understanding is that autopilot is basically marketing speak for what everyone else calls lane assist and smart cruise control, while FSD is when the car is supposed to be able to drive for you. In other words, this report is saying human drivers + automated safety features are safer than human drivers alone, in conditions where the Tesla will let someone enable autopilot in the first place.
They claim autopilot uses "cameras, radar, ultrasonic sensors and data."
I think even just the cameras alone would be valuable. It seems a Tesla has a better idea of what's going on around it than a human would, so these sensors should be at least close to sufficient for training an autonomous driving agent.
Which is why everyone buying a tesla with autopilot is required to complete hundreds of hours of training on it that help them understand these limitations and complications, and how to understand the opaque system.
Oh wait, no, drivers get half an hour to demonstrate they won't immediately kill someone when driving, and routinely don't even know how their car's mirror dimmer works.
So maybe these contexts should stop being compared, because it's dishonest to do so.
You've got it backwards. Supervising an ML model in a lab under perfect conditions is testing the times tables. Being put in extremely variable situations where you have to solve different problems is learning calculus. No two driving trips are exactly the same; otherwise you could just record the steering and pedal inputs and be done with the whole thing. Autopilot has been in more situations and processed more roads than any other self-driving system (simply because Autopilot vehicles have been mass-produced for years and the others haven't been yet).
> the vast majority of data Tesla is receiving is reinforcement of what already works.
Absolutely! But is this not still valuable? It tells the model that, when it doesn't know what to do, it can fall back on the strategies that have worked reliably every other time. This applies geo-spatially too: if Autopilot takes a corner too hard and you hit the brakes to disconnect, the model will remember this and send the incident to the cloud. As more Teslas continue to disconnect around this corner, the model will learn to adjust its speed for a smoother ride until nobody is disconnecting. At that point you, someone who has never driven a Tesla before, can go out and buy one and it will handle that same corner perfectly, without you ever knowing it was once a pain point. This is the benefit of collecting as much data as possible.
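A hedged sketch of the feedback loop described above, aggregating driver disengagements per road segment; the structure, names and thresholds are my assumptions, not Tesla's actual pipeline:

    # Illustrative fleet-learning loop -- not Tesla's actual pipeline.
    from collections import defaultdict

    disengagements = defaultdict(int)        # road segment id -> number of driver takeovers
    speed_factor = defaultdict(lambda: 1.0)  # road segment id -> multiplier on planned speed

    def report_disengagement(segment_id):
        """Each car uploads the segment where the driver braked and disconnected."""
        disengagements[segment_id] += 1

    def update_policy(min_reports=50, reduction=0.05, floor=0.5):
        """Once enough cars disconnect on a segment, slow the whole fleet down there."""
        for segment, count in disengagements.items():
            if count >= min_reports:
                speed_factor[segment] = max(floor, speed_factor[segment] - reduction)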
The problem with Tesla’s ‘autopilot’ is that it’s anything but.
Asking drivers to keep their eyes on the road and hands on the wheel while not steering guarantees that their attention will wander, because their brain isn’t getting enough stimulus to keep focused on the task.
I don’t know why it’s not clear to most people by now. The current Tesla ‘autopilot’ is simply more dangerous than manual driving because it harms human reaction time during emergencies.
Tesla is using legalese to blame people for this fully predictable effect when crashes do happen, but I suspect it’s only a matter of time before they’re forced to rebrand Autopilot as the lane-assist technology it actually is. Its only use as a safety system is to maintain control of the car if the driver becomes incapacitated and bring it safely to a complete stop.
The training function is independent of Autopilot being active in the car. The training happens during any drive, provided the Tesla owner has opted into sharing the driving data. It even happens on cars that don't have Autopilot enabled, since all Tesla cars are manufactured with the hardware; you buy the Autopilot function as just a software feature.
And in the car menu, "autopilot" is actually activated as a feature called "autosteering". Overall it is just a very fancy cruise control that can keep the proper distance to the car in front of you and keep you inside your lane.
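For illustration, a "fancy cruise control" in that sense boils down to two feedback loops: one keeping the gap to the lead car and one keeping the lane position. A toy sketch with made-up gains and names, not the actual control code:

    # Toy proportional controllers only -- real systems are far more involved.
    def throttle_command(gap_m, desired_gap_m=40.0, k_gap=0.05):
        """Adaptive cruise: ease off as the gap to the lead car shrinks."""
        return max(-1.0, min(1.0, k_gap * (gap_m - desired_gap_m)))

    def steering_command(lane_offset_m, heading_error_rad, k_off=0.4, k_head=1.2):
        """Lane keeping: steer back toward the lane centre."""
        return -(k_off * lane_offset_m + k_head * heading_error_rad)

    print(throttle_command(gap_m=25.0))                                # -0.75: brake gently
    print(steering_command(lane_offset_m=0.3, heading_error_rad=0.0))  # -0.12: steer toward centre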
I'm waiting for the Tesla apologists to say something about how drivers are supposed to stay alert or how Autopilot is in beta. I've said it before and I'll say it again: Autopilot is dangerous. It actively encourages drivers to completely ignore the road and not be in a position to correct the vehicle, while at the same time it has been shown on multiple occasions to happily drive into stationary objects at 60+ mph.