I'm surprised the NHTSA doesn't have a rule saying auto-pilot should be 100% autonomous, in control, and capable of driving in the current conditions, or it should pull over, turn itself off, and put the car in manual control mode. Counting on drivers to understand when auto-pilot can and can't handle driving conditions seems like a completely stupid idea and a recipe for disaster.
I find it terrifying how autopilot cars give the driver a false sense of security. When a car comes with a statistically backed claim that it is safe, drivers are more likely to pay less attention to it. It's like comparing the reaction time of someone actively driving to that of a passenger: what is objectively six seconds can feel like a split second because you lack the context. There is real overhead in getting back into driving mode when you haven't been fully aware of your surroundings. This happens to a much lesser degree when I'm using cruise control, but I still avoid it when there is a fair number of cars on the road.
If I have to stay fully aware of the warnings the system puts out, I might as well not use autopilot at all. The chances of a failure are low, but any chance is enough to keep me from using it. The roads are designed for humans, after all.
> Clearly, using the gas pedal to turn off auto-pilot is not good enough.
It's a trade-off: if the auto-pilot is doing something unsafe, it is better to have the human intervene immediately. That might mean stepping hard on the gas pedal, or braking, or turning, depending on the scenario, which is impossible to know in advance. The assumption is that the software/data are not perfect (yet) and the human knows best. I don't think "don't trust the human test driver" is one of the current parameters, especially since taking over is their literal job.
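To make that trade-off concrete, here's a minimal sketch of an "any firm driver input wins" disengage check. The signal names and thresholds are purely illustrative assumptions, not any manufacturer's actual logic:

    # Illustrative "human input always wins" disengage check.
    # Thresholds and signal names are made up for the example.
    BRAKE_THRESHOLD = 0.05     # fraction of brake pedal travel
    ACCEL_THRESHOLD = 0.50     # hard press on the accelerator
    STEER_TORQUE_NM = 3.0      # driver-applied steering torque, N*m

    def should_disengage(brake, accel, steer_torque):
        """True if the human is clearly trying to take over."""
        return (brake > BRAKE_THRESHOLD
                or accel > ACCEL_THRESHOLD
                or abs(steer_torque) > STEER_TORQUE_NM)

    # A hard stab on the accelerator to escape a bad situation:
    print(should_disengage(brake=0.0, accel=0.9, steer_torque=0.0))  # True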
Also, what is the point of such an autopilot? It seems really dangerous. Telling people they don't have to drive but do have to pay attention ignores fundamental psychology.
This has a strong possibility of making things worse by conditioning the human driver to react with "oh, it's just the auto-pilot testing me", and thus not reacting properly when it does actually malfunction.
Wow, if a human were driving the way the car does in the second video, I would insist on taking over as a passenger. I don't know anything about the technical issues, but this is so not ready for prime time; that test drive makes the autopilot look downright amateurish.
>“Whether a [Level 2] automated driving system is engaged or not, every available vehicle requires the human driver to be in control at all times, and all state laws hold the human driver responsible for the operation of their vehicles,” a NHTSA spokesperson said.
This is going to be the interesting part. To my mind, this is where partially automated driving systems are basically incompatible with people and cars. There is just no way a person who is behind the wheel but isn't driving can be in control and highly attentive in the same way as a person who is actually driving.
I'm of the opinion that if the auto-pilot were a real driver, I'd want that "driver": 1) put in the passenger seat and required to simply observe, 2) sent back to school to learn the road rules, 3) made to re-sit the written test to confirm that knowledge, and to re-sit the practical.
I also noticed a few sudden stops, judging by the way the occupants lurched forward at stops. Getting rear-ended because of pointless sudden stops is a fail as well.
It's not ready for use. From what I've just seen in that video, I'd be de-rating that "driver" from a full license back to probationary, or more realistically, right back to learner.
I think the main problem here is that the driver has put too much trust in Autopilot. What she needs to do instead is treat it as just another driver-assistance feature and remain in control of her vehicle at all times.
I don't trust people to pay attention, so I don't want these systems on the road if they require an attentive driver. I guess requiring a driver is fine for testing, but they should be recording video of the test drivers and evaluating their level of attention.
Planes don't really have so many things to collide with, so I don't think the choices made about autopilot there are instructive.
> If an automated car can handle all that safely without reverting to human assistance, what exactly can't it handle, and why is it not full self driving?
The answer is in the article, and in the definition of Level 3: the automated feature is restricted to a certain operational design domain. For Drive Pilot, this domain comprises specific sections of highway, maximum speed, and certain weather conditions. It refuses to operate outside that domain, even if technically the software might be capable of doing so.
When it detects that it is leaving or has left that domain, it tries to hand back control to the human within a short period, failing which it brings the vehicle to a safe halt and calls for help.
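In other words, the Level 3 behaviour described above amounts to a small state machine. A rough sketch of that logic, with made-up state names and a made-up takeover window (not Mercedes' actual implementation):

    # Sketch of the ODD / handover / fallback logic described above.
    # State names and the 10-second window are illustrative assumptions.
    from enum import Enum, auto

    class Mode(Enum):
        AUTOMATED = auto()         # operating inside the design domain
        TAKEOVER_REQUEST = auto()  # asking the human to take back control
        MINIMAL_RISK = auto()      # safe halt, hazards on, call for help
        MANUAL = auto()

    TAKEOVER_WINDOW_S = 10.0

    def next_mode(mode, in_odd, driver_took_over, secs_since_request):
        if mode is Mode.AUTOMATED and not in_odd:
            return Mode.TAKEOVER_REQUEST
        if mode is Mode.TAKEOVER_REQUEST:
            if driver_took_over:
                return Mode.MANUAL
            if secs_since_request > TAKEOVER_WINDOW_S:
                return Mode.MINIMAL_RISK
        return mode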
Humans aren't designed to monitor something that requires no action for a long time and then suddenly requires attention, nor are we good at taking over control of something at the last moment. You're not in the "flow" at that point.
The article is correct that expecting average drivers to do this without training is a high-risk move. I've given flight instruction: taking over a landing 20 ft off the ground is way harder than flying the landing yourself. And that's with a lot of training, not just an average driver put in a position to supervise Tesla Autopilot with no training at all.
Did any of you read the article? She acknowledges that there are cases where it works, in well-defined environments.
But she clearly shows that we are currently in the most dangerous phase: the car drives, except when it hands off to the human driver in a crisis.
“The policy should be that either the computer is driving or you are driving. And by driving I mean steering—people do fine with regular cruise control. The act of keeping your hands on the wheel and guiding the car’s lateral motion is enough to keep your brain engaged. So, no L3 [full self-driving, but the driver must be ready to take the wheel], which is too confusing, and no hands-free L2 [partial self-driving]. I am not against the passing of control per se, but there should just be two modes of operation, with crystal clear feedback about which mode you are in.”
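Read literally, that policy is just an interface with exactly two states and loud feedback on every transition. A toy sketch of the idea (hypothetical names, nobody's real HMI):

    # Toy illustration of "either the computer is driving or you are":
    # exactly two modes, with unmissable feedback on every switch.
    from enum import Enum

    class Driving(Enum):
        HUMAN = "YOU ARE DRIVING"
        COMPUTER = "THE CAR IS DRIVING"

    def announce(message):
        print(f"*** {message} ***")   # stand-in for a real in-car alert

    def switch_mode(current, requested):
        if requested is not current:
            announce(requested.value)
        return requested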
"I like most of the "smart" features - lane centering, adaptive cruise, emergency braking ..."
Please drive your car.
I don't care what choices you make about Bluetooth or heated seats or iPhone integration ... but if you can't be bothered to drive the car, then perhaps a different transport option would be a better choice for you.
But IMHO it's not full self driving if it requests the driver to take over even once.
If there's an insane storm or something, then it's OK for FSD to recognize that it should disable itself, and then you drive with 100% control. The middle ground is more like assisted driving, which doesn't seem safe according to most HN comments.
1. The NTSB and FAA have both found repeatedly, as has pretty much every other study - academic or otherwise - that people _cannot_ stay focused on monitoring a task without actually being involved in doing it. Any "self driving" model that requires the non-operator to stay engaged in driving asks for something that is simply not humanly possible. So the only reason for that stated rule is that the manufacturers know these systems are unsafe, but nonetheless want to avoid being liable for their faulty products.
2. The only other "vehicle completely controls itself" system in practice is aircraft autopilot, where the pilots are necessarily engaged at all critical times, and for the remainder of the time the systems either give sufficient warning before anything goes wrong (when autopilot disengages on an aircraft, you have a substantial amount of time before impact - not the <=2s warning Tesla gives people), or they have specific, extremely heavily trained signals for which there is a single trained response and no immediate need to regain situational awareness (stall, ACAS, TCAS, etc). The time these "self driving" systems provide to regain situational awareness and then take appropriate action is less than what we expect of trained pilots, yet it's somehow acceptable for random drivers.
The current self-driving stance of "you are expected to be in charge of the vehicle at all times" is entirely liability shifting: manufacturers - Tesla especially - are knowingly selling unsafe products, lying about the capabilities of those products, and then saying it is the driver's fault when they fail.
What these manufacturers are doing is no different from selling a car with an ABS system that fails and saying, "the driver is responsible for recognizing that the ABS has failed and pumping the brakes if it does; if they fail to do so, we do not accept liability for the ABS failing."
They certainly mention safety, but they don't seem to claim that the safety comes from the driver overriding the autopilot in emergencies. I do agree with your point, but they could be arguing that the danger prevented while the system operates nominally will far outweigh the danger introduced by the driver handoff in low-confidence scenarios.
If the autopilot calls for the driver to be alert and place their hands on the wheel, and the driver doesn't, surely the autopilot should stop the car.
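Most systems do escalate eventually; the point is that the escalation should end with the car stopping. A sketch of that kind of ladder (the timings here are invented for illustration, not any vendor's actual behaviour):

    # Illustrative hands-on-wheel escalation ladder; timings are made up.
    def escalation_action(seconds_without_hands_on_wheel):
        if seconds_without_hands_on_wheel < 10:
            return "visual warning"
        if seconds_without_hands_on_wheel < 20:
            return "audible warning"
        if seconds_without_hands_on_wheel < 30:
            return "slow down, hazard lights on"
        return "bring the car to a controlled stop"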
I don't get why people even consider driving hands-off with any of the autopilot-like technologies around. Maybe in the cleanest possible conditions, on a highway, in slow-moving rush-hour traffic; and even then, hands on the wheel, expecting the worst. Do folks really have that much faith in technology? :-O