"I like most of the "smart" features - lane centering, adaptive cruise, emergency braking ..."
Please drive your car.
I don't care what choices you make wrt bluetooth or heated seats or iphone integration ... but if you can't be bothered to drive the car then perhaps a different transport option would be a better choice for you.
The most optimistic take I can give is that even if you fixed all of the jerkiness and all-around mistakes in the driving situations the author points out, you're still left with the problem of being unable to reasonably guarantee what the car is going to do next. And so you either just trust the system implicitly, or... what?
So much of the training in dealing with automation in airplanes (achieved in a far different way), for instance, is about knowing you can reason effectively about what the automation is doing and why it is doing it, and knowing how to recover if something happens that you don't like. Further, the use of those technologies hinges on the very important concept of making sure that at all times you can stay ahead of the automation; when you can't, it simply isn't a scenario where the automation should be trusted.
That's been the case for many, many decades in aviation, and overall the results are very, very good. Yes, from the automation side the problem being tackled is much simpler, but I think the easiest part of it is that the need for the system to make split-second decisions is REALLY, REALLY small. That's just not a thing that happens when you are flying. You can trust the automation, and have it be an ally, because the possibility that at any given moment it'll decide to make a plane-crashing maneuver is simply not on the table.
I don't see how that's so easy to avoid in the car scenario. The multiple drivers in intersections, pedestrians, etc., cause you to CONSTANTLY deviate from your planned course of action at a moment's notice. Anyone who has driven for any stretch of time has had their ability to quickly change course called upon very, very frequently. We seem to be focused on how we can get cars to do the same, but I sincerely wonder whether that would be sufficient. Even if your car could drive brilliantly, if you knew that at any given moment it might not, could you really relax and trust it? One could say it's no different than being driven by someone else, which is a fair point, but in that case I can very effectively reason about the driver making those decisions for me (i.e. I have folks I think are unsafe drivers, so I don't get in their car, etc.). But what about machines? I do think that communicating to drivers the safety of what the car is doing will prove to be the ultimate challenge, beyond navigating tricky road conditions.
I'm not sure that aviation automation is directly comparable to the current state of driver assistance features.
You should be more concerned with what I'm doing with bluetooth than with whether I let the car keep a safe following distance from the car in front of me. If I'm in a heated discussion with my ex on the phone, I'm paying a lot less attention to the road than I should be, and I'd be better off letting the car do most of the driving.
None of the driver assistance features I listed above allows hands-off driving. If ACC fails, my car will slow down, or maybe get too close to the car in front of me, but since I've already got my hands on the wheel and my eyes on the road, it's not a big deal.
The poor state of automation for most cars actually ensures better driver attention -- LKAS works around 80% of the time on the freeways. If it worked 99% of the time, I'd be less focused on driving.
> Clearly, using the gas pedal to turn off auto-pilot is not good enough.
It's a trade-off: if the auto-pilot is doing something unsafe, it is better to have the human intervene immediately (this might mean stepping hard on the gas pedal, or braking, or turning, depending on the scenario, which is impossible to know in advance). The assumption is that the software/data are not perfect (yet) and the human knows what's best. I don't think "don't trust the human test-driver" is one of the current parameters, especially since that is their literal job.
I am very dubious about other people's driving skills; I tend to assume everyone else on the road is out to kill me and will do the dumbest thing possible at any given moment.
But even so, I am also an experienced software developer, and I know that software is only as good as the author(s). Bugs happen. It's inevitable. And I don't want to die or be injured because of software errors. I'd rather it be human error.
Now you might say to this, "Planes fly on auto pilot constantly. Every time you fly you're basically in the hands of software." And this would be true. But my response to that is:
1) The air is much less densely packed than the roads and highways.
2) In the air, even though you are traveling much, much faster than in a car, the pilots have more time to react to a problem than a driver in a car.
3) The pilots are highly trained, experienced and hopefully alert. Drivers in automated cars will be complacent and texting on their phones.
I think this is a terrible, terrible idea and misuse of technology, despite the fact that humans are shitty drivers. I think it's only going to exacerbate the problem, not improve it.
Requiring the driver to be awake and alert while not requiring them to actually do anything for long stretches of time is a recipe for disaster.
Everybody who's looked at this seriously agrees. The aviation industry has looked hard at that issue for a long time. Watch "Children of the Magenta".[1] This is a chief pilot of American Airlines talking to pilots about automation dependency in 1997. Watch this if you have anything to do with a safety-related system.
I wouldn’t trust any driving automation to handle a situation that I couldn’t handle manually, especially if it relies only on sensors equivalent to what I see.
These automations are extremely useful for relieving the tedium of long, easy drives, not for handling difficult conditions better than an attentive driver.
Interesting. I didn't realize humans have already attempted so many different automated systems. In the talk you linked, he points out how, historically, systems are never fully automated because humans are not comfortable with that. He seems to predict that cars will not be the first systems that we allow to become fully autonomous, and admits he could be wrong.
It's surprising to me that he feels texting should be permissible while driving with a driver-assist system [1]. His whole argument here doesn't add up for me. I'll have to check out his book.
It sounds like he supports driver-assist mechanisms, though maybe without Tesla's beta release schedule of OTA updates. He seems unconvinced that full autonomy will ever be achievable.
I think I still believe in Google's plan, despite the shadow he casts on full autonomy. Zero deaths and full disclosure seems better than selective reports and the possibility for more fatal accidents.
Given there are now a decent number of investors on both sides, I imagine the debate will continue as to which is the best path: testing driver-assist in consumer vehicles, or testing full autonomy with company-controlled cars.
I'd like to hear what Mindell thinks would be the best hand-off from computer to driver in a driver-assist vehicle such as Tesla's. He answers a question about that generally here [2], but is mainly talking about airplanes where there is a chance for a smoother transition. How does one slow down time such that a hand off is possible in a vehicle during an adverse event that the car cannot handle itself? I feel that is the most pertinent question to today.
It seems like the car needs to see the adverse event coming, but since it is not ready to handle the event, the car is unlikely to be able to give the driver much extra time. By the definition of an adverse event, it would seem to have been better if the driver had been paying complete attention the whole time.
They're just talking about using the same technology to add back-up responses while leaving the driver primarily in charge, instead of making the technology primarily in charge with the driver only there as a failsafe. So they're advocating automatic emergency braking, lane departure warnings, etc. rather than autosteer and adaptive cruise.
Have they pulled back on the marketing vision where you say "Car! Take me to the beach!" and the driver just sits back and watches Netflix? I really feel like that image does this technology a disservice, since it could conceivably improve safety in some circumstances. But for a driver to completely abdicate situational awareness is pretty reckless even with a high degree of automation.
Driving has resulted in innumerable deaths. In most cases the auto-pilot feature increases safety. In a very small number it decreases it. Life's about trade-offs.
What he's saying is that it's not good enough to be able to control the car 90% of the time. Either it needs to be robust enough to operate safely without human intervention 100% of the time or it needs to somehow enforce that the driver is alert and capable of taking over 100% of the time.
We can't have an autonomous car that expects a driver to take over in a dangerous situation if that driver hasn't had to maintain control the entire time. For instance, there are youtube videos of drivers moving to the passenger seat in a Tesla with autopilot on.
> Maybe the software integration helps avoid this?
With autopilot on, the car is watching you. If you take your eyes off the road, it issues more "pay attention" nags. Failure to comply removes FSD Beta. So you have a feedback loop where paying attention becomes more important than your phone.
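As a rough sketch only (Tesla's actual monitoring logic isn't public, so the thresholds, strike count, and sensor hooks below are all made up), the escalation loop being described could look something like this:

```python
import time

# Assumed, made-up thresholds; the real system's values are unknown.
EYES_OFF_LIMIT_S = 3.0   # how long eyes can be off the road before a nag
MAX_STRIKES = 3          # nags tolerated before access to the feature is pulled

def monitor_driver(gaze_on_road, autopilot_engaged):
    """gaze_on_road and autopilot_engaged are placeholder callables standing in
    for the cabin-camera gaze estimate and the autopilot state."""
    strikes = 0
    eyes_off_since = None
    while autopilot_engaged():
        if gaze_on_road():
            eyes_off_since = None                      # attention restored, reset the timer
        else:
            if eyes_off_since is None:
                eyes_off_since = time.monotonic()      # start timing the lapse
            elif time.monotonic() - eyes_off_since > EYES_OFF_LIMIT_S:
                strikes += 1
                print("PAY ATTENTION")                 # the nag
                eyes_off_since = None
                if strikes >= MAX_STRIKES:
                    return "feature_disabled"          # driver loses the feature
        time.sleep(0.1)
    return "disengaged"
```

The point of a loop like that is the incentive it creates: ignoring the nags costs you the feature, so looking at the road stays cheaper than looking at your phone.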
That just sounds dangerous. A system that takes care of a lot for you, until it doesn't. And then you have to step in at a moment's notice, even though you might not be paying enough attention to react fast enough. I think there have been cases of airline pilots' skills being dulled by all the automation, and then them stumbling to fix things when they have to step in and do the job manually.
It seems to me that self-driving cars have to be an all-or-nothing deal.
Obviously this vision is compelling. I'm confused by the decision not to give their prototype car manual control overrides (e.g., a steering wheel or something similar). Air travel has been revolutionized by autopilot, but there are clear overrides for safety in case systems crash. I don't think we need to be wedded to the pedal + wheel paradigm - but having a manual override option seems critical to safety.
I'm surprised the NHTSA doesn't have a rule saying auto-pilot should be 100% autonomous, in control, and capable of driving in the current conditions, or it should pull over, turn itself off, and put the car in manual control mode. Counting on drivers to understand when auto-pilot can and can't handle driving conditions seems like a completely stupid idea and a recipe for disaster.
Circumventing the attention monitoring systems isn't the issue. Those exist only to shift liability; they serve no other purpose.
We _know_ from innumerable studies, and from direct experience in aviation, that saying "this will be done automatically" while also requiring absolute attention is not something humans can do. The only people claiming otherwise are the self driving car people, and only so that when their self driving system fails they can say "the person in the driver's seat is the person actually driving and it's their fault, not our faulty software".
This is without considering the other failure mode: disengaging without warning, often immediately prior to crashing. Autopilots in aircraft can say "something's wrong, I'm disengaging" and warn the pilots, because any time the autopilot is engaged and the pilots are not actively flying the aircraft, they are at altitude and have a significant amount of time to react. The few events that do require immediate responses have extremely constrained responses: essentially push down or pull up (ground proximity, stall, or TCAS alerts) - none of which is the case for road vehicles.
On a road vehicle, the time between something going wrong and impact is generally less than 3 seconds (I recall Teslas giving 2 seconds of notice). In that time the human, now the driver, has to become aware of the change in state, develop situational awareness, and then start to react. Then the actual correct course of action has to complete, which also takes time.
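A quick back-of-envelope check with assumed numbers (65 mph, 2 s of warning, roughly 1.5 s just to perceive the situation and begin reacting - none of these figures come from Tesla) shows how little that leaves:

```python
# Rough hand-off timing estimate; all inputs are assumptions, not measured values.
speed_mph = 65
speed_m_per_s = speed_mph * 1609.34 / 3600   # ~29 m/s at highway speed
warning_s = 2.0                              # assumed warning before impact
perception_reaction_s = 1.5                  # typical time just to notice and begin acting

distance_during_warning = speed_m_per_s * warning_s      # ~58 m travelled during the warning
time_left_to_act = warning_s - perception_reaction_s     # ~0.5 s left to actually steer or brake

print(f"{distance_during_warning:.0f} m covered while the warning is up")
print(f"{time_left_to_act:.1f} s left to execute any corrective action")
```

On those assumptions the car covers roughly 58 m before any maneuver can even begin, which is the crux of the 10-second argument below.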
IMO, if a self driving system hands off control of the vehicle less than 10s before an accident it is responsible for, that accident is the fault of the manufacturer. Obviously a self driving system can't be held responsible for an accident caused by another vehicle.
If the manufacturer's self driving system is unable to recover from whatever is going on, it's reasonable to give up and offload to the human in the driver's seat, but it is not reasonable to then say that the human was responsible for the accident if they are unable to recover.
Again, all this nonsense about trying to avoid being in charge of the car is simply a symptom of the actual problem, which is that self driving cars today are unsafe. It's obviously fairly awful of those people to knowingly circumvent the safety systems of something they know is already unsafe, but it's the manufacturer's fault for selling an unsafe product in the first place.
https://www.youtube.com/watch?v=5ESJH1NLMLs
"... as we look at this accident history what we find is that in 68% of these accidents, automation dependency plays a significant part ..."
"... automation dependent pilots allowed their airplanes to get much closer to the edge of the envelope than they should have ..."