Think about it. Soft AI that assists the driver (automatic steering, cruise control, automatic braking) is now mainstream. Why are full self-driving cars categorically different? Because they require full faith in the machine. When assistive technology goes wrong, the driver can correct it, and -- get this -- the reason the driver can correct it fast enough is precisely because the driver doesn't fully trust the technology. The driver is always on alert, because they know the technology is not meant to be trusted fully. A full self-driving car, on the other hand, asks the driver to trust it fully, so the "driver" can take a nap, read a book, whatever. If the driver needs to be on alert, it's by definition not a fully self-driving car. And I don't think we will ever get there.
Perhaps, but there's a big difference between not trusting machines and not trusting them to be perfect. If people don't trust machines, it's because they're surrounded by machines that break and malfunction constantly, systems that don't work, networks that fail -- and autonomous cars imply a level of faith in all of that working flawlessly most of the time, in spite of demonstrable reality.
While I object to autonomous cars mostly because they remove human autonomy and implicitly limit freedom (as a tradeoff for safety and greater efficiency), I think it would be a bad idea in any case to come to the point where having a steering wheel isn't even an option.
I think the problem with full self-driving is that it's written from an egotistical viewpoint.
What I mean by that is that when I am driving, I have to take into account the people around me, their vehicles' capabilities, and more.
For instance, you are not going to overtake the car to your right if you see in your side mirror that there is a junior in a blacked-out Beamer pulling up at high speed. You will prepare. The AI just sees another car on the right and expects it to behave "normally".
It's either that, or they all communicate intent to each other over wireless tech.
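If that wireless route ever happens, the "intent" part could be as small as a broadcast message. A purely hypothetical sketch -- the V2VIntent type and its fields are made up for illustration, not taken from any real standard such as DSRC or C-V2X:

    from dataclasses import dataclass
    import json

    @dataclass
    class V2VIntent:
        """Hypothetical vehicle-to-vehicle 'intent' broadcast (illustrative only)."""
        vehicle_id: str
        lane: int             # current lane index, counted from the rightmost lane
        speed_mps: float      # current speed in metres per second
        planned_action: str   # e.g. "overtake_left", "keep_lane", "hard_brake"
        horizon_s: float      # how many seconds ahead the announced plan applies

        def to_wire(self) -> bytes:
            # A real system would use a signed, standardized encoding; JSON is
            # used here only to keep the sketch readable.
            return json.dumps(self.__dict__).encode()

    # The speeding car behind you announces what it is about to do, so your car
    # doesn't have to guess from a radar blip in the mirror.
    msg = V2VIntent("car-42", lane=2, speed_mps=45.0,
                    planned_action="overtake_left", horizon_s=3.0).to_wire()

Whether anyone could get every manufacturer (and every legacy car) to speak the same message is, of course, exactly the kind of coordination problem people are skeptical about further down the thread.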
This is an odd way of looking at it. Far from being a problem for self-driving cars, the development of ever-more capable assistance and warning technologies is the rational way to go about refining the technologies that will be needed for fully self-driving cars.
This situation is only a problem for those manufacturers who want to pass off partial autonomy as the real thing.
We don't just let people drive because they can pass a road test. There are probably 12-year-olds who could pass a road test with enough training.
We let people drive because we know that they are fully capable agents able to make decisions even in case of failure. In other words, they are responsible, reliable, fully cognisant individuals -- which goes far beyond the technical ability to operate a machine.
A self-driving car is like a parrot trained to speak. Not only is it not an agent aware of its larger surroundings and the potential implications of getting into unknown situations; it also has no real understanding of what driving is at all. It's even stupider than a pet. And because its software is usually deployed to thousands, if not millions, of devices at the same time, there is little to no variance to protect against extreme failure.
Agree 100%. This back and forth between fully-autonomous and partially-autonomous highlights the largest obstacles for "self-driving" car adoption.
The fully-autonomous route runs head-first into the inevitable, messy problems with software: security, maintenance, feature creep, complexity, unreliable networks etc. In theory, humans could nail all of these in car software. In practice, it's highly unlikely. The software industry can't even get basic security for IoT devices right.
The human-in-the-loop route runs directly into the only problem more daunting than reliable software: human nature. Human overrides for self-driving cars that need to be activated in a timely manner will not work. The drivers won't be paying enough attention to react in sufficient time. Note that overrides add even more complexity to an already fantastically complex system.
I agree. This is a slippery slope that I (think I can) accept. Perception matters. And that way the car (uhm -- the technology?) gains trust slowly, and adapts to our preferences. It's one thing to talk about assistance systems taking more and more control (while being helpful in general and leaving me in control most of the time so far). It's a different thing to talk about the self-driving cars being tested today (even if they're far from ready yet). I don't want to hand over my current car tomorrow and enter a kind of cab that goes 130 max even if there's no traffic, where I am nothing but a guest.
Do these lines converge? Possibly. Or certainly, even. But that needs to be a loooong process in my world, and I prefer the route I described: making the car I know and like smarter, not replacing it outright with a one-wagon train or an automated cab.
(And that's from someone who considers anyone not driving an automatic to be much further down the road already.)
As everyone knows by now, literal full self-driving (as in: get in your car, tell it to take you to the other end of the country, and have it wake you up when it gets there) is entirely out of reach of current technology, and will stay out of reach until we design new sensors and possibly general AI*.
So the current goal must be to achieve something similar under certain well-defined, limited conditions, with reliable automatic checking that you are still within those conditions -- hopefully conditions that one is actually likely to encounter. Until we have that, letting self-driving cars onto public roads is a menace.
Current self driving cars are at best at the level of a driver going through their first driving lessons, and one with very bad eyesight at that. Having a human act as the driving instructor, theoretically prepared to step in whenever the AI makes a silly mistake, is not enough to make these cars as safe as the average (non-drunk, non-sleep-deprived) human driver.
What Mercedes seems to be doing is responsibly pushing the state of the art further. Having a car that is safer than a human driver without depending on your constant vigilance is a huge step forward. Obviously, this only works in certain conditions, but the car itself detects when those conditions are no longer met and gives you quite ample warning to take back control.
* Elon's shameless lies about how, in the coming years, your Tesla would act as a self-driving taxi and generate a profit for you while you work have been well and truly put to rest.
The core point is that you can't go smoothly from driver assist to fully autonomous. They're completely different from the point of view of the reliability of the software you need. With driver assist, a huge number of things can be written off as 'the driver will take over'. There isn't one specific part you can point to and say 'that's what will stop them'; rather, the entire design of the system needs a different level of consideration, because it becomes safety-critical rather than merely safety-enhancing (autonomous driving is probably the most complex safety-critical design ever attempted, by at least an order of magnitude; driver assist is way easier by comparison). It affects the attributes of the sensors, the design of the neural networks interpreting those sensors, the high-level design of the decision-making of the whole vehicle, the low-level design of the electronics -- basically everything.
Tesla hasn't put forward a plan indicating that their full-autonomy development is going to be anything other than a continuation of their driver-assist work (and it's frequently and incorrectly stated that the state of their driver assist puts them ahead in the race to full autonomy). If they want to be credible to those familiar with the challenges involved, they need to actually show something more than that.
I used to believe in the self-driving hype. Then I learned ML and started working in related areas. Now I know full self-driving is not happening, not with the technology and algorithms we have. So whatever anyone does is just an approximation, and we will never be able to trust that the car will handle any situation thrown at it. It's always going to be a souped-up cruise control, the likes of which Tesla and others are selling. We need some kind of breakthrough to get to fully autonomous self-driving, where we can sleep in the car while it takes us to the destination.
I would, of course, love to hear if folks here think I am wrong.
I think it's the wrong way to view it as either full self-driving or nothing at all. We are already getting incremental benefits from this: cars are correcting and preventing driver errors. They make instant trajectory corrections or come to a complete stop and prevent huge crashes. With time they will get better and better at recognising traffic lights, road signs, sudden unforeseen situations and so on, and that way driving safety will improve dramatically even before full self-driving capability arrives.
I wonder if it is imprudent to create too many vehicles with partial self-driving capability before we have ones that can do 100% of it -- advanced enough that you feel you can take your attention away from the road, but not advanced enough to reliably avoid accidents on your behalf. Too many people aren't wise enough to use partial autonomous technology without abusing it.
* People talk about how automatic cars have been extensively tested. That's not really true. There are only a handful of them on the roads, and they've been tested in limited conditions, with particular limits placed on suburban driving (slow speeds, etc.). I suspect there will be entirely new classes of bugs when there are hundreds of different AIs competing on the road, especially when mixed with normal human traffic.
* LIDAR does not work in poor weather conditions, including ice on the road.
* In terms of AI, getting to the 90% point is relatively easy, but it seems like the last 10% would require something resembling true intelligence. How do you deal with a lack of markers on the road? Dangerous situations in shady areas? Sensor equipment getting damaged and feeding your car wrong information? Animals and children running out into the road at the last second? Severe obstacles at a distance? Tiny, cramped residential roads with non-sidewalk pedestrian traffic? Accommodating emergency vehicles and police? We shut off our brains when driving 90% of the time, but that last 10% does require human levels of intelligence. Furthermore, unless every car on the road is automated, the AI in automatic cars will need to deal with the idiosyncrasies of other human drivers. Sudden merges. Speeding. Road rage. Unexpected emergencies. Tailgating. An AI can't exchange feedback with other human drivers and pedestrians; it can only use its model of human behavior to predict likely outcomes. (And this says nothing of other AIs on the road, which might behave completely unlike human drivers.)
* People say that automated cars only need to drive better than human drivers, not perfectly. I don't know if that's true. If automated cars drove perfectly 99.9999% of the time but crashed horribly the remaining 0.0001% -- taking some poor pedestrians or bicyclists along with them -- I wouldn't get into one (rough arithmetic after this list). And I don't think they'd be street-legal. Most people don't want to think about human lives in terms of numbers; they'd rather have control over their actions and accept the inevitability of occasional accidents than have a machine that's practically guaranteed to eat up human lives every so often.
* Speaking of which, how does the car decide who gets to live in a life-or-death situation? Are there situations where the car would elect to kill the driver? Which programmer gets to make that decision? I'd like to know this information before getting into my car, please. The idea of offloading split-second moral decisions to an AI seems like it should be severely legislated.
* People talk about mesh networking improving traffic and whatnot. I can't even get my USB devices to work across OSes half the time, and we're talking about sophisticated traffic control across multiple manufacturers? Especially given the quality of software that car companies tend to put out?
* Self-driving technology is extremely expensive, and most cars on the road are pretty cheap. You'd have to have some sort of insane subsidy program to get more than the 1% driving automated cars.
* People want to get in their self-driving cars drunk, but I think there will be severe legal hurdles in the way of that.
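To put a rough number on the 0.0001% point above -- treating it as a per-trip probability and assuming on the order of a billion car trips per day, both figures chosen purely for illustration:

    # Illustrative arithmetic only: both inputs are assumptions, not data.
    daily_trips = 1_000_000_000           # assume roughly a billion car trips per day
    catastrophic_rate = 0.0001 / 100      # "the remaining 0.0001%" read as a per-trip probability
    expected_catastrophes_per_day = daily_trips * catastrophic_rate
    print(expected_catastrophes_per_day)  # 1000.0 -- "every so often" becomes a daily headline

Even if the real rate were a hundred times lower, that would still be ten horrific failures a day under these assumptions, which is why "better than humans on average" may not be enough for public acceptance.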
I think semi-automated and AI-augmented driving are certainly possible. Highways are easy enough to tackle. I could see some expensive cars getting that capability over the next few decades, as well as maybe cargo trucks. But endpoint to endpoint driving, where you could snooze on your way to work? I don't think so.
A lot of people don't really oversee their car's driving 100% of the time today because they're fiddling with their phones or whatever. It's completely unrealistic to expect that anyone will actively monitor their environment if there's an autopilot that works reliably the vast majority of the time. It just won't/can't happen -- and this has been shown in research. Indeed, why would I even want a fully autonomous driving system if I have to drive the car anyway?
Saying today's cars have "self-driving capabilities" is like claiming fluency in a language because you know 3 words of it. They have advanced driver assists, but the insistence on "self-driving" terminology tricks enthusiasts and less tech-savvy people alike into a false sense of confidence in the tech -- sometimes all the way to their deaths.
Mindel isn't saying full automation can't or shouldn't happen for cars. His point is empirical and cautionary: usually when we get excited about automating X, it turns out that X is better with some human supervision. It's not really a contentious claim. I think many proponents of self-driving cars would be ok with some human input – steering wheels, changing routes, feedback on driving performance, etc
Yeah it's been really odd to see the take that self-driving must require strong AI. It needs to be done carefully, but it's clearly a manageable engineering problem if you have good sensors.
It's hard to tell exactly when fully autonomous vehicles will be ready, but I definitely think it's safe to say that there is not going to be any such thing as computer-assisted human drivers in any significant numbers.
If the intended users are anything other than specialists, it's very rare that anything that is not either fully automated or not noticeably automated works out. In fact, even when the user is a specialist -- such as a pilot -- it often doesn't work out very well. People will misunderstand the system, trust it too much, trust it too little, or rely on it too much.
The only solution is full automation. Nothing else is going to work.
While it's certainly hard -- and remember, people don't manage it either, at least not as safely as we will require self-driving cars to operate -- I don't see why it would need "full AI". Recognizing the road is not a matter of general intelligence; it's a matter of having very good but very specific recognition abilities.
Besides, remember that a self-driving car won't have to operate with just a couple of eyes stuck inside a cabin. Lorries won't blind radar.
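To illustrate that point: even decades-old computer vision techniques get you surprisingly far on the narrow task of "where is the lane", with no general intelligence anywhere in sight. A minimal, illustrative sketch using OpenCV -- the thresholds and the lower-half region of interest are arbitrary guesses, nothing like what a production system would use:

    import cv2
    import numpy as np

    def detect_lane_lines(bgr_frame):
        """Very rough lane-line detection on a single camera frame (illustrative only)."""
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)

        # Keep only the lower half of the image, where the road usually is.
        h = edges.shape[0]
        mask = np.zeros_like(edges)
        mask[h // 2:, :] = 255
        road_edges = cv2.bitwise_and(edges, mask)

        # Fit straight line segments to the remaining edge pixels.
        lines = cv2.HoughLinesP(road_edges, rho=1, theta=np.pi / 180,
                                threshold=50, minLineLength=40, maxLineGap=20)
        return [] if lines is None else [tuple(l[0]) for l in lines]

The point isn't that this is how a self-driving car works -- it obviously isn't -- only that "recognize the road" decomposes into narrow, well-studied perception subproblems rather than requiring general intelligence.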