It seems like a good intermediate solution to this problem would be for autonomous vehicles to simply know when it's not possible for them to drive safely, and to require manual control in those scenarios.
There's a large and perhaps unsolvable problem with the state-of-the-art solutions that doesn't get nearly as much attention as it should - even if the car can safely handle 99% or 99.9% of situations, the algorithms are not really able to detect when they're in the last 0.1%. The car will slam into stationary trucks, suddenly brake for no reason, or mistake a tollbooth for a bus - all with utmost confidence that it's doing the right thing.
In other words - the value proposition of autonomous driving would be completely fine if the car gave up on unarguably difficult situations and alerted the driver to take over. As of now, however, the car will boldly do something stupid and dangerous.
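To make the failure mode concrete, here's a minimal sketch (in Python, with made-up numbers rather than any vendor's actual pipeline) of the naive fix: gate on the perception model's own confidence. The catch is that softmax confidence tends to stay high even on scenes the model has badly misread, so the gate never fires in exactly the cases that matter.

```python
# Minimal sketch, hypothetical numbers: request a human takeover whenever the
# perception model's top-class confidence is low. The problem is that the
# confidence can be sky-high even when the scene is completely misread.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def should_hand_over(logits: np.ndarray, threshold: float = 0.9) -> bool:
    """Ask the human to take over when top-class confidence is below threshold."""
    return softmax(logits).max() < threshold

# A stationary truck misread as open road can still produce logits like these:
misclassified_scene = np.array([9.2, 1.1, 0.3])   # model is ~99.9% "sure"
print(softmax(misclassified_scene).max())          # ~0.9996
print(should_hand_over(misclassified_scene))       # False -- no takeover requested
```

The policy is trivial; the hard, possibly unsolvable part is getting a confidence signal that actually drops in the last 0.1% of situations.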
Of course those need to be taken into consideration but to be fair, aren't most of the conditions you listed dealt with by a human driver pulling over and waiting it out/calling for assistance? That's exactly what an autonomous system could do in those situations.
And if there _are_ dangerous conditions that a human driver could navigate but an autonomous one would give up on, is that so bad? The industry is making enough efficiency gains to offset that.
Most autonomous vehicle companies are approaching the problem backwards. They're trying to build a vehicle where the computer is driving until it gets confused, and then asks a human (either in the vehicle or a remote operator) to take over with little notice in an emergency. But we could get greater safety benefits sooner by having the human driver always drive, with the computer overriding control inputs when it detects an unsafe situation. The basic concept is already proven with stability control systems and, more recently, front collision avoidance (automatic braking) systems. We should focus on expanding and extending those systems to cover more situations and the full range of vehicle control axes.
I'd actually go further. An autonomous driving system that works for routine driving pretty much has to be designed on the basis that the human "failsafe" will NOT be paying attention and will not be prepared to take over on short notice. Heck, enough people don't pay close attention to their driving today without self-driving cars.
The most obvious intermediate stage is designated sections of highways in which self-driving cars can operate without active human drivers. The question though is whether that's an interesting enough use case to push through all the legal/regulatory/etc. changes that would be required.
It's fairly obvious that cars cannot think at the level of humans and are at a disadvantage sometimes. We also don't need fully autonomous driving to prevent careless mistakes.
A better approach would be to focus on making more and more improvements to level 1 autonomy. The driver would be hands-on all the time, but the computer would intervene as necessary to prevent collisions. We already have some of that with forward collision prevention systems, blind spot warning, stability control, etc. Just keep improving those systems by adding more sensors and giving them authority over more vehicle controls.
Level 4+ autonomy is only going to be viable on a few limited roads for decades to come. The AI and sensor challenges are tremendous. We shouldn't let that long-term goal distract us from real safety improvements that could be realized relatively quickly with minimal technical risk.
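As a toy illustration of that "driver drives, computer vetoes" loop, here's a short Python sketch; every field name and threshold (including the 2-second time-to-collision cutoff) is invented for illustration, not taken from any production system.

```python
# Rough sketch of a supervisory override: pass the driver's commands through
# unchanged unless a collision looks imminent, then cut throttle and brake.
from dataclasses import dataclass

@dataclass
class DriverInput:
    throttle: float           # 0..1
    brake: float              # 0..1
    steering: float           # -1..1

@dataclass
class SensorState:
    gap_m: float              # distance to nearest obstacle ahead, metres
    closing_speed_mps: float  # positive when closing on the obstacle

def supervise(driver: DriverInput, sensors: SensorState) -> DriverInput:
    """Human stays in control except when an intervention condition trips."""
    if sensors.closing_speed_mps > 0:
        time_to_collision = sensors.gap_m / sensors.closing_speed_mps
        if time_to_collision < 2.0:  # illustrative threshold
            # Override: cut throttle, full braking, keep the driver's steering.
            return DriverInput(throttle=0.0, brake=1.0, steering=driver.steering)
    return driver  # normal case: commands pass through unchanged

# Example: driver still on the throttle while closing fast on a stopped vehicle.
print(supervise(DriverInput(0.4, 0.0, 0.0),
                SensorState(gap_m=15.0, closing_speed_mps=12.0)))
```

The appeal of this architecture is that the computer only needs to be right about a narrow question ("is a crash imminent?") rather than about the whole driving task.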
Yes, but for self-driving cars, the problem seems more surmountable. We already have cars that can drive in real traffic without intervention a large % of the time. There are plenty of edge cases, but they don't seem insurmountable.
I've had a few harrowing driving experiences and, fortunately, no serious crashes. The difficult driving experiences were just so rare that I would be surprised if a self-driving vehicle would know what to do.
* I once had a car driving way above the speed limit approach me from the rear; it passed me by jumping the curb, driving on the grass and sidewalk, and swerving back onto the street in front of me.
* Another time I was driving back to a rental house in the Aspen area, and the snowstorm had gotten bad enough to completely cover and obscure the road on the hillside I was on. My wheels went off the road, fortunately on the side away from the drop-off, and I was driving slowly, but then I couldn't get back on the road without applying enough power that I feared I would swerve off the road and down the hill. It required a bit of puzzle solving before I could safely get back on the road.
* I've had bad GPS data that kept me circling my destination without ever getting me there.
I just think that such unusual situations are going to be difficult for autonomous cars to handle anytime soon.
One solution to bridge the gap between 'mostly' self-driving and 'totally' self-driving is to strengthen the safety systems.
Currently, safety systems for the people in the car are very good. But for people outside the car, they are very bad. If you can make it such that the car is very safe for all the people in a possible accident, then maybe the AI problem won't be such a problem.
Granted, that's a hard problem too, but we're pretty good with the theory and modeling that go into safety. It's more of an economics problem than a design one.
Interesting as something to explore, but I'm curious if a network of self-driving cars can be optimized to recognize and avoid these situations altogether.
Starting to think that autonomous driving should be limited to roads that people are already restricted from (e.g. freeways) until the technology is (rigorously) proven safe. Doesn't seem like it would take much to implement on top of the current technologies in the wild.
I'm pretty skeptical we'll see autonomous vehicles--at least outside of limited access highways or other relatively less difficult scenarios--sooner than decades from now. But your suggestion is an impossibly high bar. There will always be failures because of debris on roads, unpredictable actions by human drivers/cyclists/pedestrians, weather problems (e.g. black ice), mechanical failure, etc. that will result in some level of fatalities.
I think the current need for intervening safety drivers indicates that we aren't at that point yet, even if that point falls well short of perfection, and also that the only reason any self-driving cars are on the road is that the regulatory requirements, while significant, are fairly loose. We might be at that point in otherwise very dangerous situations, like if I were very tired or drunk, but otherwise I don't know that I'd have so much faith in software engineers to completely control my car.
AI has gotten really good but will it ever develop the necessary imagination to be able to handle all edge cases? If we ever do reach that point, we should require all cars to be AI-driven only so that bad drivers don’t injure themselves or others when they get into these kinds of situations. Until then, something needs to be done before too many people kill themselves by thinking that autonomous cars are more capable than they actually are.
I think much of this can be circumvented with real time data from apps like Waze.
However if the car is not able to find an alternate route, it may require the driver to take over, so it wouldn't be fully autonomous.
In either case, the car has to be able to reliably identify conditions in which it is necessary for the person to take over, and that is still far from being solved at the level required for consumer safety.
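For what it's worth, the handover policy itself is easy to sketch, assuming the car could actually compute its inputs. Here's a minimal Python illustration with invented signal names and a hypothetical 10-second lead time; producing those booleans reliably is exactly the unsolved part.

```python
# Sketch of a takeover-request policy: combine several "I can't handle this"
# signals and alert the driver with some lead time. Signal names and the 10 s
# lead time are invented for illustration.
from dataclasses import dataclass

@dataclass
class SelfAssessment:
    route_available: bool         # e.g. rerouting (Waze-style data) found a usable road
    sensors_degraded: bool        # heavy snow, blinded cameras, etc.
    perception_confident: bool    # perception stack trusts its own output
    seconds_until_problem: float  # estimated time before the condition is reached

def request_takeover(state: SelfAssessment, lead_time_s: float = 10.0) -> bool:
    """Ask the human to take over early enough to react, not in a panic."""
    cannot_cope = (not state.route_available
                   or state.sensors_degraded
                   or not state.perception_confident)
    return cannot_cope and state.seconds_until_problem <= lead_time_s

# Example: no alternate route, problem estimated 8 seconds ahead.
print(request_takeover(SelfAssessment(False, False, True, 8.0)))  # True
```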