
What about dirt forest service roads? What about the dirt parking lot at that wedding you just went to? How will it find a parking spot? How will it get onto a ferry where people direct you to the correct spot?

How will it deal with an accident up ahead where some drunk bystander is trying to direct traffic? How will it know to ignore the drunk guy? What if it isn’t a drunk guy but a sober person directing traffic? Does the car obey in that case?

None of those are really edge cases, because every time it drives it will encounter some novel situation that has never happened before, and it will have to perform better than a human.

Don’t even get me started on liability. Once you take away the steering wheel, the manufacturer is on the hook for every single mistake and every single accident. You’d be insane to be a manufacturer and sign up for that.

Sorry, but self driving cars are a complete fantasy.




I really hate to be that guy, but I think my biggest worry is how each of these car manufacturers will handle such edge cases.

Each car manufacturer's autonomous driving system is a black box; we don't know whether a given edge case is handled, or what action the car will take. Maybe now's not the time, but it would be nice to see a deterministic rule book that says: for this edge case, the system will react this way. We can't account for all edge cases, but some default fallback would be nice.
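To make the "rule book with a default fallback" idea concrete, here's a minimal sketch. All of the edge-case names and actions are invented for illustration; the point is only that unrecognized situations map to one conservative, publicly documented default rather than undefined behavior.

```python
# Hypothetical "deterministic rule book": known edge cases map to
# prescribed actions; anything unrecognized falls back to a safe default.
RULE_BOOK = {
    "human_directing_traffic": "slow_and_follow_gestures",
    "unmapped_detour": "request_remote_operator",
    "sensor_degraded": "pull_over_safely",
}

# Conservative fallback for every edge case not in the book.
DEFAULT_ACTION = "stop_and_alert_passenger"

def decide(edge_case: str) -> str:
    """Return the prescribed action, or the safe default if unlisted."""
    return RULE_BOOK.get(edge_case, DEFAULT_ACTION)

print(decide("unmapped_detour"))   # request_remote_operator
print(decide("novel_situation"))   # stop_and_alert_passenger
```

The value of such a table isn't completeness (which the comment concedes is impossible); it's that regulators and the public could audit exactly what the system promises to do when it doesn't know what to do.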


Exactly. It's relatively easy to build a self-driving car that works well on good, well-marked highways in good weather in the daytime when every other driver on that road is acting responsibly. But that's just the base case, and the edge cases are wickedly hard.

Tesla seems to be making progress on full self-driving, but real self-driving (cars without steering wheels) still seems quite distant. How will such a car respond to road workers redirecting traffic? How many times have you had to talk to someone outside of your car to get instructions on how to get around some obstacle, like a moving truck? I don't do such things every day or even every month, but there are occasions where it is necessary to take unmarked detours.

How would a self-driving car get through heavy fog or over snow-covered roads, when there are only very hard-to-decipher hints of where the road edges are?

How will self-driving cars deal with humans that can perfectly predict their actions? I believe that bad drivers will take advantage of self-driving cars by cutting them off and failing to yield when they should.

Because human drivers have a sense of self-preservation, we can break the rules when we are about to be carjacked or see an impending collision coming. Imagine how easy it is to obstruct a self-driving car for nefarious reasons.

I'm confident that, eventually, self-driving cars will address all of these issues to some degree, and that will mark a turning point where it is better to leave driving to the cars than average drivers. However, this doesn't mean I'm going to put my $100K car into a taxi pool to be used by (sometimes drunk) strangers; I don't buy the argument that the cost of buying such a car will pay for itself.


Add it to the infinitely long list of edge cases that autonomous vehicles will never, ever be able to handle properly.

If you've spent enough time building software, you learn that the real world almost inevitably introduces edge cases that even the most careful testing couldn't foresee. Self-driving has so far proven to be no exception.

I just can't imagine how Mercedes could be confident in a highly complex (self driving) system, designed to manage an insanely complex and unpredictable (driving) environment with no real-world tests. Maybe that's coming from the marketing/PR department and not engineering.


Talk about a pointless edge case. It's so incredibly rare, you can just have the car protect the passenger and that's fine. My biggest concern about self-driving cars is the ability to mess with the image recognition software. How hard would it be to flash a piece of material at a lidar/radar and confuse it so much that it either crashes, goes into a denial-of-service state, or thinks a minivan full of babies is cruising down the road towards it?

Two thoughts from your comment:

1. Agree on the edge cases: to do real driving we'll need AGI or some whole new way of looking at it; it's not possible algorithmically. But I would pessimistically draw on other examples of how tech handles this problem: if the tech can't do it, it's not something people get to do. Try getting help from a chatbot, or getting help from Uber or Amazon in a situation that doesn't fit the FAQ. So I expect self-driving cars will be great for a range of things that young tech-savvy people want to do (and that Google et al. want you doing), but they won't drive you to the hospital in a snowstorm, and they will balk when they realize they have to turn onto a rutted backroad at night, because why would anyone live there?

2. More frightening, there are people (see one of the siblings) that think nobody should be doing "dangerous" things like driving in a severe storm, that are beyond the range of a self driving car, and want to force their value judgement on others. So expect there to be calls for outlawing certain activities, for our safety.


How do you know where there is gravel? Why do you think software can't know it as well?

Actually, how do you face any particular situation X? Why do you think the machine can't handle situation X?

Can't handle every situation, you say? Then just put the sensors on manually driven cars, and build a corpus. It won't be foolproof, but I bet my arm it can be much more reliable than a human being.

---

Your objections only work with current technology. One day, you will see that self-driving cars have fewer accidents than human-driven cars. And soon you will be faced with a tough choice: let the car drive for you, or pay more money.


The human who drives a self driving car now should know what he's doing. These cars are prototypes now and whoever takes them to the road should be responsible.

The edge cases are a showstopper: you are talking about something like "driver assist" or an upgraded cruise control. The thing is, you still need a human right there.

Much of the dream of self-driving is to have no human driver there at all: for taxi cabs, school buses, 18-wheel trucks, local deliveries from pizza or Chinese carryout, USPS, UPS, or FedEx, etc. For that, with current technology, current traffic, and current roads, there's no hope at all, because the edge cases are way too common in practice (you can't put up with a mean time to destroying an 18-wheel truck of six months; 5 million miles is more like it), and they require full, wide-awake, sober, mature human intelligence, with full ability at reading, talking, natural language understanding, hand signals, flag signals, hard-to-read road signs, etc. Edge cases.


Let's not forget that all of the things you list are things that humans are pretty bad at. A self-driving car doesn't need to perfectly handle strange situations, it just needs to handle them better than most people, which is a much lower bar.

Not saying we'll get there anytime soon, I don't know well enough to say. But I find that people in these discussions tend to overestimate the skill of most human drivers on the road.


I'm as incredulous as the next person that a mostly self-driving car will be available by 2017-8 and that this will be it.

That said, on the way to full autonomy I can foresee a car that can handle 99% of the driving itself but calls on the driver to help out during the other 1%. In such a scenario the car may know where it is and isn't safe to drive, but not exactly where, or when.

Examples include the 1-lane, 2-directions-of-traffic rural roads. Unmapped car parks and private premises. Navigating around roadworks. Basically all the 'edge-case' scenarios people often cite.

The car could guess the route and you use a joystick to control the forward speed. You can use the joystick to change the proposed route as you drive. And if you wanted to leave the detected 'safe area' ("no, I really do want to drive into that car - it's a tow truck and I need to get on it!") you acknowledge a bunch of ominous warnings and control the car directly, albeit at a very low speed.


But unfortunately the real world is full of poorly designed, under-maintained roads driven on by drivers who are looking at their phones or have mechanical issues without warning. If the self-driving cars can't handle it, then they shouldn't be on the roads. The roads will never be perfect.

I don't think any of those examples are edge cases. The first set are normal traffic conditions that in the context of self-driving cars are easy to solve, especially in narrowed conditions such as on a highway. Moreover, mid-range cars already have collision warning and automatic braking systems. As to your example of traction issues, pretty much every modern car that I'm aware of has had computer assisted traction control systems for a while now.

The edge cases that are difficult essentially boil down to entity recognition; unexpected and moving obstacles, road sign changes, traffic light outages or alternate signal pathways and the like. Some of those definitely would require government level coordination which is about a lot more than technology.


It isn't the normal driving that makes self-driving cars so difficult. It's the edge cases that will be multiplied significantly when a large fraction of the cars are self driving. It will take decades, not "5 years from now" (which I have been hearing for years) to get these systems to work.

Yeah, you don't even need to go to that extreme. I've yet to see a self-driving car handle the extremely common situation of meeting an oncoming car on a road that is effectively single-lane because of cars parked on the side. Or just any road layout more complicated than US-style grids.

I wrote it carefully and I stand by it. If it can't deal with it because your system is getting too confused for any reason, you don't have a self-driving system. Being able to function enough like a human that the safety features on the car don't produce an unacceptable result is a bare minimum requirement to have a self-driving car.

This isn't horseshoes, as the old saying goes. I unapologetically have a high bar here.


There's also the practical limitations to consider. Self-driving cars have enough trouble identifying what's ON the road, never mind what's off to the side. Is it a cliff, a crowded sidewalk, a crash barrier, a thin wall with lots of people behind, an empty field, trees, a ditch? etc etc.

As long as human-driven vehicles are a significant fraction of the vehicles surrounding an autonomous vehicle's miles driven metric, I'm not sure a technically-driven approach to edge cases is gating mass adoption. Humans likely don't have a very good track record with those edge cases, but the ones who deal with the aftermath of all those edge cases on an actionable basis are the legal and insurance industries. Luckily, they have copious documentation of those edge cases; unluckily, that documentation is not in a readily-utilizable format for computers.

Autonomous software development might finish 90% of the technical challenge, only to discover that another 90% remains: encoding legal and insurance precepts, because the safety bar autonomous driving must clear before mass adoption is allowed will be set higher than the one for manual driving. Once you reach edge cases where the only physically available options are all bad outcomes (either the riders get hurt, or people outside the car get hurt), then until sensor and simulation technology are good enough to evaluate how badly each party would be hurt and find the minimal-injury option, we might have to make do with software that chooses the option presenting the least legal and insurance liability exposure in the governing jurisdiction. If so, that would be a real mess to wrangle.
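For what it's worth, "pick the option with the least liability exposure" reduces to a weighted cost minimization. A toy sketch, with entirely invented options, risk numbers, and jurisdiction weights:

```python
# Hypothetical all-bad-outcomes scenario: each physically available
# maneuver carries estimated injury risks for riders and bystanders.
# All numbers are invented for illustration.
options = {
    "swerve_left":  {"rider_injury_risk": 0.2, "bystander_injury_risk": 0.6},
    "swerve_right": {"rider_injury_risk": 0.5, "bystander_injury_risk": 0.1},
    "brake_hard":   {"rider_injury_risk": 0.3, "bystander_injury_risk": 0.3},
}

# Assumed jurisdiction-specific liability weights: harming bystanders
# is weighted more heavily than harming the (consenting) riders.
LIABILITY_WEIGHTS = {"rider_injury_risk": 1.0, "bystander_injury_risk": 3.0}

def liability_exposure(risks: dict) -> float:
    """Weighted sum of injury risks under the governing jurisdiction."""
    return sum(LIABILITY_WEIGHTS[k] * v for k, v in risks.items())

best = min(options, key=lambda name: liability_exposure(options[name]))
print(best)  # swerve_right
```

The mess the comment predicts is visible even in this toy: the "right" answer flips entirely depending on weights that would be set by lawyers and legislatures, not engineers, and that vary by jurisdiction.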

Hopefully, legal, insurance, and regulatory frameworks around the world will recognize that in autonomous vehicles with LIDAR+radar+sonar+vision+sound sensors, with the kind of testing the big players are performing, mortality per million miles driven is bound to be lower than with human drivers, and will not take a perfect-is-the-enemy-of-good position.

If politically influential stakeholders put up barriers to entry in the form of demanding better-than-human perfection in outcomes, then we could wait a long time as players switch to alternative go-to-market plans. One possibility might be to embed the sensor and software technology into manually controlled cars as accident-mitigation features, while simultaneously collecting massive amounts of data to refine the edge-case handling, and gradually ease into self-driving.

