So much wrong with this logic: "If autonomous cars are even just 1 percent safer than humans, we should still be using them." Does that "humans" include drunks, idiots, and teenagers? If so, I'm sorry, I would never use that car. I'd like to think I drive 2x safer than the average idiot out there.
I took that statement to refer to the cumulative effect of autonomous cars -- Those people you describe are already on the road with you, whatever car you're in.
There is a great proverb that reads "man who says it cannot be done, should not interrupt man doing it."
If you removed Tesla from the equation, Level 4 autonomy would be half a decade away... it's pretty sad how far behind the industry is from this perennially cash strapped upstart.
> If you removed Tesla from the equation, Level 4 autonomy would be half a decade away... it's pretty sad how far behind the industry is from this perennially cash strapped upstart.
This is simply untrue. Volvo will have a release this year.
The notion that Tesla are the only ones innovating because they're the only ones using the public as guinea pigs is hogwash. Mobileye, which supplied Tesla's original self-driving technology, isn't even part of Tesla, and they have other clients.
> if autonomous cars are even just 1 percent safer than humans, we should still be using them
We could improve safety by at least 1% in lots of other ways: lowering speed limits, requiring rear-view cameras, raising the licensing age and requiring more frequent tests, cracking down on drunk driving, etc. Building a fully autonomous vehicle seems unnecessarily elaborate to get a 1% safety improvement.
> Building a fully autonomous vehicle seems unnecessarily elaborate to get a 1% safety improvement.
Not if the vehicle also gives you functionality that doesn't currently exist. Freeing humans from having to do the driving has a huge potential value over and above the value of the safety improvement. The other safety improvements you mention don't; in fact they have less net value than the safety improvement, since they also have costs--not the cost of installing the technology, but the cost of getting where you're going more slowly, having to take driving tests more frequently, having to drive your children around till they reach an older age, etc.
Uh yes it does? Even if there's still traffic, being able to sit through it doing some work/reading a book/watching a movie is a huge advantage compared to babysitting the steering wheel the whole time.
> What would have a value is getting rid of the need to be in traffic at all, no matter self or not self driving car you use.
That has value too, but it's a lot less likely than self-driving cars for the foreseeable future. Plus, even if we grant that not having to be in traffic has value, that doesn't mean that not having to do the driving doesn't also have value. They both do.
We have many of those things in Australia (low speed limits and huge fines for doing 3 km/h over the limit, sent automatically through the mail), but we're seeing this:
Most of our safety improvements have come from better car designs like crumple zones, ABS, air bags.
The more you go against natural human behaviours and start treating humans like machines with very low tolerances, the more it makes sense to just have machines do the job
Australia doesn't have low speed limits or huge fines. Many places, like the city of Sydney, don't even have red-light or speed cameras in the city; it's a joke.
Australia does not take traffic crashes seriously at all.
Sydney absolutely has red light cameras in the city.
But to be fair, given the level of congestion and the quality of the roads, I doubt there's any point to speed cameras within the CBD at all. It's purely revenue raising.
And strongly disagree that Australia doesn't take crashes seriously. We are one of the most regulated societies in the world.
For the past decade you've been able to find a lot of Renaults with a speed limiter. Nobody mentioned it here, probably because they don't know about it, but it's really cool.
Like cruise control, you set it up, for instance to a maximum speed of 50 km/h. Then your car won't go over that speed unless you decide to override it.
I find the design decision they made to let you override the limit very clever: if you press more aggressively on the accelerator pedal, you reach a second max level, which disables the speed limiter.
So it feels like you were already at full throttle, but you have the option to go even more full throttle.
Driving in the city, you don't have to keep looking at your speedometer and can focus on the road more.
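Roughly, the behaviour could look something like the following toy sketch (the kick-down threshold and throttle taper are made up for illustration; this is not Renault's actual logic):

    # Hypothetical sketch of a speed limiter with a kick-down override, loosely
    # modeled on the behaviour described above. The threshold and the taper are
    # invented for illustration; this is not Renault's actual implementation.
    KICKDOWN_THRESHOLD = 0.95  # pedal travel beyond which the driver overrides the limiter

    def throttle_command(pedal_position, speed_kmh, limit_kmh=None):
        """Map pedal position (0..1) to a throttle command (0..1)."""
        if limit_kmh is None:
            return pedal_position  # limiter off: pedal passes straight through
        if pedal_position >= KICKDOWN_THRESHOLD:
            return pedal_position  # kick-down: pushing past the second stop disables the limit
        if speed_kmh >= limit_kmh:
            return 0.0             # at or above the set limit: cut throttle
        # Below the limit: taper the throttle as the car approaches the set speed.
        headroom = (limit_kmh - speed_kmh) / limit_kmh
        return min(pedal_position, headroom)

    # Example with a 50 km/h limit.
    print(throttle_command(0.5, 45.0, 50.0))  # tapered throttle, settles near 50 km/h
    print(throttle_command(1.0, 45.0, 50.0))  # pedal floored past kick-down: limit ignored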
>The more you go against natural human behaviours and start treating humans like machines with very low tolerances, the more it makes sense to just have machines do the job...
Aren't the autonomous software systems used in these cars a lot less like a machine and more like a piss-poor excuse for a human being? I mean, they're not machines like the printing press, or a factory assembly line made of robots.
I mean, these systems are "fuzzy" enough that they can't be considered machines in the usual sense of the word. So letting the "machine do the job" might not be as good an idea as it is with a washing machine.
I did a brief search of DUI penalties around the world, and it really does seem like the penalty is light everywhere.
After losing a friend to a DUI driver, I see DUI as being on the order of manslaughter. I'm not asking for the 12.5 years in jail that a first-offense manslaughter charge carries, but a 1-year license suspension for a first offense seems far too light, and the repeat-offense terms are egregiously light.
Texting and driving is worse. I don't understand why so many people rail against driving after a few drinks but will then play with their phone while driving.
Would you be for making the punishment for texting and driving worse than for a DUI? It should be, if punishment is proportional to the harm caused.
So if we're well into fining based on the probability of causing loss of life due to distractions, can we also fine parents with young children (screaming, fighting, vomiting, etc. all distract)? How about fining people on a recurring basis if they have a slower reaction time than some threshold?
I'm only partially being sarcastic. I'm genuinely curious how these punishments for being impaired scale up. I would rather have someone who is not tired drive by me after 4 drinks than a mother running on 2 hours of sleep with a screaming baby (sleep deprivation is also as bad as drunk driving). Yet the former would lose his/her license at a checkpoint for a year and face a $10k fine, which you say is too little, while the latter would be waved on through.
It all seems very arbitrary and detached from the risk imposed and rather attached to the societal naughtiness of the activity.
It is balancing risk with need and utility. Parents with young children need to drive places. Nobody needs to drink and drive. Many people think they drive fine after 4 drinks because they are such great drivers, but the evidence says otherwise.
We need a level between Level 4 (the vehicle can operate autonomously in almost all conditions) and Level 5 (no manual control): Level 4.5, wherein the vehicle can operate autonomously in almost all conditions and can identify and safely stop in the others.
If there's a severe blizzard and my car can't see the road in front of it and we're in a 4G dead zone and it can't get an accurate enough GPS fix to figure out where it's going, I'd be perfectly happy if it safely pulled over to the side of the road and stopped. Heck, I'd prefer to have it safely pull over rather than try to continue based on some premise of "this vehicle must be able to navigate under all possible conditions".
I think you've misunderstood level 4; it seems to be defined as you suggest 4.5 ought to be: [http://www.sae.org/misc/pdfs/automated_driving.pdf]; the car has to operate safely after autonomous mode is enabled, even if the human has no further interactions (even if the car requests human intervention).
It seems like the definition of level 4 is a bit ambiguous; your link defines it as operating without human intervention, but only in "some driving modes". I've seen this interpreted as meaning that the vehicle can operate without human intervention as long as the driving mode does not change, but may still require human intervention if the conditions change (e.g., it starts snowing, or you exit the highway).
The key distinction here is that a Level 4 car is safe even when the driver does not intervene. If it starts snowing, the car will park by the side of the road if the driver isn't there to take the wheel.
Level 3 is when you have ample time before the switchover, and Level 2 is when the driver must be there in a matter of seconds.
I just realized that the Star Wars "you have a 45% chance of death if you proceed to travel" warning could totally become a reality in hazardous travel conditions.
Heck, there could be live read-out of "probability of accident", "probability of death" etc... once we get to level 4 to 5. Computer asks you whether or not to proceed.
What's the threshold at which the computer should be programmed not to proceed? This moral rabbit hole gets big picture fast.
the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene
The only way to do all aspects of the dynamic driving task when conditions are untenable is to stop.
Exactly! And that's a large part of why I'd feel more comfortable if the car pulled over. (I could imagine a future vehicle pulling over and asking "should I try to continue going slowly, or do you want to wait until conditions improve?")
Pulling over isn't that safe. Tons of police die on the side of highways. I mean it's generally a good fallback but you can't just pull over anywhere if in doubt. Would need to be known safe I think.
Agreed. Level 5 autonomy as described by the article was basically "magic driver that can handle any situation ever." By that definition, a competent human driver who's paying attention in clear weather still doesn't quite qualify. There are plenty of conditions that will cause a human driver to pull over, or (worse) to confidently continue without realising how little control they have over their vehicle (just look up 'snow driving accident' on YouTube).
It doesn't have to be perfect to be better than any human in any situation (including those that require pulling over). I think that's what level 5 is referring to. Otherwise yeah, it's meaningless.
I think part of the difficulty with level 4 is that you're always going to have to accept some error rate in the 'safely stop' behavior, as that simply isn't possible in all situations.
I personally tried to give the Google car "hard miles" by cutting it off in traffic last year. I believe I was being helpful, but I feel bad about scaring the "driver."
Y'know, that's a good point. Flow of traffic on several highways around here is 15mph above the speed limit. People are going to hate self driving cars on those unless the speed limit goes up. Maneuvers to avoid or pass them may well make the highways more dangerous, while human drivers are still in the majority.
> At least one manufacturer is afraid that human drivers will bully self driving cars operating with level two autonomy, so they are taking care that in their level 3 real world trials the cars look identical to conventional models, so that other drivers will not cut them off and take advantage of the heightened safety levels that lead to autonomous vehicle driving more cautiously.
It's refreshing to see an article like this among the many other articles predicting self-driving cars will be here in ~2 years. Volvo has committed to selling level 4 vehicles by the end of this year (in limited locations) - I'd love for them to follow through and see how good their technology really is. I suspect the transition will be far more gradual than most people are willing to admit, and take decades instead of years. Keep in mind, serious research into self-driving cars started in the 80's, and then there was a huge increase in interest in the 2000's because of the DARPA challenges. After that Google took interest, and recently the major automakers have become serious about it as well. But it's worth noting that originally Chris Urmson had predicted Google would be selling fully autonomous cars today, and instead Google lost a lot of their talent in the last year including Urmson and there is no public deadline for selling a product.
I think progress is great and self-driving cars are awesome, but I don't believe they're just around the corner.
My great worry with self-driving cars is that they shouldn't be right around the corner, but they might be anyways. Marketing departments might move faster than engineering, smooth-talking the buyers into taking uninformed risks in order to expand the company's training data.
I know they can deal with the risks, but what about the "informed" part? My pet peeve with Tesla's marketing dept is that there are tons of Tesla drivers who seem to think the autopilot is more capable than it is: https://youtu.be/qnZHRupjl5E Those people are surely not aware of the risks they are taking, so I'd bet even the more responsible users aren't aware of its limits.
OMG that video. I thought people learned their lesson after the FL man's death. The culture is even worse now!
Cars led to more deaths than people had on horseback. I'm starting to feel autonomous vehicles will be the same sort of innovation. More elderly/younger people could use them, and ultimately more deaths will result as people's carelessness increases.
Tesla is still cashing in while resisting safety features like requiring hands on the wheel. Meanwhile, accidents like the death in Hong Kong go mostly unnoticed in the US.
I'm surprised the NHTSA doesn't have a rule saying auto-pilot should be 100% autonomous, in control, and capable of driving in the current conditions, or it should pull over, turn itself off, and put the car in manual control mode. Counting on drivers to understand when auto-pilot can and can't handle driving conditions seems like a completely stupid idea and a recipe for disaster.
Holy crap. In some sense it is really surprising there's only been one fatality. Then again, tomorrow there might be 10 more if people really do stuff like that...
There have been more than just the one fatality. I know of at least one other, but who knows if that's the extent. And there have probably been plenty of crashes and injuries.
> There have been more than just the one fatality. I know of at least one other, but who knows if that's the extent. And there have probably been plenty of crashes and injuries.
Yup. The Tesla accident in Hong Kong (or Hebei province?) [1] seems like another likely case. We only know about that because the driver had a dashcam that saved the video, and the parents found a lawyer to sue. Tesla claims they don't have data on whether or not autopilot was active at the time of the crash. Yet the dashcam survived...
Tesla and others need some regulation with teeth. They should be required to have black-box style data recorders. They should also be required to report accidents involving autopilot.
Currently, car companies are not required to report accident rates involving autonomous-driving modes in most states. I think only California and a couple others require it.
> Because of the damage caused by the collision, the car was physically incapable of transmitting log data to our servers and we therefore have no way of knowing whether or not Autopilot was engaged at the time of the crash.
Well, that is even more troubling. Looking past the suspicious lack of logs from the car in the incident, it should be a rock-solid, lose-your-business-license requirement for any kind of autonomy in cars that log data be preserved in accidents. I can imagine very few scenarios in which a black-box-type device could not be salvaged from a wrecked car.
EDIT: From the wording, it seems like it may actually have been the 'transmitting' of the data that was impossible. That would make things even worse, if Tesla could not even be bothered to requisition the car data (through legal means if necessary) in a case where a bug in their software may have killed someone.
It's interesting; I vaguely remember reading a report, I think by the NHTSA, that discusses how, despite the possibility of accidents from users not using the features properly, these features still result in a net gain in safety.
> Given that self driving cars are a potential job creator
This is the first time I’ve seen someone refer to automated cars as a job creator. It’s almost always presented as the thing that will kill the trucking and taxi jobs by the millions.
I literally just came to HN after browsing slate.com for a bit, currently on their front page is an article about a railroad company that sent 86% of their lobbying money to Trump and GOP PACs this last election cycle. Half of this company's revenue is based in pesos and susceptible to dollar/peso fluctuations and its core business is related to moving goods back and forth between the US and Mexico. Trump took dead aim at free trade, and NAFTA in particular on the campaign trail. Self interest seems to play no part in who people (and companies, thank you Citizens United) choose to support.
What a horrible waste of work that would be, in the sense that the energy and time taken from other road users just for a little advertising would be obnoxious.
Adverts only continue to exist because they ultimately find the sweet spot between inconvenience and payoff. A mobile road billboard inconveniences everyone and the payoff is nil. In contrast to a YouTube advert, where the payoff is the YouTube video, a mobile billboard has no payoff.
So like the trucks going up and down the Las Vegas strip, but hundreds and hundreds of them. Moving billboards everywhere. Unintended consequences indeed.
I disagree. Level 4 does not require driver attention. That is still a long way off.
From Wikipedia [1]:
Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.
Level 5: Other than setting the destination and starting the system, no human intervention is required. The automatic system can drive to any location where it is legal to drive and make its own decision.
By my reading of the article, stale2002 is correct. Pratt is clearly talking about the difference between Level 4 and Level 5 in his first two answers. Level 4 doesn't require driver attention, it just requires the driver to be (instantly?) available if the system has a problem, and this change-over is the situation that Pratt worries about in his fourth answer.
(NB: When he says "The article was talking about level 5", it's clear from context that stale2002 means that the article was talking about level 5 in terms of technological difficulty and timelines.)
I don't think so, mostly because I don't think a qualifying system would actually require an instant switch-over. I can't find anywhere that the timescale is defined; if I had to guess, I'd say it's defined in terms of whatever the minimum safe timescale is (empirically), i.e., Level 4 is whatever it needs to be to allow a human to read a book safely, but not necessarily sleep.
For instance, a Level 4 car probably can't deal with unexpected road construction, and might simply bring the car to a halt, smoothly but quickly. It would be dangerous if the driver was asleep and the car stopped dead on the highway for the 30 seconds it takes someone to wakeup and get oriented, but it could be pretty safe if the driver just needs to look up from their movie and grab the wheel.
So basically by your definition, level 5 will never ever happen? Seems a bit useless then TBH.
Also, as pointed out in another comment, Level 4 does require driver changeover, it's just that the car is supposed to safely handle the event of driver not responding to that request (e.g. by stopping in the middle of the road).
> So basically by your definition, level 5 will never ever happen? Seems a bit useless then TBH.
Well, even trains on tracks have humans ready to pull the brake if something is wrong up ahead.
Would you ride a train that had an accident caused by something for which its driverless system was not prepared, but that a human conductor could have avoided?
I think it's possible level 5 will happen. There will still be accidents on roads where only driverless vehicles exist. Some people will choose to ride in such vehicles, and some won't.
No, what I'm trying to get at is GP post changed the definition of level 5 from "some cars don't have steering wheels" to "all cars don't have steering wheels". A non-self-driving car will (by definition) always be cheaper than a self-driving car, and some people just like driving or distrust machines. Together with the fact that (especially in the United States of Freedom) governments will never ban cars with steering wheels, that definition of level 5 will never happen.
> A non-self-driving car will (by definition) always be cheaper than a self-driving car
I don't think that's true at all. A self-driving car can save money by not having to have a steering wheel, pedals, hand-brake, etc. It possibly won't even require airbags or seat belts if safety is improved massively.
non-self driving cars could become expensive luxury cars
Surely you jest; do you really think that the cost of the wide array of self-driving sensors, the data-input subscription, and the hardware and software to run the self-driving system is cheaper than a few mass-produced bits of plastic and metal?
Maybe in the future. In particular, the "hardware and software to run the self-driving system" will eventually cost a few dollars (a small computer, zero amortized cost of software).
> Pratt is clearly talking about the difference between Level 4 and Level 5 in his first two answers
No, Pratt is saying that manufacturers are hyping level 4 as being around the corner, but it is not. Pratt does not think the claims for level 4 are sufficiently backed up:
"...That’s Level 4. And I wouldn’t even stop there: I would ask, “Is that at all times of the day, is it in all weather, is it in all traffic?”
> Level 4 doesn't require driver attention, it just requires the driver to be (instantly?) available if the system has a problem
How exactly would that work? You don't have to be attentive but you must be available to take over?
At any rate, this is not the definition provided by the Society of Automotive Engineers [1], which is the definition NHTSA has adopted for levels of autonomous vehicles.
SAE says for level 4:
"the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene"
In other words, the system could request human intervention, but it will still do something even if the human does not respond.
Wikipedia appears to have either summarized this in different words or has a different version.
> this change-over is the situation that Pratt worries about in his fourth answer.
I wholeheartedly agree with Pratt that "In some ways, the worst case is a car that will need driver intervention once every 200,000 miles..."
> "...That’s Level 4. And I wouldn’t even stop there: I would ask, “Is that at all times of the day, is it in all weather, is it in all traffic?”
But that doesn't actually contradict stale2002 or diminish the enormous potential economic importance by more than (say) a factor of 2. Level 4 systems that are available to use 60% of the time are still revolutionary, and compatible with both stale2002's and Pratt's comments.
> How exactly would that work?
See my comment to function_seven for my speculation. Although I used the Wikipedia definition, I believe everything I said is compatible with the definition from SAE you have quoted. In particular "the system could request human intervention, but it will still do something even if the human does not respond" does not conflict with "it just requires the driver to be available if the system has a problem" because the "something" the system may do is bring the car to a rapid (or even emergency) stop. This can become dangerous if the human can't resume control within a few seconds, but is an easily acceptable risk to occur once every 100k miles as a price of Level 4 autonomy.
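To make that fallback concrete, here is a toy sketch (the states, names and grace period are my assumptions for illustration, not SAE's or any manufacturer's spec):

    # Toy sketch of the fallback discussed above: the system requests intervention,
    # and if the driver does not respond within a grace period it performs a
    # minimal-risk maneuver (stopping) on its own. States, names and the grace
    # period are illustrative assumptions only.
    from enum import Enum, auto

    class Mode(Enum):
        AUTONOMOUS = auto()
        INTERVENTION_REQUESTED = auto()
        MINIMAL_RISK_STOP = auto()
        MANUAL = auto()

    GRACE_PERIOD_S = 10.0  # assumed time the driver gets to take over

    def step(mode, seconds_since_request, driver_took_over, situation_untenable):
        if mode is Mode.AUTONOMOUS and situation_untenable:
            return Mode.INTERVENTION_REQUESTED
        if mode is Mode.INTERVENTION_REQUESTED:
            if driver_took_over:
                return Mode.MANUAL
            if seconds_since_request > GRACE_PERIOD_S:
                return Mode.MINIMAL_RISK_STOP  # driver absent: the car stops itself
        return mode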
stale2002 said the article is talking about level 5, and that level 4 is around the corner.
Yet the article is talking about level 5, 4, and below. It is saying some car companies are overhyping their capability to reach levels 4 and 5.
Pratt feels even level 4 is not right around the corner.
> Level 4 systems that are available to use 60% of the time are still revolutionary
To reach a certain level, you must operate under the definition of that level 100% of the time. You can't be level 4 60% of the time. That's level 3.
I agree it would be revolutionary to be at level 3. Right now no car is there. They're all at level 2, which require human monitoring of the driving situation.
> To reach a certain level, you must operate under the definition of that level 100% of the time. You can't be level 4 60% of the time. That's level 3
No. I'm looking at the SAE definitions, where it says
> A particular vehicle may have multiple driving automation features such that it could operate at different levels depending upon the feature(s) that are engaged.
clearly indicating that the levels describe modes of operation, not immutable car classifications.
Regardless, this has just become a semantic dispute. stale2002's comment makes the most sense under the interpretation "cars that can drive at Level 4 60% of the time are just around the corner", and that comment is a valuable counterpoint to the reasonable interpretation of Pratt's interview.
> the levels describe modes of operation, not immutable car classifications
Look at the column where human attention is required ("Monitoring of Driving Environment"). Every car in existence requires human monitoring in every driving mode.
The moment attention is not required, the car company becomes liable. Volvo will release a level 4 car this year where they assume liability; however, that is only for a hundred people in Sweden.
"Around the corner" means widely available, and that's just not the case.
Right now Tesla does not reliably save the data from its accidents. How could a car company assume liability when they can't even save the data?
> Level 4 doesn't require driver attention, it just requires the driver to be (instantly?) available if the system has a problem
Based on my best understanding of the article and the good description on these levels in the CES keynote linked in the article...
This is Level 2, when the driver must be available instantly. Level 3 is when the driver must be available but there's plenty of time (the article mentioned 15 seconds) to swap.
Level 4 should not need any human intervention when engaged. It can only be engaged when it's safe to do (e.g. in a city in good weather conditions) but once it's enabled, it fulfills the task given to it without ever needing to fall back to human hands. You should be able to sit on the back seat drinking beer. Level 4 should be able to safely stop the car and wait for assistance if things go bad without driver intervention.
Level 2 is where the cars out there are now. Level 5 is a long way away.
Google's cars can already cover almost half a year to a year's worth of the average individual's driving without any intervention at all. That is very impressive, and their streak is growing fairly quickly. Of course, that mileage would probably be covered in less than a week by a ride-share service, but consider that any intervention may also be predictable beforehand (e.g. weather conditions).
The article was talking about Level 5 as a long way off but it also spent time talking about a lack of consistency in how these levels are applied [1]. Some systems might state they are Level 4 but that's only true for well mapped good weather locations so one might legitimately wonder if such a system is truly Level 4.
The article is mainly about: Level 5 as difficult, inconsistent application of Levels, the fact that rare difficult situations carry most of the valuable information for a learning system, preventing mismatched expectations, untenability of systems which require human attention and vigilance. The final point made was, instead of focusing on self-driving cars, focus on cars which complement human ability. For example, humans are good at theory of mind and inference in rich world models while machines are better at total situational awareness.
I'll add another. Self-driving cars need models which, at the very least, are able to capture uncertainty over their predictions. This makes them more robust against glitching catastrophically in parts of the state space that are novel to them (a rough sketch of one approach is below).
[1] > Yes, so NHTSA had 1 to 4, and SAE was very smart, they said we’ll take Level 4 and we’ll split it into two things, 4 and 5, and the difference is that Level 5 is everywhere at any time, and 4 is only some places at some times. But otherwise they’re basically the same. The SAE levels are fine, but people keep making mistakes. And so Level 2 systems are often called Level 3, which is wrong. A key thing of Level 3 is that the person does not need to supervise the autonomy. So no guarding of the machine by the human being.
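One common way to capture that uncertainty is an ensemble of independently trained models whose disagreement flags novel inputs; a minimal sketch, where the models' predict() interface and the threshold are assumptions for illustration:

    # Minimal sketch of ensemble-based uncertainty: several independently trained
    # models predict, and high disagreement flags inputs the system has likely
    # never seen, so it can fall back to conservative behaviour. The models'
    # predict() interface and the threshold are illustrative assumptions.
    import numpy as np

    def predict_with_uncertainty(models, x):
        """Return the mean prediction and the per-output spread across the ensemble."""
        preds = np.stack([m.predict(x) for m in models])  # shape: (n_models, ...)
        return preds.mean(axis=0), preds.std(axis=0)

    def act(models, x, uncertainty_threshold=0.2):
        mean_pred, spread = predict_with_uncertainty(models, x)
        if spread.max() > uncertainty_threshold:
            return "fallback"  # novel part of the state space: slow down / hand over
        return mean_pred       # ensemble agrees: act on the prediction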
This is a little misleading. Level 4 is full autonomy within certain domains. So actually Level 4 is already here today -- on test campuses. The meaningful question is when is Level 4 coming to cities. Maybe not decades but definitely not "really right around the corner".
I took the liberty of drawing a few situations in which an AI would likely have no clue what to do, even if it detected the danger. I'll be adding some more, as every day on the freeway brings me new ideas (in fact I'm editing one right now). Something like a "Winograd Schema for self-driving cars":
http://blog.piekniewski.info/2017/01/19/what-would-an-autono...
Nice work! Some of the situations in the drawings imply that the car needs human-level commonsense to perform properly. For certain things like 'fire in a tunnel' or 'tornado crossing the road', there could be sufficient time to warn humans to take control or at least give verbal instructions, while the car is slowing down drastically or even turning away as a safety-first precaution.
The challenge for the autonomous car would be "how to know that it doesn't really know". I wonder if some existing philosophy or theories are applicable for such a purpose in the real world. Does anyone have pointers?
Your artwork is great! I'm not sure #1 would really be a problem as that situation never should occur and is totally preventable (there always should be traffic cones/barricades surrounding the cover, something the car easily could detect). In the rare case it does happen, the utility company would/should be at fault for leaving a manhole open like that.
I can also think of a few ways to prevent number 2 (basically, a combination of GPS + knowing where all intersections occur + road data from thousands of other connected cars = knowledge of where every stop sign is. Certainly not foolproof, but I think ultimately it is a problem that has possible solutions)
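Sketching that idea (the map data, coordinates and radius here are made up; a real system would presumably use proper survey maps and fleet data):

    # Hypothetical sketch of the map cross-check described above: even if a stop
    # sign is missing, covered, or fake, the car can consult a crowd-sourced map
    # of known sign locations keyed off its GPS fix. Coordinates and the radius
    # are invented for illustration.
    import math

    KNOWN_STOP_SIGNS = [  # (lat, lon) gathered from fleet/survey data
        (37.7793, -122.4192),
        (37.7801, -122.4175),
    ]

    def metres_between(a, b):
        # Small-distance approximation; fine for a local proximity check.
        dlat = (a[0] - b[0]) * 111_320
        dlon = (a[1] - b[1]) * 111_320 * math.cos(math.radians(a[0]))
        return math.hypot(dlat, dlon)

    def expect_stop_sign(gps_fix, radius_m=30.0):
        """True if the map says a stop sign should be near the current position."""
        return any(metres_between(gps_fix, sign) < radius_m for sign in KNOWN_STOP_SIGNS)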
The rest of your examples are fantastic though, #3 being quite terrifying actually. They are all very intriguing thought-experiments and I look forward to seeing your future additions!
> They're who's at fault, but you're the one who's dead.
This is what my dad taught me too, and why I've since always been cautious as a driver, bicyclist, and pedestrian, even when I have the right of way. Being right isn't as fun when someone's on their way to the hospital/morgue.
An open manhole will not cause you to veer out of control and die in a horrific accident. At worst, you might have a blowout. But in general, unless you have a very small car, most cars' tires are larger in diameter than the hole itself.
Provided that your tires are normal (and not the thin-sidewall don't-my-ride-look-cool tires with big rims) and in relatively normal balance - and you are going a normal speed - the wheel most likely will drop slightly, then bounce off the lip, jarring the car forcefully but likely not doing much damage.
If the tire doesn't blow (it may), then you might get a cracked rim (if alloy) or a bent rim (if steel, depending on the force and deformation), and you would probably want to have the tire inspected (because the plies may be compromised in the area of the sharp impact, which could cause premature interior delamination of the tire in the future).
Yes - ideally you want you or your car to avoid an open manhole or large pothole, but in general, it's unlikely to be the cause of a serious accident.
A sinkhole (named "Steve"?) just opened up in a highway near Oakland, CA. Initial photos showed a hole a couple feet across. Manhole-sized. Repair crews later found that under the asphalt the sinkhole was large enough to swallow a car. Human drivers recognized that something was amiss and avoided the hole. Would a self-driving car?
Let's put "ability to detect solid road in front of car" near the top of the requirements list. Whether it is a pothole, sinkhole, missing manhole cover, or missing bridge, it needs to be a mandatory requirement. From what I've seen of LIDAR imaging it seems like it is possible.
Hey, I just finished another one. A school bus with a swinging open stop sign in the middle of the freeway.
Anyway, just food for thought: autonomy really requires a lot of "intelligence", and our technology is not quite there yet when it comes to dealing with all these bizarre corner cases. I'm glad you like it.
I think such things can be discounted by knowing that motorways never have stop signs (same with the prank stop signs). Of course, there are a lot of things that only cause a change in driving behaviour in very specific contexts. The bus' stop sign probably only applies when the bus is stopped. Similarly, in Germany at least a bus may turn on hazard lights on a bus stop, requiring everyone to drive past only at walking speed. A bus stopped on the shoulder of a motorway with hazard lights on is a different context again and thus should not trigger the same behaviour.
There are a lot of rules and laws and they change from country to country, or in the US' case, even from state to state. Self-driving cars must know these things and react accordingly. So I think the scenarios presented here are just a few (admittedly, more far-fetched than others) more contexts amidst the probably hundreds of others that already have to work correctly for switching safely between city and motorway driving, driving in a living street, observing right of way correctly in all circumstances (roundabouts, weird stuff like four-way stops, signs changing ROW for one intersection, or a stretch of road, lowered kerbs, people exiting a living street even though it's to the right, cars on an on-ramp and perhaps letting them in based on how far the on-ramp still continues, ...).
Stop signs are interesting in any case, since they have a characteristic shape. If we go full autonomous, then snow-covered signs must be correctly observed as well, at which point any octagon shape may be a stop sign (perhaps, again, depending on context). Same with signs that don't reflect well anymore at night.
The real sign SDVs are here will be when infrastructure starts accommodating their needs. Humans aren't really good at driving either, so we've invented a lot of ways to help them and direct their attention. Open manholes are supposed to be marked clearly, because people do miss that stuff. When there's snow on the road hiding lane markings, someone will come and clean it out. Signs are made to be retroreflective. Etc.
So at some point, I suppose the infrastructure (broadly understood - including laws) may be modified to reduce the dependence on cultural context and other things machines are weak at. So for instance, it won't be every sorta-octagonal shape that works as stop sign, it will be required by law to be clearly visible and also have some machine-friendly accommodations, and SDVs will be free to ignore signs without those accommodations.
(Doesn't solve the prank problem, but humans are equally vulnerable to a targeted prank anyway.)
> (Doesn't solve the prank problem, but humans are equally vulnerable to a targeted prank anyway.)
I would hesitate to use the word "equally". People are actually quite robust. In particular, the second human in a row will certainly not be tricked by the same prank that tricked the first one.
I want to emphasize the word "targeted" I used. Pranks involve an intelligent agent with malicious intent and an attacker's advantage - i.e. prankster is free to exploit any vulnerability of its victim. People have different vulnerabities than machines, but they still have them.
Sure they do, I agree with that. But the word "equally" suggests the susceptibility is the same. I would actually emphasise the difference. It is much easier to fool a machine than a human, particularly if we have a copy of the machine at hand and can tinker with it (see adversarial examples for deep nets). Humans are all different, so we can never expect our "adversarial example" to be 100% certain to work.
> When there's snow on the road hiding lane markings, someone will come and clean it out.
In Nordic countries, you don't see the lane markings for several months, they are under snow and ice. And sometimes, when it melts in April or May, you notice you have been driving on the roadside for a few months :-)
Oh, and you don't see anything at all if you are behind a truck or a bus.
In some parts of some Nordic countries. In my part of my Nordic country we haven't had proper snow for probably about 6 or 7 years and even then it was only for a few weeks.
But more generally, if the snow-clearing machines were also driverless, then they might be able to run more of them more often and keep the roads clearer.
It's not just a matter of running them often. Where are you going to put all this excess snow? And operating the plows won't be free even without labor.
I'm not sure where that came from. One fatality per 200 million miles isn't good? Seriously, there are millions of people travelling every day. And true, there are accidents, but I think this "humans aren't really good at driving" mantra is not really serious (though frequently repeated recently by some companies' PR).
Attentive humans are extremely good at driving; distracted humans are much worse. So perhaps the technology should focus first on the much easier task of making sure the driver pays attention. That would probably save many lives before we can have a real autonomous car.
>When there's snow on the road hiding lane markings, someone will come and clean it out.
In New England, snow can completely cover the road surface for days or even weeks at a time, and ever-changing piles of snow cover the curbs and parts of the lanes. Humans just choose a path without regard to where the lanes are in the summer. On some roads this turns a four lane road into a two lane road with a lane-width snow pile between the lanes. In a few spots it turns a two lane road into a one lane, with drivers from different directions taking turns.
Yeah, there were a few spots in my neighborhood that they gave up on trying to plow, but those of us with 4WD trucks were able to get through. Not sure how a self driving truck is going to know which snowbanks it can drive through/over and which it can't. Sometimes I couldn't tell until I tried. That was fun!
Now I really wish I had taken more photos that year specifically to illustrate this sort of thing.
I know a few places where the stop sign is > 90% covered by bushes in summertime, and the continuous transversal white line > 90% covered by gravel (and the remaining 10% have worn off). Yet people do stop. Either because they know the place, or because they "feel" there is something suspicious (the comparative size of roads, the angle of the crossroad, the fact that there is a "pile" of gravel there, etc.). Not sure how a car could decide this by itself.
You could discount that by noticing that stop sign generally shouldn't move relative to the ground. But then, you may want to notice when a police car wants to pull you over by waving that thing they wave out of the car's window...
I actually love trying to make computers deal with the real world - it quickly reveals just how goddamn complicated the real world is, and how many things we think as hard and fast are utterly arbitrary.
Right, there are cases where such a moving stop sign would actually be a real stop sign. The problem is, we try to program in the complexity that cannot be anticipated.
That's something that's easy to train. Each of these is a bit country-specific, like how a road sign is mounted and where, but this is far from a real problem.
LIDAR imaging already scans the road for its topography. An open manhole would be easily detected (it's equivalent to a particularly deep pothole - un-navigable).
I don't know how Tesla's video only solution would cope, but they're not Level 5 yet.
How would it show up differently than a regular pothole? What if it's almost filled with water and the only thing giving away its true nature is its roundness? The point is to explore the limits. If these aren't the limits, then they are likely very close. I'd like people to think of those cases before they jump into their car and start playing cards (like in one of the videos in this thread).
Basic geometry. A pothole of greater than a certain depth, at a known angle from the sensor, will produce a different distance profile (namely, rather than the expected arc, you'd see a sudden increase in distance, corresponding to a non-planar surface). Depth is inferred from the deviation from the plane of the road surface around it.
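As a very rough sketch of that check (the plane model and the depth threshold are simplifications for illustration, not any vendor's actual pipeline):

    # Rough sketch of the plane-deviation check described above: compare each
    # LIDAR return against the fitted road plane and flag points that fall well
    # below it. The plane model and depth threshold are simplified assumptions.
    import numpy as np

    DEPTH_THRESHOLD_M = 0.10  # assumed depth beyond which a hole is treated as un-navigable

    def flag_holes(points_xyz, road_plane):
        """points_xyz: (N, 3) LIDAR returns in the vehicle frame.
        road_plane: (a, b, c, d) of the fitted plane ax + by + cz + d = 0,
        normal pointing up. Returns a boolean mask of points well below the plane."""
        a, b, c, d = road_plane
        normal = np.array([a, b, c])
        signed_dist = (points_xyz @ normal + d) / np.linalg.norm(normal)  # negative = below road
        return signed_dist < -DEPTH_THRESHOLD_M

    # A cluster of flagged returns ahead of the car marks a drop in the road
    # surface (deep pothole, open manhole) and gets treated like any other obstacle.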
This is not actually a particularly interesting edge case.
The water-case is more interesting, but I would counter by posing the question of how a human would know whether a water-filled pot hole is safe to navigate? A clear water pothole would not impinge the LIDAR, a murky water one might and could fool LIDAR I suppose (I can not find any solid info on this interface).
Of course car RADAR and a visual camera are likely to also be fitted - both of which could identify a water-filled pothole which LIDAR might struggle with (RADAR can penetrate water and find the unknown backscatter, a camera can simply see the murky-water pothole and choose to navigate around it or stop - in both cases a human can't make a better informed decision).
It depends on the context. Humans may make a better decision if there is something else providing the information. Perhaps the manhole cover sits at the curb triggering immediate attention. Perhaps there are cones, only in the wrong place because some drunk guy moved them for fun. Such is reality.
The problem is not so much availability of information (a sensor packed car has a lot more information than a human behind a windshield), but making sense of that information, particularly in those 0.0001% uncommon, strange cases.
Humans can connect evidence (a manhole cover sitting next to a water-filled hole) because they generally know how things work (aka common-sense knowledge, a long-standing problem in AI). Humans can infer what happened and predict what could happen. These things are not really available to AI, which these days is more see-react than see-anticipate-react. I have a specific post on this on my blog: http://blog.piekniewski.info/2016/11/03/reactive-vs-predicti...
The blog has many other posts on the limits of today's approach and on some ideas to fix it in the future.
I agree that some of these cases may be challenging to some humans (particularly inattentive), but we want the self driving car to be (much) safer than inattentive driver, so we need to set the bar high.
Clearly if you consider any of these scenarios in isolation we can dream up a solution, but can you do that with literally any adverse scenario? That seems like the point to me.
Yes, that is exactly my point. Solving these particular situations is like typing all the possible Winograd Schema sentences into a chat bot. Certainly possible but does not solve a larger problem.
In reality there will always be a different situation. These situations sit in the statistical long tail (are too infrequent to reliably train, and too frequent to ignore) and vary enormously from case to case. These are just examples.
Except if the first scenario is solvable (fairly trivially) then why should I assume there must exist an unsolvable scenario?
An argument from the vagueness of "humans have context" with regard to driving is a poor one - humans that are driving do not do a good job of analysing context, because of reaction times, and do not share a common context that they react to similarly.
You haven't made a compelling case, because you've yet to present a compelling example (i.e. one definitely unsolvable by reasonably usable technology on an autonomous platform). The manhole scenario is based on assumptions about the operation of LIDAR which simply aren't true, and you've had to modify it in a number of ways to try to make it tricky (i.e. when has there ever been a manhole completely full of water?).
I think you're still confused. The point isn't that any of these scenarios don't have solutions. It's that there is a practically infinite number of scenarios like them and it's impossible to plan for each and every one of them.
> In the rare case it does happen, the utility company would/should be at fault for leaving a manhole open like that.
Sometimes, the utility wouldn't be at fault. It has happened in the past that people would steal manhole covers to sell them for scrap (you know, to get money for their next 'fix').
This is less of an issue now, as most scrap companies won't take manhole covers anymore, unless the seller can prove they represent the locality shown on the cover, and that they have such authorization...
re: open manholes - Why would an AI have no clue what to do if it could detect the danger? Drive over the middle, drive around it, or stop. I don't think there is any reason to think AI would be worse at handling the situation than people currently are. See this manhole "catapult" video compilation.
There are lots of places where ambushing drivers to steal them is pretty common. For example, things like throwing rocks from hiding (like a pedestrian overpass) to break some cars' windows and make them stop, and then some accomplices come out and attack. This was really common a few years ago near my home. Now, I think they will just need a stop sign, and do the same as #5.
Not sure why this would be a problem? The car can identify a dangerous road surface and navigate around it.
>Many stop signs.
Is it really so bad if the car does stop at every one? It's inconvenient, but not dangerous. Unless there's someone behind in which case the car should know whether there's a safe stopping distance behind and act accordingly.
>Fire in a tunnel.
This doesn't seem like it would be that difficult to detect. And also, I would expect self-driving cars to have an emergency stop button.
>Tornado.
I would imagine this could be detected as a visual anomaly, but more generally... yeah natural disasters suck.
>Potential car-jacking.
I really don't think this should be the car's responsibility. If there's a serious risk of people with guns ambushing you on a road, then it's not safe and you shouldn't be driving there. How the self-driving car reacts to that is the least of my concerns.
> I really don't think this should be the car's responsibility. If there's a serious risk of people with guns ambushing you on a road, then it's not safe and you shouldn't be driving there. How the self-driving car reacts to that is the least of my concerns.
Well, I think the idea behind this prompt was more about how it would handle humans where there shouldn't be any. I would imagine that, given the premise that the car would stop for unexpected pedestrians, a carjacking/mugging of this type should be as simple as finding a remote road, waiting for a car with a preferred target, and then intercepting - the car would acquiesce and leave a person vulnerable to an attack (as simple as smashing the windows with a window breaker and then attacking the occupant).
The picture has an extreme outlier, but criminals adjust to technology pretty fast, especially when it makes their lives easier. It may not be armed gunmen looking to get you, but if you can be assured you'll get a target to stop, I'm not sure why criminals wouldn't exploit it.
edit: change "attach" to "attack" as I meant it originally
If they just want to steal your car, coming to a stop and abandoning the vehicle is probably the safest thing to do anyway.
But if they actually intend to harm you, you've got bigger problems, and you should probably hire a professional defensive driver instead of expecting consumer AI to support your edge case.
I'm not sure I'm explaiing the scenario well - this isn't a planned attack, it's an opportunity attack where you're the victim based simply on the fact that you happened to be there, much like a mugging.
Assume the following for a moment: It is known that, when presented with pedestrians in an unexpected area, an automated car will plot away around them or will yield until they are no longer in the way.
Suppose that, given that condition, you have your car drive you from downtown to your small suburb, which requires going down a generally empty road (i.e., no one is around because it's late). In the distance, 4 people form a loose barrier that the car can't safely pass through, so it triggers the logic to yield to pedestrians. The people are muggers, and they quickly break the windows and proceed to mug.
Yes, it's a very specific scenario, but if such logic exists, how long until this scenario becomes commonplace? This isn't asking AI to evaluate and protect people from targeted attacks or inventing paranoid delusions of importance; it's about figuring out how to respond to a fairly simple abuse of an often-called-for bit of logic in the AI.
What do you think a human would do differently in this scenario? If a normal person is driving home late and sees even just one person standing in the road, they'll probably slow down for them before even thinking about it, just like the car. I think if this were such an effective way to mug people, it would already be a major problem.
And again, you're talking about a car that's surrounded by cameras by design. It would be easy to include a button that immediately starts streaming all camera info to cloud storage (or indeed, just do so by default if mugging were such a huge problem).
Did I say that? I said I don't think they will be the (initial) target market, and thus the car's inability to deal with carjackers will be a non-issue.
> But if they actually intend to harm you, you've got bigger problems, and you should probably hire a professional defensive driver instead of expecting consumer AI to support your edge case.
I mean this is just reality in a lot of places; I don't see how you can just handwave it away. Cars are designed to operate in all kinds of extremes that probably don't apply to your personal situation.
But it's just an example, is the point. We don't have big carjacking rings, but we have plenty of extreme weather, old roads (even dirt roads depending on where you are), and other extreme circumstances that maybe don't matter to somebody who never wants to leave a forty-mile radius of Mountain View.
Even in an idyll you could imagine some novel circumstance created by, say, downed power lines.
This is a method of committing a carjacking that works today, it's not some kind of new exploit. Very few people will intentionally run down a person in the road, as opposed to stopping, because they think something fishy might be going on.
The scenario I'm familiar with involved a bicyclist crashing, or laying down, his bike on a slow stretch of road in an industrial area. His companions would approach the car when it stopped. The deterrent for this is a harsh criminal penalty, not AI.
It never became a commonplace crime, in spite of being relatively simple.
Rather than train automotive AI to handle this case, we should just stipulate that if you live in a godforsaken place that might require you to run over somebody during your drive in order to survive banditry, you should keep your seatbelt fully fastened and remain aware enough to retake control of the vehicle at short notice should it stop. I'm sure some airline has a sign that could be reused for this purpose.
Or, you know, you could just drive yourself. Or pay a driver. Or stop fantasizing about embarrassingly absurd stuff that has nothing to do with the efficacy of automotive autonomy.
The entire point of the article you're discussing is that there are a million bizarre little circumstances like this that the software will probably fail to take into account, not that carjacking specifically is unsurmountable. This stuff isn't "embarrassingly absurd;" it happens daily.
> The entire point of the article you're discussing is that there are a million bizarre little circumstances like this
In defense of the article, which is pretty reasonable, it doesn't mention the ridiculous hijacking example you were harping on. That is quite a unique situation, technically and ethically.
The other examples you gave: bad roads, downed power lines, weather, and fire are all much more reasonable examples, with much more straightforward solutions available. It's essentially obstacle avoidance and exception handling. The article's example of situations involving not having any safe place to stop is even more interesting.
edit: I was referring to TFA, not to the artist who illustrated some stuff on his blog and shared it here. Which was also a fine effort...
> In defense of the article, which is pretty reasonable, it doesn't mention the ridiculous hijacking example you were harping on. That is quite a unique situation, technically and ethically.
It is clearly one of the implied reasons a bunch of armed men would be standing around on the road in that picture. What makes it "ridiculous," exactly?
If you're on a road with armed men with hostile intent, the fact that your autonomous car is unable to offer a solution is absolutely the least of your worries. The unique properties of the armed men on the side of the road problem are not representative of the more general problems vehicle autonomy involves. Take your pick.
Speeding past or turning around are sensible actions a human driver could take that the AI probably would not. The whole point of that example is that the appropriate response to armed guys on the road is not the same in one context as another.
> This doesn't seem like it would be that difficult to detect. And also, I would expect self-driving cars to have an emergency stop button.
That is, if the passenger pays enough attention, which they won't. Also note: each one of these cases can indeed be programmed in. But that is like fighting the Winograd schema by typing in all possible sentences. In reality none of this will happen, but something else will, which we don't even anticipate.
Assuming you've accounted for these scenarios, yea, everything may be just fine.
I think the point is that there are millions of different things that can go wrong, and many have a 1 in 1000000 occurrence. Human intelligence is able to improvise, but what will cars do?
> Human intelligence is able to improvise, but what will cars do?
Halt, and send a warning signal to any vehicles nearby. Which is better than what we can ordinarily accomplish today in those one-in-a-million occurrences.
I live in Arizona and I drive through twisters like that all the time. They are a constant feature in the desert. Along with tumbleweeds blowing along, which disintegrate nicely when hit by your car, but would appear as a large solid mass to the autonomous system and probably freak it out (as it should).
I think the missing piece here is commonsense reasoning (I really like your art work by the way!). There has been a lot of work in commonsense reasoning in symbolic AI but not sure if anyone is working on that in machine learning.
Yup, there is a boatload of low-level knowledge missing from any AI that we build. It is painfully visible with robots (and autonomous cars are robots as well); see the DARPA challenge video, which BTW was then led by Gill Pratt: https://www.youtube.com/watch?v=g0TaYhjpOfo&ab_channel=IEEES...
A lot of stuff we take for granted, such as supporting oneself against a nearby wall when losing balance, is completely out of reach of today's "AI".
Really cool page. I think the journalists are missing that the remaining 5% of situations to be solved are much harder than the 95% of commonplace situations.
This is unlikely to cause a problem with a car's direction or motion of travel; in the worst case, it may cause a blowout, but tires and rims are (purposefully) surprisingly tough. I've hit potholes as large as manholes, and other than giving me a surprise, no damage was done.
Then again, I drive a pickup truck - I wouldn't expect something much smaller to handle as well.
Still, the rate of speed and balance of the car would all play into the scenario. While it would be better for a car to avoid an open manhole (or pothole for that matter), it generally isn't a crazy scenario if the car hits one, either.
If you want to see and hear about crazy stories of mishaps people have, yet the car continues to be mostly drivable, check out the sub-reddit "Just Rolled Into The Shop":
These are interesting, and clever, and I agree difficult, but I don't think they necessarily need to slow down the progress of self-driving cars much at all.
Tesla's data so far suggests that their current autopilot implementation is reducing crashes by 40% - http://www.theverge.com/2017/1/19/14326258/teslas-crash-rate... - and while these cases are all problematic, they're all fairly rare, and the cases in which they come up and the car reacts wrongly and that's a major problem are going to be even rarer. On top of that, self-driving car performance is only going to improve.
It doesn't really matter (in terms of the value of self-driving cars -- there's a mostly independent marketing question) if there are a couple of cases where self-driving cars make the wrong choice and kill you, if there are many thousands of cases where they save your life. It's effectively changed a great many risky situations into some new and different but less likely ones.
If driving a self-driving car significantly increases my odds of survival while driving, in addition to giving me huge amounts of bonus free time, then I'm definitely interested, regardless of risky but rare cases like this.
I always thought that the first real application for level 4 would be highways. Aren't we really close to that? They should be much easier than urban driving.
I hate this rhetoric about "saving even one life." It's closer to "saving five lives but killing four people who otherwise would have lived." Some people say that's the same thing. I disagree.
People accept that humans are imperfect and cause accidents. Accepting that computers will make mistakes and kill people is another story. Especially if the owner of the car is a 1 percenter or a mega corporation and it kills an innocent bystander.
Also, people aren't that accepting of human imperfection and expect to see people have licences taken away and jailed where they've been deemed to be behaving recklessly (even if they've racked up huge numbers of miles with no previous incidents)
Because if your family member is violently killed by a robot car doing something that seems inexplicable to the human mind, and with a fact pattern that doesn't match up with normal human error accidents you're used to, you're not likely to be consoled with a reminder that four people you've never met supposedly are still alive because they weren't killed in some place you've never been by a human driver that never took the wheel.
Human drivers kill more than 30000 people a year in the US alone. It's not about saving "even one life", self-driving cars would have to be pretty terrible to reach those numbers.
So every decision is a trolley-problem style tradeoff. Remember that your null action in this case is "save four lives but kill five people who otherwise would have lived." Where's your cutoff point?
I don't think this is a form of trolley problem. If the decision were going to be made unilaterally that either everyone or no one would use net-positive self-driving cars, then it would be a trolley problem. In reality, this decision is made on a driver-by-driver basis by the drivers themselves. If a driver is educated about when it's riskier and when it's safer than their normal driving, it's easy enough to punt the decision to them. Clearly that's not happening, though: https://youtu.be/qnZHRupjl5E
Yep. If autonomous vehicles become widespread before they are really ready and the news starts filling up with accidents a human would have avoided, it will cause a massive political backlash. The actual statistics will be irrelevant to most people.
Very frustrating how people fail to see the full context. However, it is also useful in this case: the engineers, marketers, and CEOs of companies building autonomous vehicles are aware of the huge negative media potential a single accident has. Thus management and marketing have an incentive to give engineering the time and resources to do things properly.
Yeah. The question for me is: is that "1% better driver" compared to the average? In that case, it's still going to drive worse than 49% of the driving population. On the other hand, if it's 1% better than the 95th percentile, then yeah, that makes sense as a target to me.
Slightly better than average doesn't sound that great to me.
A genie offers you a deal: "I will prevent all car accidents that cause death for an entire year. In exchange, at the end of that year, I will randomly kill people, one by one, until as many as XX% of the prevented deaths are 'repaid'."
The 1%-safety people would take the deal at 99% repaid deaths. But I think most people would only be comfortable with a much lower number.
Well, it's not really an equivalent situation. I'd feel personally responsible for the particular set of individuals killed by the genie rather than some other set of individuals. That's too much responsibility for me; I didn't ask for this crap, and I should know better than to trust random genie bargains.
In the least convenient universe (and therefore the most useful for examining this moral quandary), you don't have the luxury of dodging the question. If you choose to tell the genie to do nothing, then you're choosing for X people to die to save Y.
(The scenario needs work, though - it should be Y people chosen from the road users in question, not just Y random humans.)
Uber knows the pickup and destination of every trip before it begins. Which means they know the route before it begins. Which means they can assess route and conditions to see if they are suitable for autonomous cars.
So they roll out slowly, expanding the possible trips covered as their technology advances.
Since they have such fine control over which routes and which conditions the cars will be expected to perform in, they're actually ideally positioned to work with this technology.
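As a rough illustration of that kind of gating (purely a sketch -- the segment IDs, the approved-segment whitelist, and the conditions check are invented for the example, not anything Uber has described):

    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class Trip:
        pickup: str
        destination: str

    # Hypothetical whitelist of road segments where the autonomy stack is validated.
    APPROVED_SEGMENTS = {"seg-101", "seg-204", "seg-317", "seg-550"}

    def plan_route(trip: Trip) -> list[str]:
        """Stand-in for a real routing engine: the road-segment IDs a trip would use."""
        return ["seg-101", "seg-204", "seg-317"]  # placeholder result

    def conditions_ok(segments: list[str]) -> bool:
        """Stand-in for a live weather/construction check along those segments."""
        return True  # placeholder result

    def eligible_for_autonomy(trip: Trip) -> bool:
        segments = plan_route(trip)
        return all(s in APPROVED_SEGMENTS for s in segments) and conditions_ok(segments)

    # Dispatch an autonomous car only when the whole route is covered; otherwise
    # fall back to a human driver.
    trip = Trip(pickup="Ferry Building", destination="SFO")
    print("autonomous" if eligible_for_autonomy(trip) else "human-driven")

The key point is that the dispatcher can refuse any trip whose route leaves the validated area, which is a much easier problem than handling arbitrary roads.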
It's easy to accept this when you realize that nobody has even managed to make a generally useful voice assistant yet -- that's how ludicrous the "close to self-driving" claim is.
I have to agree. Especially a car that can work worldwide.
For example, here in Australia we have an animal called a kangaroo which, when a car approaches, will move completely randomly. This includes jumping straight in front of the car at the last second. Most drivers from these areas know to slow down and, if at all possible, drive toward the middle of the road.
Surely weird situations like this which require local knowledge exist all across the world. But I've yet to see any acknowledgement that this is useful.
>But I've yet to see any acknowledgement that this is useful.
I think that the reason why this isn't acknowledged (by people who should know) is likely to be a political one.
Examining the solution that I've typically seen given for this problem should explain why.
First, the assumption is that driving algorithms are updated to account for any edge case (for instance, an unpredictable animal jumped out this time) after it occurs, so that case is then guaranteed not to recur.
This presents a problem: anyone who's ever worked with software before knows that that's an impossibility until a sufficient number of incidents happen in nearly identical ways, or until a fix is manually applied (which itself could take a very long time).
But we fall back on the second part of the argument: statistics. The fact that you're going to crash into something you wouldn't have before is outweighed by more than one other person not crashing, in places where the computer can predict and avoid a crash more effectively than a human.
This, too, presents a problem if you also consider the implication that the availability of manually driven cars will likely be degraded in some way after the introduction of self-driving ones (through increased cost, regulation, and insurance premiums, or simply being banned entirely).
If we discount these "weird" situations and fail to make allowances for them, people will get hurt where they wouldn't otherwise (even if the local or global sum of deaths is reduced). The benefits skew in favor of cities where traffic behavior is/will be much more consistent.
So it's best if local knowledge can't be useful -- why complicate the matter or feed skeptics talking points if you don't need to?
I made a bet with someone recently that within 10 years you will not be able to buy a self-driving car off the lot that can drive around someone who doesn't have a licence.
It's a testament to Tesla's propaganda that he took the bet. I look forward to claiming my $500 in 10 years time.
It's a bad bet because Tesla doesn't sell out of lots, and Google won't either.
If you reformulate the bet as "it will be possible to legally ride in a driverless cab without a license somewhere in the world by the end of 2027" I'll take the bet for $20.
I think you might win your original bet on the technicality of car sales lots being averse to selling these kinds of things and ride sharing services being early movers, and more amenable to insurance. Being able to keep cars in a specific service area will help.
Approximately 35,000 people died in car accidents in the US in 2015. That's almost 100 people each day. Imagine you had a self-driving car that was exactly as good as humans: it would still kill 100 people each day despite being human-level competent, and it would have a slim chance of surviving the public outrage and lawsuits. This points to the fact that self-driving cars need to be a few thousand times better than humans. As in all machine learning problems, the further you get along the curve, the more expensive the next 1% of improvement becomes compared with the last.
One way to circumvent this is to establish special lanes for self-driving cars where things are much more controlled and well defined, and where the cars in that lane can communicate with each other to avoid crashes. Long segments of highway might be great candidates. This could kick off a virtuous cycle where people buy self-driving cars to use that lane, which pressures authorities to make more lanes available for them, until eventually most lanes are for self-driving cars.
You don't need autocars to be 1000 times better than humans. You just need them to be as good as the average driver ALL THE TIME -- never distracted, sleepy, or driving unlawfully. Occasional lapses in driver competence cause the vast majority of crashes; I suspect the limitations of driver (or autocar) perception play a small role in most accidents compared to lapses.
As residents of a developed country with a very low death rate per mile driven, we also see traffic risks very differently than the world's norm. The annual death rate per 100k motor vehicles in the US is 13; in India it's 130; in Africa it's 574. In the less safe countries, I suspect driver error is the cause of 99% of accidents. Replacing human driver decisions with autocars would reduce worldwide fatality risk enormously, eliminating perhaps 90% of global traffic fatalities -- about one million people a year.
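A quick back-of-the-envelope check of those numbers (the per-100k rates and the roughly one-million figure are the ones quoted above, not independent data):

    # Back-of-the-envelope check of the rates quoted above.
    deaths_per_100k_vehicles = {"US": 13, "India": 130, "Africa": 574}

    # Relative risk versus the US baseline.
    for region, rate in deaths_per_100k_vehicles.items():
        print(f"{region}: {rate / deaths_per_100k_vehicles['US']:.0f}x the US rate")

    # Using the comment's figure of ~1 million global road deaths per year,
    # a 90% reduction would be roughly 900,000 lives annually.
    global_road_deaths = 1_000_000
    print(f"Lives saved at a 90% reduction: {int(global_road_deaths * 0.9):,}")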
This is Toyota, which is coming from behind here. They started work on self-driving last summer, which was late to be getting into it. They hired Gill Pratt, who took over the MIT Leg Lab after Raibert left, but never did much with it. His job was to start to build up an organization to work on self-driving. So he has to make excuses for Toyota being behind.
Toyota's recent technology direction isn't looking good. Instead of selling battery-electric cars, they're selling hydrogen fuel-cell cars in California.[1] Nobody is buying. Their electric cars are mini-cars. Something went wrong over there.
Pratt has some legitimate criticisms. But many of the really hard but rare problems can be solved by stopping, or going really slow. When you're going really slow, your sensor data from LIDAR is very good and you should have a full ground profile. If you have to inch your way through a field of rocks, that can be done. Remember, the DARPA Grand Challenge cars of 10 years ago could drive off road.
As for "why did it do that" issues, that's mostly a problem for those self-driving systems where machine learning is connected directly between camera and steering wheel. Those are easy to build and give the illusion of working, but are not going to work in hard cases. You want a world model and object recognition, like Google. You can tell how well your object recognition is working by checking its results at long range against its results at short range.
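A minimal sketch of that kind of long-range-versus-short-range self-check, assuming a hypothetical per-object track history (the data structures and labels are invented for illustration, not any particular perception stack):

    from collections import namedtuple

    Observation = namedtuple("Observation", ["track_id", "range_m", "label"])

    def long_range_agreement(observations: list) -> float:
        """Fraction of tracks whose farthest-range label matches the closest-range one."""
        tracks = {}
        for obs in observations:
            tracks.setdefault(obs.track_id, []).append(obs)
        agree = total = 0
        for history in tracks.values():
            history.sort(key=lambda o: o.range_m, reverse=True)  # farthest first
            farthest, closest = history[0], history[-1]
            total += 1
            agree += farthest.label == closest.label
        return agree / total if total else 0.0

    obs = [
        Observation("a", 120.0, "pedestrian"), Observation("a", 15.0, "pedestrian"),
        Observation("b", 90.0, "vehicle"),     Observation("b", 10.0, "cyclist"),
    ]
    print(long_range_agreement(obs))  # 0.5 -- half the long-range labels held up

Treating the short-range classification as ground truth gives a cheap, continuous measure of how trustworthy the long-range recognition is.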
They're waiting because they'll copy the right technology when it arrives and becomes economically feasible (they're basically saying they won't offer anything less than Level 5, which is smart). I don't think they're going to run out and make Corollas with 20K LIDARs slapped on just yet. Give it time. Tesla likes to brag, but honestly a little steering now and then doesn't count. With this tech, I think all-or-nothing is the safest approach if you're not a research powerhouse like Google. Let someone else stumble upon the solution, then copy it.
If you go slow in low visibility on the freeway, you are doing the correct thing. Driving too fast for the visibility is reckless and will get you a ticket for driving incorrectly for the conditions (assuming you don't get yourself or someone else killed).
When it's raining on the freeway, people in the faster lanes will slow down... to around 50-60 mph (instead of 70-80 mph). I don't think 50-60 mph is what Animats had in mind, though.
OP, I believe, means a panic button: full stop, pull over. As others have pointed out, that's not possible in many places. Now, those living the "self-driving cars are easy" dream will just say we have to make the roads safe for self-driving cars. How many of you have lived through major highway construction in a city, with the political infighting and the eminent-domain squabbles? I did in St. Louis; it took 5 years to agree on a half-assed plan to add 1 lane each way to Highway 40.
Or are you just going to strap an extra lane onto the Brooklyn Bridge because self-driving cars have to have it?
Also, as the article pointed out, human vision can see further than lidar, especially in rain, so a car driving at a "comfortable speed" for an entity with lidar vision is likely to get rear-ended by an entity with human vision.
Agreed about Toyota's puzzling lack of sense in pushing hydrogen over electricity. I guess they've completely given up the game - they were innovators with the 2004 Prius (still a decent car), but have stagnated since then.
Ha-le-freaking-lujah! I've been saying the same for years, ever since I completed the Udacity self-driving car AI course (taught by Sebastian Thrun, then head guru of autonomous vehicle research at Google). 70% of the problem was solved in the mid-'90s by Ernst Dickmanns et al.; Daimler-Benz had autonomous cars on the autobahn. After billions of dollars and two decades, we solved another 20% of the problem. The remaining 10% is not going to be conclusively solved in the next 20 years at least, not under the current constraints.
Now if roads are outfitted with instrumentation for autonomous vehicles, _and_ human drivers are prohibited on such roads, _then_ we _might_ see full autonomy. But not before.
If it were up to me, I'd focus on this instrumentation instead: RF guide wires/tags for localizing the car on the pavement, machine-readable signage (even in fog, snow, and heavy rain -- conditions cameras and LIDARs cannot deal with in principle), inter-car coordination mesh networks and their security, autonomous-vehicle-readable road-work signage, police gear to direct the traffic of autonomous vehicles, and so on and so forth. There: a hundred billion dollars' worth of startups in one paragraph.
As things stand, you can only be fully autonomous at 25mph in California where it never rains, as long as there's no fog, no road work, and no one has messed up the markings on the pavement.
> police gear to direct traffic of autonomous vehicles
Absolutely, been saying this for years (yes, me too :). We will eventually achieve a lot more autonomy and robotization of driving, but it will require massive infrastructural changes, much like the transition from horses to cars.
The current generation of self-driving cars is fairly impressive already, but what I'd like to see is a city full of them. I have a feeling that self-driving only works as long as the majority of drivers are human. It's trivially easy to come up with traffic situations that lead to a deadlock if everyone blindly follows the rules.
Humans will use hand signals, eye contact or someone will violate the rules a little or make way when they don't strictly have to in order to ensure that traffic flows.
I think that car-to-car and/or car-to-road communication will be required before large-scale deployment is possible. And I have not heard of any cross-manufacturer effort to create a protocol for such communication.
Although I do understand why the automotive industry is hell bent on getting their level 2 autonomy out there. Money from the customers is needed to keep the R&D effort going.
I personally did not understand why anyone would want a Level 2 car where you have to be constantly on the lookout until I visited Silicon Valley and drove a stint on US Hwy 101 in rush hour. And I guess this is the initial target market for the self-driving car industry: wealthy individuals who have a stressful morning commute in stop'n'go traffic. Money from these early adopters will go to funding the R&D for the next generation in the hopes that Level 4/5 will some day become reality.
But in my conservative estimate, that's still years away from being adopted en masse. There may be a significant minority of them on the road in 3-5 years but I can't imagine it working very well if they were in the majority.
At the moment I'd pay a lot of money for a car that I drive mostly myself, but that also had an advanced safety system which makes it much harder to get into an accident. I.e. fix my driving mistakes and lapses in judgment for me while I'm in control. I'd be willing to pay a hefty premium for this.
It's been a while since I've looked at the actual numbers, but from what I recall (the point could be made over a reasonable approximation of X), >80% of the spoken words in an average English-speaking adult's spoken-language corpus exist in the vocabulary of the average English-speaking kindergartner.
In other words, it takes us two-ish years to learn how to use our vocal cords, and another two to three years to get to 80% of an adult's vocabulary. And yet it takes another 12 years just to increase our vocabularies a handful of percent and to become fairly proficient at piecing together those words into coherent and mature enough communication for full time employment.
And that is just one example. Learn 90% of Haskell in one afternoon...learn the rest over the next two decades of your life. Learn 90% of derivatives trading from one book, but spend the rest of your life learning the rest. They aren't examples so much as they are a fact of life: There is an extremely long tail to learning, and extrapolation of where you will be given how fast you've learned up to some arbitrary point will be impossible.
Self-driving cars aren't just learning the rules of the road. They are learning human spatial sensory perception and fast heuristics for ad hoc path planning that have been evolved over many millennia. And yes, they are becoming extremely capable extremely quickly... but you won't be able to extrapolate linearly to a point where they can take over.
I'm not sure that's right. Googling around suggests that the average 5 year old knows somewhere between 3000 and 10000 words, and the average adult knows somewhere between 15000-20000 words. So I think saying a five year old has even 50% of an adult's total vocabulary is probably generous. Of course, if you meant that a five year old would understand about 80% of the words spoken by an adult over the course of a day, 80% sounds like a very conservative guess.
Not that it affects your point at all, I'm just thinking about words now. And while I'm on the subject, XKCD's "Thing Explainer," a book on how things work using only the 1000 most common words, is great fun.
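To make the token-coverage versus vocabulary-size distinction concrete: word frequencies roughly follow a Zipf distribution, so the most common few thousand words cover the bulk of running speech. A toy calculation, using the rough vocabulary sizes mentioned above and the standard Zipf exponent of 1 as simplifying assumptions:

    # Toy Zipf model: how much of running speech a small vocabulary covers.
    # Vocabulary sizes and the exponent are illustrative, not measured.
    adult_vocab = 20_000
    child_vocab = 5_000   # roughly the kindergartner figure discussed above

    weights = [1 / rank for rank in range(1, adult_vocab + 1)]
    coverage = sum(weights[:child_vocab]) / sum(weights)

    print(f"A {child_vocab}-word vocabulary covers ~{coverage:.0%} of spoken tokens")
    # Prints roughly 87% -- consistent with the ">80% of spoken words" claim,
    # even though 5,000 words is only a quarter of the adult vocabulary.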
Way back, I remember when William Shatner's TekWar was brought to TV. One scene that stood out: when the lead was driving into town, it appeared that limited-access roads (US interstate type) were handled by the car driving itself.
A similar idea would do quite a bit to move the technology forward and increase acceptance. We already have projects that create toll and HOV lanes; why can't these also be equipped to assist self-driving cars with their tasks?
While I think the Tesla videos were cool (even GM did something similar recently in San Francisco), they are still pretty much scripted events. Tesla is guilty of exploiting the over-trusting side of the issue simply through product naming, and some of their demos could lead people to assume far too much ability; will that become a legal liability?
I love the idea of self driving cars but it scares me that no one seems to remember the history of the 5th Generation project (I was a kid in the 80s) and other AI hype-dreams over the years. It has always been the case that approximate solutions can be found for 85% of cases, which has given the illusion of a nearly solved problem.
There is a lot of hype regarding deep learning, but I have struggled to find a concise definition apart from the fact that it is now relatively easy to work with monstrously big networks. Backprop and related algorithms have been around for decades. From what I remember of neural nets, one huge drawback was that they would be close to un-debuggable. The learning contained in the net would be inscrutable to a human, to all intents and purposes. Failure data could be recorded and replayed, but any actual reason for failure would frequently not be found. So, tweak the network, resize some layers and try again... I can think of several reasons why that's fundamentally unsuitable to the problem of driving.
I was in Egypt recently, and the sheer amount of lane crossing, merging, pedestrians ducking through multiple lanes of traffic, roadside obstacles, donkey-drawn vehicles etc would be 100% impervious to even a level 3 solution today. I believe the same would be true in India and many other parts of Africa.
So, I really hope that we are not falling blindly into another 5th Generation sinkhole here. History should have taught us better.
It's refreshing to get a perspective that scopes the true scale of the problem and addresses technical, societal and moral issues.
This seems to be a more responsible and comprehensive engineering perspective than the frenzy and hand waving that accompanies most self driving topics here.
Overestimating AI with little to no data and vastly underestimating not only human driving but the varied traffic conditions they operate in doesn't seem like a realistic or responsible way to solve the problem.
Everyone is discussing how autonomous cars will react to various situations. However, the real question is: once we have walled gardens, who'll control them -- the manufacturers or the government?
By that I mean: all automated cars (especially Level 5) would need some kind of network connection (both cellular and mesh), which means "your" car (if you own one -- which, from the looks of how things are shaping up, is going to be highly unlikely) could easily be stopped for a variety of reasons, with or without your permission.
It could be hacked by three-letter agencies, or by hackers who can then cause mass disruption.
While I'm not against self driving cars, I'm very cautious against the "self-driving are the best - they'll reduce death on the road and if you don't support it you are a death loving luddite who can't get on with the times" school of thought.
We saw what happened when we gave up control of our phones, and that's slowly happening to our computers (sure, most readers here can run/use Linux, but Linux on the desktop still has a much lower market share than Windows 10).
I just feel like it's not the future I imagined as a kid watching Star Trek, and it's disappointing.
EDIT: another, unrelated point I'd like to add: if we want autonomous cars on the road, we'd need cooperation between the various companies (which should be easy -- browser vendors manage it, why not car companies?), but more importantly, we'd need a huge overhaul of our infrastructure for autonomous cars, and we'd need to outlaw humans driving on such roads.
I found this quote from Dr. Pratt particularly salient, "But fundamentally a thing your readers should know is that this really comes down to testing. Deep learning is wonderful, but deep learning doesn’t guarantee that over the entire space of possible inputs, the behavior will be correct. Ensuring that that’s true is very, very hard to do."
Is this the same Gil (with one l) Pratt of the MIT Leg Lab? (Not to be confused with the MIT Lego Lab! ;)
Are they different people, or did he change the spelling of his name when he moved from legs to wheels? Or is he developing walking cars for Toyota?
Actually his first name is spelled two different ways on this one page, and he looks like the same person, so maybe he changes the spelling of his name frequently:
http://images.sciencesource.com/preview/BA4147.html