I'm sure he's right, but then why is everyone joining the race now? Wouldn't it be way less expensive to wait until at least some of the problems are solved?
Also, it sure seems like level 5 autonomous driving is just a few steps short of general AI, which is still lightyears away.
Not sure how waiting would help. The solution is primarily software, and I'm not sure how one company can learn from its competitors' private code. The only way to get there is to start along that long road.
People are joining because Tesla is scaring them into it. Musk is a dumbass, but his idea that self-driving is happening now may become a self-fulfilling prophecy.
Consumers LOVE it. So even having some level of driving assistance will win you more customers.
"Musk is a dumbass, but his idea self-driving is happening now may become a self-fulfilling prophecy."
If you'd thought about it for like 5 seconds, you'd see you just contradicted yourself here.
But your latter point is a good one. Part of what makes Musk effective is that, even when he starts with just a half-baked kernel of an idea (usually borrowed from somewhere else), he hypes whole new industries into existence (beyond just the companies he controls), which he can then harvest for recruiting and better ideas, which in turn makes those once-silly ideas actually work. That's kind of the opposite of being a dumbass.
People are jumping in because they get attention and money for doing so (and because lots of non-experts who tend to be the money-distributors believe the hype), including government grants. The real threat is not that the hype will waste private investors' money (although it will) but that it will draw government funds away from transit projects that would be a far better investment.
>> draw government funds away from transit projects that would be a far better investment.
Not just government funds, but also car maker funds. I honestly don't understand why all the research in automated driving doesn't focus first on making better versions of the driver assists we already have, instead of shooting for full autonomy, which I honestly believe to be a pipe dream.
Of course many of the advances in researching autonomous driving will trickle down to safety systems for 'normal' cars, so that's a good thing. I just feel like everyone is approaching the problem from the wrong direction, wasting a lot of R&D effort that could have produced much better results much sooner if car companies would start from what we already have and incrementally improve it. You can blame car companies for being slow to adopt many things that could benefit all of us (such as electric drivetrains), but you can hardly deny car safety has already improved tremendously in a relatively short timespan. Whether they were forced by regulation or because they thought making safer cars would be more profitable doesn't really matter.
> but that it will draw government funds away from transit projects that would be a far better investment.
Here in the United States, such funds are meager, almost non-existent. That is not likely to change anytime soon.
Even in the areas where such funds are allocated, most people do not want to take public transit, for a variety of reasons. If you have the ability to own a car, using it for transportation and other tasks tends to be preferable to other means: you have more flexibility in where you go, when you go, and how fast you'll get there. Otherwise you are dependent on a system outside of your control, or on other people, to transport you.
Because the industry now at least sees the path ahead to full autonomy. If we didn't think it was possible or feasible before, now we see that we CAN in fact achieve this, even if it's a long road in front of us.
It seems like you could have a hybrid where the vehicle operates at level 5 with augmented infrastructure and lower levels for everything else. Add a 10-20% upcharge to every resurfacing/construction contract to embed sensors and whatnot and over the course of the next 20 years you get coverage for 90% of the miles that people travel on a daily basis.
I think it is because there is also a lot of value in just assisting the driver.
It is much more comfortable to drive if the car itself keeps the same speed as the car in front of me and follows the road. I would also feel a lot safer if my car automatically braked when I'm distracted and a kid runs out in front of it.
No car maker dares not be in the race. There is a very real risk that somebody in the race will release an autonomous car, and within just a few months (worst case, but it is believable) governments will see those cars are significantly safer and mandate the technology on all cars. If that happens, any manufacturer not close to finished won't exist anymore.
Note that being in the race just means they have a plan in place. Car makers partner all the time - it is common to see the same car from several different makes with only the logo changed, or an optional engine from someone else.
I expect that in the end there will be about 5 different self-driving systems on the road, shared between different cars. Designing a system is complex and expensive on the one hand, and not a differentiator that customers will pay for on the other, so it won't be worth the cost of continuing to develop your own system when you can buy one from someone else. (Though each maker might customize the UI, as the UI is a feature they can sell.)
One thing I would love someone to explain is how people can assume cars are going to be automated soon when both planes and trains haven't been automated, for various reasons. They are different problems, I gather, but shouldn't trains be dead simple? Yet to this day trains (not metros, though) often have someone aboard to stop the train in case of emergencies. Planes are more complex but have lower crash rates and fewer obstacles; on the other hand they can't stop, and they still carry two or more pilots.
Vehicular traffic is a cluster, especially with the variables involving non-autonomous cars, non-vehicular traffic, and random objects that end up in the road, compounding the complexity of all the other variables.
Some trains are very automated. And there is autopilot/autolanding for jets, BUT the pilot has to engage them, and they can't be used in all circumstances. That is to say that they are about as automated as the Tesla Autopilot. Once you get on the highway, the car can keep course and maintain the appropriate speed.
This is a long way off from the "Full Autonomy" world of "enter the destination from my garage, take a nap, and wake up an hour later at my destination." There is a distinct inflection point in utility between autonomous vehicles that require you to pay attention so you can take over in an emergency, and those for which that is no longer required.
There's a fundamental difference between the two. Airplane autopilots are basically scripted, and operate fully within the parameters set by the pilot. There is almost no intelligence, and very little adaptivity there. The pilot needs to reprogram the autopilot manually when the flight plan changes, for example. Flight conditions can change in many ways that require the autopilot to be reprogrammed, reconfigured, or even switched off. Airplanes have actually crashed more than once because the pilot configured the autopilot incorrectly.
I don't know much about trains, except that fully automating them without adapting all the rail infrastructure is more difficult than you would imagine, for example because there is basically no standardization in how signals are placed.
FAA flight plan plus an IFR clearance is basically a script. Before autopilots, the pilot followed that script, without radar, with waypoint and next-fix ETA reporting by radio. And it has a fallback in place for lost communications: the flight is expected to fly and land at its destination per its clearance (or the most recently amended version of it). And the fallback also includes the possibility of being unable to land at the destination, and instead having to go to the specified alternate in the flight plan.
http://delta.jepptech.com/jifp/help/Images/FAAFlightPlan.gif
There are no good technical reasons why trains are not automated more often. The technology is there; many trains do in fact run autonomously. It's just unpopular for long-distance trains because current systems don't do obstacle detection and it's hard to secure long-distance tracks against obstacles. But for high-speed trains at least, there is no point in trying to visually recognize obstacles: if you see an obstacle, it's already too late to brake meaningfully.
>It's just unpopular for long-distance trains because current systems don't do obstacle detection and it's hard to secure long-distance tracks against obstacles.
That doesn't sound like it's just "unpopular"; in fact, that sounds like a technical reason. It sounds like you're really saying "they are limited, but people are bad too, so it shouldn't matter."
30,000 people in the USA alone are killed each year because of human driving error. Humans are not getting any better at driving. Car safety mechanisms have reduced road death rates significantly in the last 50 years, but still - 30,000 deaths/year in the USA alone due to human driver error.
Cars will be automated because cars have to be automated.
The general trend in that data appears to be that the higher the standard of living, the safer each driven mile is, and the US is where one would expect in such a list: better than third-world countries, worse than Europe.
Good point. Probably no single simple correlation will do. Higher fuel prices are the one thing, off the top of my head, that Western Europe and Canada share compared to the USA. (That and nationalized healthcare. :) )
If raising gas taxes would save 12,000 lives per year then America should definitely do it. It's a social problem that doesn't require technological solutions.
Self-driving advocates, having trivialized human drivers, have now realized the technology is not quite there and have changed tack to demonizing human drivers with scare stories and FUD.
These are self-interested business people. There are real issues around self-driving cars, like safety, autonomy, rent seeking, corporate control, individual freedom, and surveillance, that self-interested, tunnel-vision advocates simply cannot comprehend.
About the same number of people die of influenza each year. We've been dying in these numbers for a long time, and it really doesn't matter at a macro scale. It's fine to try and do better, there are some positives for trying, but this is more of a want, not a need.
Those are mostly the elderly and babies, who have a high risk of dying anyway. Elderly people especially are certain to die of something soon. If it's not the flu, it'll be a cold or an infection or whatever.
> One thing I would love someone to explain is how can people assume cars are going to be automated soon when both planes and trains haven't been automated for various reasons.
Planes are mostly automated, except for liftoff and landing. They can probably automate those too ... but there's no point. You'll still need a pilot. Would you get into a plane flown entirely by computer with no pilot available to take over?
Trains are entirely automated. At least, the ones where there isn't a union enforcing some form of employment. I've been on a number of trains in airports that don't have any kind of operator.
The price of a plane/train is astronomical compared to what you have to pay for a pilot. I mean, you could literally buy a pilot for a millennium for the price of a jetliner ($100M for a jetliner / $100k/year for a pilot). There just isn't a very large economic incentive to automate that job away.
It's not one pilot, though. It's two pilots, times two shifts a day, plus some extras to cover vacation, sickness, etc.
There was enough pressure to engineer away flight engineers, navigators, and radio operators in cockpits. I wouldn't expect them to pass on potential savings in the low-margin world of airlines.
And still, the pilot portion of the cost of operation for planes is in the single digits, percentage-wise. You barely scrape into the double digits (maybe 20% max, closer to 10 I think) if you count all labor.
> "With a net profit margin of just 2.4%, airlines only retain $5.42 per passenger carried," said Tony Tyler CEO of International Air Transport Association (IATA) at the group's 70th AGM in Doha, Qatar.
I suspect they're interested in single-digit cost reductions.
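For scale, the thread's own numbers pencil out to something like this (a toy calculation; the 1.25 coverage factor for vacation/sickness is my assumption):

    # Back-of-envelope pilot cost per airframe, using numbers from this thread.
    jet_price = 100e6               # $ per jetliner
    pilot_salary = 100e3            # $ per pilot-year
    crew = 2 * 2 * 1.25             # 2 pilots x 2 shifts/day x ~25% extra coverage (assumed)
    annual_pilot_cost = crew * pilot_salary
    print(annual_pilot_cost)                # 500000.0 -> $500k/year per airframe
    print(annual_pilot_cost / jet_price)    # 0.005 -> ~0.5% of the purchase price per year

Tiny against the airframe, but on a 2.4% net margin, a fraction of a percent of costs is real money.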
There's no good reason to automate trains right now, aside from the sake of automating them. It would take a lot of cash to accomplish, and it would put us exactly where we are now.
Fairly good reasons for passenger transport, actually. Transport authorities are reporting safer and smoother operation (which directly translates to "cheaper", both in power savings from an optimal driving profile and in fewer scheduling delays). For heavy cargo without a highly regular, high-turnaround schedule, not so much: too many potential operating profiles.
Here's a reason: if trains were automated, low-demand lines with long headways (i.e. 2 hours or more) could be serviced more frequently (with proportionally shorter trains) at about the same fuel usage. A shorter headway will make the line more attractive and increase passenger numbers.
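A toy version of that argument (all numbers invented; it ignores fixed per-train overhead like the locomotive and, today, the crew, which is exactly the part automation removes):

    # Toy model: halve the train, double the frequency, same capacity moved.
    def trains_per_day(headway_min):
        return 24 * 60 // headway_min

    status_quo = 8 * trains_per_day(120)   # 8-car train every 2 hours
    automated  = 4 * trains_per_day(60)    # 4-car train every hour
    assert status_quo == automated == 96   # same car-trips per day, half the wait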
Right, but it could cost hundreds of millions of dollars to support that. You might need to buy new train engines. You might need to add more signalling, or upgrade existing signalling. You might need to build bridges over or tunnels under the tracks to avoid crossings with roads. Your staff need to be retrained, or you need to bring in new people to support the new system.
All of that work to support a small rail line? It might never pay off.
1. There are plenty of existing train systems where large trains are formed by coupling together multiple small trains that each have their own engines. That way, train lengths can be varied to optimize for demand. This wouldn't be happening if it were prohibitively expensive.
2. Why would you need to avoid crossings with roads just because it's autonomous? Trains use signals that only turn green when the section ahead is clear of other trains or vehicles. It's not like train conductors check road crossings by sight to decide whether to drive or brake.
Consider the proportion of the cost of an Uber ride that goes to the driver compared with the proportion of the cost of an airplane ticket that goes to the pilot. A driverless car has the potential to be much cheaper, but a pilotless plane would not be much different. Many new train systems are automated. The cost of retrofitting older systems is just too great.
With regards to airplanes, you have several issues. The main one being that airplanes are not equipped with sensors capable of detecting other aircraft nearby. Without a complete picture of the airspace to a very significant distance a fully autonomous aircraft isn't possible. Today that is managed by the air traffic control system which is entirely run by people communicating with other people. That said, aircraft are capable of fully autonomous landings - https://en.wikipedia.org/wiki/Autoland.
> The main one being that airplanes are not equipped with sensors capable of detecting other aircraft nearby.
Well, there's ACAS/TCAS (airborne collision avoidance system / traffic collision avoidance system), and we are certainly moving towards ASAS (airborne separation assurance system), based mainly on transponder / ADS-B radio signals. But as others have commented, there are other reasons (psychological, economic, etc.).
If we had autonomous planes it wouldn't significantly reduce cost or increase usage. But there is a lot of investment in autonomous drones.
Planes also have a very high safety bar to begin with, which cars do not.
Same thing about trains, there's very little value in doing so, and resistance in the form of unions in some locations. Some trains are fully automated though, here's an article with some more info: https://motherboard.vice.com/en_us/article/wnj75z/why-dont-w...
Frankly, having some dude from Toyota, which doesn't seem to be very invested in this space, tell me he doesn't think it's going to work isn't very convincing. Chris Urmson saying it might take up to 30 years is far more convincing: http://spectrum.ieee.org/cars-that-think/transportation/self...
But that's 30 years for the entire planet, not until it works at all. Waymo is already letting people ride in its cars in Phoenix, presumably one of the places with good weather and easy roads, so I think metro-level deployment in the next 4 years seems about right.
[EDIT]: And on the topic of trains. The MTA in NYC can't even seem to handle basic technical projects, like upgrading signaling infrastructure, or just extending a rail line a few miles without spending billions of dollars, so there's probably a decent chance that some of these are not automated due to sheer incompetence.
To engage with the substance a bit more: his argument is that perfection is a long way away:
> “Historically human beings have shown zero tolerance for injury or death caused by flaws in a machine,” Pratt said. “As wonderful as AI is, AI systems are inevitably flawed… We’re not even close to Level 5. It’ll take many years and many more miles, in simulated and real world testing, to achieve the perfection required for level 5 autonomy.”
I can believe that, but I also disagree that we need to be there for it to be useful. Level 4 is enough for large scale deployments.
Lots of people make this argument. I have a counter-argument: when a technology is convenient or appealing enough, people will overlook even very obvious safety/health problems. Consider texting while driving, drinking, smoking/vaping, even driving itself. Cognitively, we know that there are a bunch of things we do that are dangerous, but we do them anyway out of habit or convenience. The argument that assumes we demand "zero tolerance" seems to assume that every person is a run-5k-a-day, celery-eating, take-the-bus machine. That's far from the case. We do tons of things knowing they're bad and risky already.
I think it's far more likely that there would be a three step process for this technology to be adopted: first it's early adopters and everyone else is like "whoa that's too far" (I think this has already happened), then people start to uneasily use it, and when it's good enough they realize "oh holy crap, the machine can drive while I watch Netflix! this is awesome" and then they'll use it constantly. Soon, it just blends into the background. Maybe it isn't 100% safe, but what is?
I don't think you'll get the truth out of focus groups, either. Try to hold a focus group on self-driving car adoption and 99% of the people will tell you "only when it's perfect." But ask the same focus group if they text/eat/put on makeup and drive, and 99% of them would tell you "absolutely not" when I'm sure that nearly all of them do. I think that once a somewhat-good-enough assist gets into people's hands, it won't stop.
For a real-life example, just look up "Tesla autopilot almost crash" on YouTube. There are plenty of people who let Autopilot drive when they really shouldn't.
It's important to point out that the Waymo project in Phoenix is only barely getting started, only serves limited areas, at limited times, and presumably in limited weather conditions; and most importantly that there's still a human driver in the vehicle at all times. A full-scale metro-wide service would need to operate at all hours, in most any weather, and without a human driver. That's not happening within four years.
>Waymo project in Phoenix is only barely getting started, only serves limited areas,
Sure but is this because of capability or number of vehicles?
>at limited times,
Sure, because drivers don't want to work late at night.
>and presumably in limited weather conditions;
Why's that?
>and most importantly that there's still a human driver in the vehicle at all times.
Sure, but this isn't due to capability; it's for regulatory reasons. As far as I know, the Firefly vehicles (the cute little bubble ones) don't have steering wheels, and were in use 3 years ago. This argument doesn't make sense to me.
With software, it's always easy to make something appear to work. All the corner cases make something take 10-100x longer than the functional prototype.
This became painfully obvious with the article floating around where a car misread a stop sign with a small amount of marking as a 45MPH speed limit sign. It's pretty clear that if a car is mistaking a red sign for a white sign, we have a very long time to go before the car can safely drive itself without a human observer to intervene when needed.
When it comes to automated driving, we need to keep our optimism in check and know that it can take decades before we have safe robotic taxis. They may become legal in countries with lower bars for safety before they are legal in the US.
The big problem, I think, will be drunks getting into accidents: they'll turn on autonomous driving so they don't get pulled over, but they won't be sober enough to observe and intervene when the car screws up.
> a car misread a stop sign with a small amount of marking as a 45MPH speed limit sign
That's what happens when signs are made for humans. They should start putting QR codes under traffic signs, with the human sign as a fallback or as a second point of reference.
"This became painfully obvious with the article floating around where a car misread a stop sign with a small amount of marking as a 45MPH speed limit sign. It's pretty clear that if a car is mistaking a red sign for a white sign, we have a very long time to go before the car can safely drive itself without a human observer to intervene when needed."
I think you misread that article.
Someone built their own toy implementation of a CV sign recognizer. Then, with full access to the code, they reverse-engineered visual distortions that seem innocuous to humans but trip up their model.
It's not a real world issue as far as I'm aware (though obviously signs can be misread or missed, but not invisibly vandalized)
While they did use their own simple DNN classifier (because they couldn't find one), it was treated as a black box for purposes of the attack. A more robust classifier (probably using multiple methods) would be more resistant to attack, but the attack would still work, just the changes would be more obvious. At some point you hit "human-recognizable" but it's unclear where that point is.
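For anyone curious what "reverse-engineering visual distortions" means concretely: the white-box version of this attack family is just gradient ascent on the classifier's loss. A minimal sketch (illustrative FGSM, assuming a PyTorch classifier; this is not the paper's actual method):

    # Minimal FGSM-style perturbation. Assumes `model` is a PyTorch image
    # classifier, `image` is a (1, C, H, W) tensor in [0, 1], and `label`
    # holds the correct class index.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge each pixel in the direction that increases the loss; a small
        # epsilon keeps the change subtle to humans yet can flip the class.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

Black-box variants estimate the gradient from the model's outputs instead, which is why treating the classifier as a black box (as they did) still works, just with larger, more visible changes.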
As long as you have to intentionally modify the sign to cause confusion, as opposed to doing random vandalism, I don't think this is a problem at all. It's like a person spray painting extra loops to change a 30 into an 80 or sticking a home-made sign over a real one so it's indistinguishable to most drivers. People aren't going to do that any more than they already do and it'll still be just as illegal.
The other day, I came within two feet of a crash because the other driver completely disregarded a red light.
It's pretty clear that if a human can't even pay attention to a light directly pointed at them, we have a very long time to go before humans can safely drive cars without a computer to intervene when needed.
One thing worth mentioning (even though I agree with the gist of your point) is that machines make the same mistakes consistently. This is pretty rare for a human. Imagine 10% of the cars on the road are autonomous and they all make this same mistake.
There are a lot of road design features that come from the fact that humans do make certain mistakes consistently. For example, the US's love affair with four-way stops comes from drivers' inability to safely navigate uncontrolled intersections, even when there's no particular reason they couldn't. Stoplights usually have a delay between the red light for one direction and the green light for the next direction because drivers often run fresh reds, or get stuck in the intersection and need to clear out. There are tons of warning signs because drivers can't be trusted to notice something as simple as a sharp curve coming up.
Of course, the road system is designed for humans' foibles, not computers' foibles, and computers will have to deal with that. But the bar is low.
Computers aren't really as consistent as you say, either. Obviously, a deterministic machine will produce the same outputs for the same inputs. But when your inputs are camera data from the real world, you'll never get the same input twice. For example, my car sometimes misreads or fails to read speed limit signs, but it'll usually read the exact same sign perfectly fine the next time I go past.
I wonder if in the future the law will say self-driving cars can ignore all those signs when the car chooses to. Obviously there will be a lot of legalese about it, but most signs can already be safely ignored if you understand all the hazards; this is why people tend to just slow down at stop signs rather than come to a complete stop.
It's also better for the environment: there is a lot of energy lost in a full stop at a stop sign that could be saved, meaning less air pollution and CO2 to deal with.
I think it's the opposite. Humans constantly make the same mistakes, like driving drunk, tired, or while using a phone. With autonomous cars, every single collision will be investigated; where the software is at fault, a new improved version will be pushed to all cars, eliminating the problem. This systematic improvement will mirror the improvement of airplane systems, but on a much faster time frame.
I think his research area (SLAM) might blind him a bit to the engineering side of things. Simultaneous Localization and Mapping is great if it works, but for something with high enough value... you can just map everything in advance, which makes your problem much easier. That seems to be the approach people are taking.
Mapping does make things easier and it arguably jump-started the current wave of autonomous vehicle work. But then you have to move beyond a precise map of how the world is supposed to be to how the world actually is including all the unpredictable behaviors of humans in the space.
Having listened to Leonard speak, his skepticism seems to mostly be around how you deal with all the things like left hand turns in busy traffic, police waving people around an accident, etc. that happen on a daily basis.
In effect, aviation and automobile infrastructures likewise have 100 year old code bases. And for cars, it varies by state (and county and city but mostly by state).
One of my friends is a train engineer. I can tell you that it's ABSURD how little automation is used in many train systems, and it's definitely not because of a lack of technology.
To give an example, my friend has to memorize the required speeds for each different section and turn on every route he runs, and has to manually adjust the train's speed accordingly.
Literally all of that can and should be done automatically, but the industry hasn't caught up with what is technologically possible.
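The core of it is embarrassingly simple; the "memorization" amounts to a lookup table. A sketch (section names and limits invented):

    # Toy civil-speed enforcement from a track database instead of the
    # engineer's memory. Sections and limits are made up for illustration.
    SECTION_LIMIT_MPH = {"yard": 10, "curve_7": 25, "mainline_3": 60}

    def commanded_speed(section: str, requested_mph: float) -> float:
        # Cap at the section limit; unknown track falls back to restricted speed.
        return min(requested_mph, SECTION_LIMIT_MPH.get(section, 15))

    assert commanded_speed("curve_7", 60) == 25

The hard part, as the rest of this thread shows, is deployment, certification, and labor politics, not the lookup.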
In Japan it works the same way, although, as everywhere outside of the USA, automation is coming/inevitable.
The white-glove pointing is all about being on time. It is a discipline, and hitting those exactly-on-time arrivals and departures comes down to train drivers knowing what speed they are doing without looking at the instruments, which are covered during training.
Maybe the Danes have their own variation of this obsessive stop-watch training. A system that works efficiently does not have to be computerised, fantastic teamwork and dedicated professionalism can suffice.
In other endeavours, we still have not fully automated coffee: instead of a perfectly adequate beverage from a machine, some prefer a human operator to operate the machine as if it were some rocket-surgery skill/craft. I don't think train operation is like that, a faux professionalism; maybe programming is nearer the mark. Some programmers use a text editor, which is ridiculous when they could use an IDE. Furthermore, some programmers roll their own code rather than just re-use an existing module.
Systems to manage train speed already exist; it's just that operating companies have not deployed them fully. It's required by law that these systems be in place, but either they've requested waivers or the law keeps being amended to accommodate their failure to deploy.
The Copenhagen Metro[1] has been running driverless for a decade. It was the first metro in Copenhagen. Had there been a legacy system to replace, I doubt it would have been made driverless. That is probably due to the general "conservatism" (vested interests, old ways of doing things) in established systems. It takes a certain power to disrupt this.
Planes are mostly automated. The pilots are involved for takeoff and landing or emergencies, but little else. All modern Airbus or Boeing planes are mostly flown on autopilot. The human component is still involved primarily for the "what if it breaks" case, where the software isn't there today. A computer would likely never have considered putting a plane into the Hudson River, but that pilot saved a lot of people by doing it.
Actual aircraft can be fully automated, for landing at least. In the US Army, I flew the Shadow 200 TUAV, which had a thing called the T.A.L.S. (Tactical Automated Landing System)[1], which does exactly what you'd assume from the acronym. It is a monopulse radar tracking system on the ground with a transponder on the plane. You fly the UAV (Unmanned Aerial Vehicle) into a small area at one end of the runway and click the "Land AV" button (A.V. == Aerial Vehicle), and the landing is 100% hands-off. You have the option to cancel the landing if the wind or the parameters are off, down to 10ft, but below that it is land or crash trying!
For trains, the safest ones are automated, but unions (in the US at least) prevent most of the autonomous technology from being deployed. Where I live, in Chicago, we had a train conductor fall asleep, and the train went up the escalator and crashed into the turnstiles at Terminal 2 of O'Hare[2]. A computer would have prevented this from happening, as a computer would neither have been speeding nor sleeping. In that incident, the automated braking systems had never been tested and were in fact configured/set up wrong, so they both failed. This has since been rectified, and additional barriers have been constructed. That being said, there are lots of fully automated trains[3]; you just might be unaware of them.
We shouldn't demand that our systems work in 100% of cases. Nine Nines -- or even less, in the case of autos -- is enough.
You can argue that autonomous flight doesn't yet have Nine Nines. I would probably agree. But "Miracle on the Hudson" was just that, and extremely rare situations should not be a standard-bearer for allowing autonomous flight. In the same way that suicidal/homicidal humans shouldn't be the standard-bearer for allowing human flight.
That aircraft was mostly landed by the Airbus stability control system. The pilot's job was to make the decision to land it there. Read the book "Fly by Wire", by William Langewiesche.
There's also automated takeoff. Fighter pilots taking off from carriers have to grab a couple of handlebars in the cockpit, whose only purpose is to keep the pilot from instinctively intervening during takeoff.
That is how Prof. Missy Cummings presented it in MIT's Human Factors class ~6 years ago. I've never seen a fighter cockpit, but she was a pilot herself, so I'll take her word for it :)
I've heard her tell this story. My impression (as I recall) was that it was to show the catapult operator that you weren't touching anything. But even though this is just about being flung mechanically off the deck, the basic idea applies. When the automated system is doing its thing, it can be harmful to interfere.
Not really. The handlebars aren't to prevent instinctive intervention, but rather to guard against accidentally bumping the controls during high acceleration. At that point the airplane is physically attached to the catapult; the autopilot isn't doing anything. As soon as the aircraft leaves the catapult the pilot takes the controls again.
There is no automation in takeoff for current USN carrier based aircraft. The F-18 is the most automated, and the process is this:
1. Pilot sets takeoff conditions (throttles, flaps)
2. Pilot signals deck crew
3. Catapult accelerates the plane off the deck
4. The plane's fly-by-wire control system pitches the plane for the optimal angle of attack for the wings (IIRC, 8.1° without flaps).
Once the aggressive acceleration of the catapult is complete, the pilot takes control again and begins flying - this is less than 1 second after catapult release.
If you look at the stick directly in between the pilot's legs, you'll see him put his right hand on it at ~4.5 seconds. His left hand is on the throttles.
I've heard the opposite. [1] You sound like you have some first-hand experience though, so I'd like to hear what you think of the below:
That’s about the most reckless and grotesque characterization of an airline pilot’s job I’ve ever heard. To say that a 787, or any other airliner, can fly “unaided” and that pilots are on hand to “babysit the autopilot” isn’t just hyperbole or a poetic stretch of the facts. It isn’t just a little bit false. And that a highly respected technology magazine wouldn’t know better, and would allow such a statement to be published, shows you just how pervasive this mythology is. Similarly, in an article in the New York Times not long ago, you would have read how Boeing pilots spend “just seven minutes” piloting their planes during a typical flight. Airbus pilots, the story continued, spend even less time at the controls.
Confident assertions like these appear in the media all the time, to the point where they’re taken for granted. Reporters, most of whom have limited background knowledge of the topic, have a bad habit of taking at face value the claims of researchers and academics who, valuable as their work may be, often have little sense of the day-to-day operational realities of commercial flying. Cue yet another aeronautics professor or university scientist who will blithely assert that yes, without a doubt, a pilotless future is just around the corner. Consequently, travelers have come to have a vastly exaggerated sense of the capabilities of present-day cockpit technology, and they greatly misunderstand how pilots interface with that technology.
I’d like to see a remotely operated plane perform a high-speed takeoff abort after an engine failure, followed by a brake fire and the evacuation of 250 passengers. I would like to see one troubleshoot a pneumatic problem requiring a diversion over mountainous terrain. I’d like to see it thread through a storm front over the middle of the ocean. The idea of trying to handle any one of these, from a room thousands of miles away, is about the scariest thing I can imagine. Hell, even the simple things. Flying is very organic — complex, fluid, always changing — and decision-making is constant and critical. On any given flight, there are innumerable contingencies, large and small, requiring the attention and visceral appraisal of the crew.
I don't think that disagrees with my comment as I meant it, but perhaps the author of that took what I said a bit out of context from where I see it. Pilots are there for the edge cases, the 1-5% of the time life-or-death situations like the ones listed above. I listed the Miracle on the Hudson as an example of a human performing life-saving actions a computer would never do.
Factually, autopilots are what actually fly the overwhelming majority of flight hours in commercial aviation. The pilot takes off, gets to altitude, sets the heading with the assistance of tech like a VOR[1], sets the autopilot, and then just scans the instrument panels for the duration of the flight. They then take over for the approach and landing. I strongly doubt any commercial pilot would disagree with this (I've got several in my family, but am not one myself). The fact is that the number one cause of aircraft crashes is pilot error. So while a human would indeed prevent a crash in the 1-5% of cases mentioned in your article, over 50%[2] of fatal crashes are in fact due to human error.
Regarding the last paragraph, I was the pilot on a mission over the Sinjar Mountain[3] range and had my altimeter go haywire and think I was at 20,000ft AGL (above ground level) when I was in fact more like 7,500ft AGL. The troubleshooting for a remote plane is exactly the same as for any IFR-rated plane (instrument flight rules, i.e. you can't see outside of the cockpit due to weather) when you have zero visibility. You trust your instincts and the sensors/instruments. I knew the altimeter was totally full of lies as the plane started descending below the tops of the mountains, thereby killing my signal. So I set the camera to "nose", which means straight forward, and when I regained communications I managed to switch to manual roll ("roll knobs" as they call it) and did a turn left as hard as feasible to miss the tip of one of the mountains. Totally ignoring the altimeter, I flew it home with the aid of the camera to gauge rough altitude and safely landed the plane. Do they crash occasionally? Sure. However, none of these were designed with a safety-critical autopilot, and that could certainly be developed with today's software and hardware. That pilot is arguing "computers can't replace me!!!", but in reality, if they did, the majority of the crashes[1] that are fatal would not happen in the first place.
From what a pilot told me, I believe the A320 he flies can take off, fly, and land totally on its own. I'm sure the same is true for other planes.
Also, if I recall correctly, he said in certain conditions it's against regulations for a pilot to be manually controlling the aircraft, such as landings in high winds.
Fairly sure it still needs the pilot to tell it "now we're taking off", "now we're cruising", "now we're landing" (and under what parameters). Doing that switch automatically seems unsafe.
The other part sounds improbable - quite the opposite, a pilot always needs to be able to take over in case the automatic systems fail.
"land totally on its own" suggests autonomy. In reality it's a three part certification: pilot, plane (autopilot), and runway. There is only one zero visibility landing system, and that's the ILS CAT IIIc.[1] If there's no ground capability for the runway (does not exist or is down for maintenance) then the plane can't do a CAT IIIc landing.
In practice there is no such thing as landing without an explicit clearance to land. Clearance is given by ATC to the pilot via AM radio. The autopilot has no language listening or speaking skills at all. Numerous clearance modifications happen during a flight, given verbally.[2]
The plane also doesn't taxi itself into position, and it doesn't retract or subsequently deploy landing gear. Many tasks aren't available to the autopilot, nor are many transitions between tasks.
About the last statement: FAR 91.3(a) says, "The pilot in command of an aircraft is directly responsible for, and is the final authority as to, the operation of that aircraft." You could construe FAR 91.13(a), "No person may operate an aircraft in a careless or reckless manner so as to endanger the life or property of another," as requiring the pilot to use automation if the aircraft manufacturer requires it in certain situations. Otherwise, no, and I have only ever heard of autopilots needing to be disabled in high-wind situations.
[2] Example STAR, which most airports don't have, but when they do you'll see these are really just designed to allow ATC to "plug in" a smaller subset of data, like an altitude or speed, without having to recite the entire arrival instructions. Can it be automated? No, because the variables are delivered by voice. The STAR is useless without the variables, and the variables are useless without the STAR.
http://155.178.201.160/d-tpp/1709/09077POWDR.PDF
One point would be that both planes and trains are professionally operated already. That means three of the biggest incentives to automate are missing.
First, they have few human-error crashes; both are significantly safer than automobiles, so unless automation can do better-than-human disaster recovery (not likely yet), there's minimal safety advantage. You have to outperform well-trained humans instead of random people to save lives.
Second, they don't take much time per traveller to operate. Both often have 100+ passengers per operator, making it relatively cheap to employ skilled human labor. Cars average around 2:1, so there's a lot more operator time being spent per passenger-hour.
Third, cars have highly flexible usage. Planes and trains are generally moving people or cargo to approximate destinations, and they travel to places designed to store them. Cars spend lots of time arriving in storage-free areas: what if your car dropped you off at work and drove to a garage outside the city? And they do lots of last-mile transit: what if your car could go pick up your groceries or dry-cleaning without you?
Broadly, I think there's way more safety and financial incentive for 100% driverless operation of cars than any other mode of travel.
Good points. Automated train control maps are not even 2D: they're directed graphs, precisely mapped to sub-foot precision, with all permissible edges fully checked in advance, at design and at mapping. Far easier to navigate than a road network, which can be simplified and approximated to a directed graph, but 1. at the cost of some precision, and 2. orders of magnitude more likely to change unnoticed.
So, if it costs a billion dollars to make a good AI driver, then if you put it in a million cars it costs $1,000 each, and you could eliminate lots of jobs where the main cost is the human (delivery, taxi), providing a ready sales avenue for your self-driving car (or tech). If you put it in trains, you can maybe drive thousands of trains, and the human is a relatively low-cost element of a train service, so there's less room to expand.
I don't know if this is true, or even if the logic I outlined above pencils out with real data, but it probably contributes in some way.
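The rough math (the train fleet size is a guess, purely to show the scale difference):

    # Amortizing a one-time $1B development cost over each fleet.
    dev_cost = 1e9
    print(dev_cost / 1_000_000)   # $1,000 per vehicle across a million cars
    print(dev_cost / 25_000)      # $40,000 per train across ~25k trains (a guess)

And the per-vehicle savings point the same way: automating a car can replace a full-time driver's wage, while a train driver's wage is already spread across hundreds of passengers.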
Planes have been automated to some degree for 100 years. Fully automating a plane at the equivalent of Level 5 is almost a trivial problem compared to doing so with cars. The reason planes aren't automated is mostly social, not technical. Whereas with cars it's decidedly both.
Planes aren't automated mostly for technical and infrastructural reasons, the exact opposite of what you stated. The social worry of pilotless airplanes inhibits the massive investment that would be required to solve the technical and infrastructure limitations of the existing system.
Really? No investment at all? At least it's a falsifiable statement.
Denver International Airport has 10 runways. Only three have ILS CAT IIIc approaches, and that is the only instrument approach procedure that permits full autoland in zero visibility. So you're saying it's completely OK to have 1/3 utilization of the airport's runways for automated landings.
Oh, and those three runways? 34L, 34R, 35R. They all point north, so you can't do auto landings to the east, west, or south. Due to wind direction, your planes won't be able to land at all on quite a few days of the year. And in fact the wind could shift while a flight is en route, and then where does it go? All of a sudden the required alternate may be insufficient, because a simple wind shift, rather than a change in visibility, becomes the new metric for whether a plane can land and whether the alternate is legal.
This is one logical flaw. There are thousands of these. Totally surmountable, with metric tons of money. But you said no investments necessary.
I never said that no investment was needed. I was responding to your phrase "... massive investment that would be required to solve the technical and infrastructure limitations of the existing system", which I took as implying that there are still huge research problems to be overcome in aircraft automation. There are not.
Some investment will certainly be needed. ADS-B is only beginning to be rolled out, and it will be essential for pilotless planes. But the technical issues for automating planes are mostly solved. Aircraft automation is an engineering problem; not a research problem. The investments still needed are engineering investments.
In contrast, getting to Level 5 automation in cars is very much still a research problem, and as such, orders of magnitude greater resources of money and time will be required to solve it.
This is what's so annoying about talking to people who very obviously do not have a sufficient common frame of reference and then go on pontificating about how easy to solve and narrow in scope all remaining problems are. It's condescending, and it's stupid.
It sounds like you think most commercial aviation flights in the U.S. autoland. Almost none of them do. Most flights are landed by pilots. Some portion of the approach is done by autopilot, but the landing is done by a pilot. Many airports in the country have mandatory noise abatement. Are you aware that ILS approaches are incompatible with noise abatement? Those approaches and landings are hand-flown in visual conditions, so your plane can't just use instrument landings designed for bad-weather flying. You have to R&D a whole new approach-and-landing method that's compatible with noise abatement before you can have pilotless planes doing autolandings. Unsolved problem. Seems like significant research to me, not merely an engineering problem.
Why will it take orders of magnitude greater resources and time to solve the auto driving problem, and yet all such research problems for auto flying are solved? What data do you have that causes you to conclude that one is solved and the other is far from solved? What's the difference in cognition and judgement requirements between the two? What's the relevance of experience? Why do you suppose there are so much more substantial knowledge and experience requirements in pilot certification than driver licensing? Why does it take so long to learn how to become an airline transport pilot? Why have gradations of certification? If it's so much simpler of a system to automate as you state, then it should be a simpler system for a human to learn and just plug into, than learning to drive. But it's the exact opposite, it's much more complex than driving, despite automation.
And yet you're saying no, it's simpler and solved, to just replace all of that cognition and judgement with computers and automation. No more research required, just go built it, according to you. Strikes me as a lot of hubris.
Additionally, air traffic controllers are part of this cognitive and judging process of making planes move around. They train for years as well. But despite every flight requiring two human brains in flight, and part of a human brain involved on the ground, you're claiming that driving cars is more cognitively complicated to automate. Why? What's the basis? A single person does this in a car, and they almost always can multitask totally unrelated things like listening to music, or conversing with passengers.
CAT III landings do not include runway egress. Unsolved problem to do without a pilot. Sounds like massive R&D is needed, not just engineering, or it'd already be automated.
Autopilots don't do turbulence or icing conditions well. They're disabled in severe cases, and such conditions aren't always predictable or even known until the moment they're encountered. Sounds like an R&D problem, not just an engineering problem. Landing on ice, snow, and in heavy rain? Pilots do that by hand, not autopilots. Again, more R&D needed, not just engineering. For reasons unknown you want us to believe that for planes these are solved problems, or easily solved problems, while identical situations for autonomous cars are still difficult unsolved problems. Why? Sounds like you don't know what you're talking about.
The term autopilot is idiotic. The most sophisticated autopilot follows a defined path, and the path is defined by a pilot. It effectively maintains altitude, heading, and speed. That's it. There is no code that enables the autopilot to think like a pilot and redefine the path, which happens in all flights. Clearance changes are commonplace for many reasons. So you need to design a pilotless version of the current ATC-to-pilot-to-autopilot carrier pigeon system. Is it an engineering problem? Or a research problem? Seems like both to me, because there's no plan for it right now, no design.
Why is ADS-B essential? It's merely scaled-out secondary radar. It solves a very vertical problem. It doesn't have sufficient precision to help with landing, and it says nothing about how to utilize this information; humans do that. Seems to me it's a research problem to create an AI that replaces the evaluation and judgement of a pilot, not just engineering.
sigh
Gusty takeoffs and landings? Autopilots don't do that right now. Sounds like a research problem, not an engineering one.
Emergency procedures? This is 100% the domain of human pilots right now. Code that for the pilotless aircraft. Seems R&D related, not merely an engineering task.
The autopilot needs to meet the functional equivalent of competency for the applicable parts of pilot certification in FAR 61. It needs to be able to conform with all or at least some substantial subset of aviation regulations FAR 91, 135, and 121 as right now a huge amount of this conformance is done by pilots. Otherwise, you're redesigning the entire aviation system from the ground up. Either way this sounds to me like a massive research problem, not just an engineering problem.
This list goes on, and I pretty much think you're an idiot: not because you're ignorant, but because you're ignorant, deny it, and then handwave bullshit that all the serious research problems are solved and all that remains is building things. It's just - it's really stupid and condescending.
Modern airliners are not mostly autonomous. I'll define it this way: remove the pilots from a flight at various stages along its route and see what happens. Multiple outcomes are possible, but one is certain: it's not landing at the intended destination. If it were mostly autonomous, it'd land at the destination at least some of the time. The chances of this happening today are 0.
That's because human checkpoints are intentionally wired into the workflow of a modern airliner. If those checkpoints were automated, the plane could indeed fly and land on its own. And I suspect the safety margins are not far from what is needed to automate the rest of the flight.
The space shuttle's only system that absolutely required human intervention was the landing gear, and that was to ensure that humans were always necessarily in the loop. That doesn't mean the shuttle wasn't _MOSTLY_ automated.
automated != autonomous, the assertion I replied to was that "airliners _are_ mostly autonomous" which is not true.
>If those checkpoints were automated
If there were unicorns...
>the plane could indeed fly and land on its own.
No, and it's annoying that you're asserting things you clearly don't understand.
As I mention elsewhere, the only technology we currently have for precision instrument landings in zero weather visibility (the only 100% autoland) is the ILS CAT IIIc approach. The overwhelming majority of airports in the country do not have that infrastructure, nor do most airplanes.
I'm a pilot, you clearly aren't. There are numerous other factors here, but just the fact you can't land wherever you want, at most airports with most aircraft, is enough to prove your assertion false.
Oh, new trains are fully auto, no worries. Retrofitting an entire fleet and its tracks, now there's the issue: follow the money, and you'll see why even automatic trains are only programmed to follow the usual route, with the driver stepping in for diversions. Plus, having a person in the cab to push the Big Red Button helps with all sorts of legal issues.
Those numbers are in the single digits for aircraft and trains.
We could replace pilots and train operators, technologically speaking. But the incentives are not (yet) large enough to overcome the regulatory and cultural barriers. We'd rather have that extra order of magnitude of safety, because it costs so little compared to other expenses.
The cost/benefit calculation for drivers is fundamentally different.
If you drop the utilization, the cost of the driver as a percentage probably goes up a bit. But here's the thing about those numbers.
Yes, driver salary is a big chunk of taxi cost, but flip things around: it's only about half the cost. In other words, there's this widespread assumption that autonomy totally changes the nature of automobiles and automobile ownership even though it only cuts the cost in half.
Ask yourself: if you own a car today, would getting a 50% discount on taxi rides completely change your behavior? Maybe it would for some at the margins, but only at the margins.
It depends on the circumstances. For me, though, if taxi rides were half the price I doubt it would affect my ridership at all over the course of a year. Maybe it would replace one or two marginal car rentals.
> Ask yourself. If you own a car today, would getting a 50% discount on taxi rides completely change your behavior. Maybe it would for some at the margins but only at the margins.
Look longer term.
Parking spaces in cities cost ~$30k (and in some cities, a lot more than that). Included with a house, they are still part of the price (no free lunch).
Over a 30 year mortgage, that is 1k a year.
In California, car insurance is almost $2k a year. (Oddly enough, $1.1k in New York State!)
So 3k a year just to have the privilege of driving a car.
Apparently an average new car nowadays costs $31k (which seems insane to me, but that is what people are buying).
Being generous and rounding down a bit, that is another 5k a year.
So now 8k a year, before gas, before maintenance, before any warranty problems, before screws in tires, etc.
If mass transit can replace commuting to work, then weekend outings with automated taxis become very doable. At $5 per ride (on par with UberPool, though I realize that isn't profitable), it takes 1,600 rides a year to break even with owning a car. (And the cost of ownership drops after the car is paid off; then it's 600 rides a year to break even!)
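(The arithmetic, for anyone checking:)

    # Break-even arithmetic from the figures above.
    parking   = 30_000 / 30     # ~$1k/year over a 30-year mortgage
    insurance = 2_000           # $/year (the California figure above)
    car       = 5_000           # ~$31k new car, generously amortized
    total = parking + insurance + car   # $8k/year all-in
    print(total / 5)            # 1600.0 -> $5 rides per year to break even
    print((total - car) / 5)    # 600.0  -> once the car is paid off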
1) Now the taxi can operate day and night without substantial breaks. That reduces per-passenger-mile amortization costs by at least a factor of 2.
2) Especially since amortization cost is also reduced, you might as well electrify the vehicle (this requires some sort of robot arm for fast charging, but I suppose you could pay someone like $0.50 per cycle to plug and unplug a bunch of vehicles in the interim). That reduces fuel costs by a factor of 2 and maintenance costs even more.
3) No driver means room for more people or ability to shrink the car, reducing all the above costs.
So you've now reduced all the other costs by a factor of 2. That means your overall costs are now reduced by 75% or more. Yeah, that's starting to look pretty attractive.
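In numbers, under those factor-of-2 assumptions:

    # Driver is ~half the fare (per the thread above); utilization and
    # electrification roughly halve the remaining half.
    driver_share, other_share = 0.5, 0.5
    new_cost = other_share / 2      # no driver; the rest halved
    print(1 - new_cost)             # 0.75 -> a 75% overall cost reduction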
1) It's already very common for taxis to operate two shifts per day with two drivers. Also, there's less demand in the middle of the night than there is at peak hours.
Planes can take off, cruise and land themselves in any survivable weather condition with existing automated controls. However, people would be terrified of flying without a pilot and there are so many legal hurdles that it's not cost effective to do so.
Also: where are the self-driving attorneys? A very constrained problem domain where all the answers can be looked up in books, prior cases, etc. Vastly expensive humans needed at present. No chance of killing anyone. We should have armies of automated lawyers already, no?
The Concept-i looks like it could come straight from Wall-E. I agree with you in general about concept cars being produced/released, but for this specific case, I don't mind. Looks funky
That car had nothing but positive reviews from all corners of the internet. I am surprised Cadillac canceled its production and thought of making a coupe instead of a convertible.
> Man sure would be cool if any concept car ever presented were actually released.
Not many are, but some are occasionally. I personally own one of the few existing Isuzu VehiCROSS vehicles. It started out as a concept car in the 1990s, then went into production virtually unchanged (only a tad under 6000 were ever made).
It speaks to the trends of the day that an article which says something patently obvious to people in the tech field is considered news.
This reality seems like it may pose a problem though, as some people's business models and valuations depend on this article being wrong.
The thing is, we don't need full level 5 to get useful stuff out of this line of tech development. But in terms of the futurist vision of driverless cars everywhere, fleets of cars replacing private ownership, etc -- hard to see that happening. If the tech takes more than 10 years to develop, it's very likely something else will develop in the meantime which will essentially invalidate all of those earlier visions, as the world will have gone in a different direction.
Exactly. I park in a very small space in a busy city. While backing out I must monitor the street, pedestrians, and the cars parked to the front, back, and side of me.
Just this week, I missed one of those variables and it cost me $500 in damage. This is something a backup camera and automated system can and already does easily help with, and it is a major leap in efficacy. Small steps like this can really add up.
What I would like to see next is a system that alerts the driver when the light in front of them turns green. I feel like I lose 5-10 minutes a week sitting behind people who are checking their phones unaware the light has changed.
> What I would like to see next is a system that alerts the driver when the light in front of them turns green. I feel like I lose 5-10 minutes a week sitting behind people who are checking their phones unaware the light has changed.
I wish I could upvote this idea straight into the car manufacturers' inboxes. Nothing more frustrating than having to sit through multiple light cycles because the dipshit in front of you is taking a nap, conveniently waking up with enough time to get themselves through the light but not any of the people behind them.
It works really well when it's the person right in front of you, because you can see that they aren't paying attention. Put a few cars between you and them, and all of a sudden you don't really know the reason they aren't moving and there is a significant delay before you can infer that they aren't paying attention. It would be so much better for their own car to recognize that they are in la la land and remind them asap.
Audi is trying to integrate red light timers into their cars. Of course this requires cities to integrate their traffic light control systems so it won't happen soon.
You're right, and we're already benefitting and have been for years: basic electronic cruise control saves a ton of fuel just by taking the job of maintaining highway speeds away from the human.
We'll continue to benefit at each level: speed match, safe-distance maintenance, emergency brake, lane follow - there are already people alive who would be dead without them - you can find them in 5 seconds on youtube. The fact that articles have to keep trotting out the same sad story of the bloke who drove into a truck while watching movies when he should have been driving tells you a lot about how safe these systems actually are - if there were dozens of counterexamples, we wouldn't be hearing from the head of Toyota's RI, we'd be drowning in dashcam snuff films.
All that said, yes, I agree it's obvious that level 5 is a different beast to all of the 'easy stuff' that we have now: classic case of where the edge cases cover more area than the core of the domain does.
>in terms of the futurist vision of driverless cars everywhere, fleets of cars replacing private ownership, etc -- hard to see that happening.
Please tell this to city governors who are so keen on throwing money to stupid start-ups peddling this nonsense at the expense of other things, like proven public transport, for example.
Stuff like this being "patently obvious" to you should be an impetus to get the word out, because those for whom it isn't are the ones making dangerously imprudent policy decisions.
Again, not to invalidate the use of level 4 automation and such, but y'all need to crush this fantasy of level 5 in 2 years, for all our sakes.
> throwing money to stupid start-ups peddling this nonsense
And this is why god invented short-selling. If startups were openly traded instead of privately funded, I think we'd see a lot of technologists setting up massive bets against "full automation" and "solve everything with AI" projects.
It's not easy to get people (especially politicians) to listen to bad news, no matter how loudly you shout it. But in a lot of markets, downward pressure comes from the ability of cynics to enter the market and get payouts from guessing (or knowing) that some promise is impossible.
In the meantime, I suppose we'll all keep yelling. But when trigger-happy investors and unaccountable state funding are shaping the market, I'm not sure how much good it'll do.
> It speaks to the trends of the day that an article which says something patently obvious to people in the tech field is considered news.
Yes and no. In any given Tesla article here you can also read many people claiming that "it's basically a solved problem and Tesla and Uber will be fully automated in just a couple of years", too.
Maybe I'm ignorant (very likely) but I've never understood the difference between full autonomous driving and a fully intelligent AI.
If a car can drive in any environment and react to any circumstances surely said AI could also do anything else, no?
EDIT: Thanks for the responses everyone. Though, to clarify, when I think of "full autonomous" driving, I'm thinking of a car that can go from A to B regardless of the context. Meaning, if some of the route is off-road, it'll handle that; if there's traffic, that'll be handled. If there's something wrong with itself, aka the car, it'll be introspective and call for assistance, sending its location, etc. Not just following marks on a road and pulling over and giving up if it can't get there.
Also, I do not think a "fully intelligent AI" can necessarily solve any problem, but it would be capable of learning such that it could. For the purposes of discussion I'd equate it to maybe a 5 year old.
Driving is just a tiny subset of what a general AI could do. In the grand scheme of things to which AI could occupy itself with, it's an absolutely trivial undertaking.
nope. Because they're not building something that has decision making or consciousness. What they're building is a very complicated and nuanced math equation, that reads along the lines of: when a camera sees a line of X whiteness at Y distance, and we are travelling at speed Z, in this geographic region, turn the wheel left E percent. What these companies are doing is trying to calibrate such an equation so well, that it can successfully navigate roads on its own.
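A toy sketch of what such a "calibrated equation" might look like. Every gain, input, and the speed scaling here is invented for illustration, not any company's actual controller:

    import math

    def steering_angle(lane_offset_m, heading_error_rad, speed_mps,
                       k_offset=0.35, k_heading=1.2):
        """Toy proportional lane-keeping rule in the spirit described above:
        a fixed formula from perception outputs to a control output, whose
        gains engineers calibrate until it tracks the lane well."""
        # Correct harder the further we drift from lane center and the worse
        # the heading error; soften corrections as speed rises.
        raw = -(k_offset * lane_offset_m + k_heading * heading_error_rad)
        return raw / max(speed_mps / 10.0, 1.0)

    # Drifting 0.5 m right of center, angled 2 degrees right, at 20 m/s:
    print(steering_angle(0.5, math.radians(2.0), 20.0))  # small negative value = steer left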
So true. I've always thought that fully autonomous driving is a slippery slope to AGI. Imagine Google having to collect an INFINITE amount of driving data (the perfect manifold) before they realize they've sunk billions into an intractable problem?!
Problems in AI can very easily turn into a black hole of money and data. I hope the large number of AI startups appreciate this.
The difference is that the same algorithm that drives a car can't be used to do other tasks. The self-driving car algorithm can't just be plugged in to recommend a movie or summarize an article. Many of the same underlying components can be re-used, but they have to be trained on data for the new task.
I do not think that a 'fully intelligent AI' is necessary for full autonomous driving: if a universal communication protocol for driving were to be introduced, cars could communicate with one another as well as announce themselves when entering intersections for example, therefore mitigating the need for an 'AI'.
"I am driving at 45mph at [GPS Coordinates] [UTC Timestamp]".
Can you imagine writing a computer program that can drive a car in a video game, but not provide relationship advice to two freshly married people, or think of a fair, just, and enforceable law to reduce discrimination at the workplace?
Driving is quite a bit short of "same intelligence as humans"
What if there are 2 obstacles that the car can't brake fast enough to avoid, so it has to decide which to hit? One is a person pushing a baby in a baby carriage, the other is a person pushing someone in a wheelchair.
If someone steps out in front of your car going 30 MPH, can you stop in time? I mean they walk right into your immediate path. Do you think this kind of thing can't happen?
That isn't the problem you posed. Now there is no maneuver that can save the pedestrian, for any driver.
But for the most part, if there is low visibility and pedestrian proximity, 30 MPH is too fast. The streets here don't have much obstructing visibility other than parked cars, there are good sidewalks set well back from the streets and the speed limit is still 25 MPH.
The exact same thing that happens now: someone dies, usually selected arbitrarily based upon the driver's selfish desire to minimize harm to themselves. Because that situation doesn't allow enough time to make anything except instinctual choices.
The driver is found at fault for that death if they were not driving legally. Otherwise, and very often regardless, the whole thing is written off as a tragic accident (millions happen every year, just in the USA), and the insurance people do their thing.
Ethical dilemmas are a non-problem for self-driving cars.
I will loosely paraphrase something I heard Sebastian Thrun remark on this situation (commonly known in philosophy as "the trolley problem"):
1) such a situation happening is extremely rare (more so in the classic example of the trolley problem)
and
2) regardless of its possibility, if we are able to reduce the number of fatal and injurious accidents caused by automobiles by half, in the United States alone, such tragedies should not stop us from doing so
For some reason, people always let the impossible attainment of perfection be the enemy of good enough (as the old saying goes).
We will never get a perfect system. We don't have a perfect system right now. What we can get (indeed, I'd argue we're already there in the case of self-driving vehicles) is close enough that the difference is fairly negligible. That is, such systems can be made vastly better than the average human driver, and generally will come close or exceed that of professional drivers.
We are arguing that we'd rather have a professional taxi driver at the wheel of a vehicle transporting us, who is 98% competent at his job (that's being generous, actually), rather than a machine which is 99.95% competent doing the same job (note that neither percentage is based in reality - I pulled both from my nether regions for this example - but they probably aren't too far off the mark, either - well, again, I'm being generous with the taxi driver).
It's a purely irrational and emotional response not based on actual statistics and knowledge about accident rates. We'd rather continue letting drivers get in accidents, injuring or killing themselves and others, at a very high daily rate, than implement a technology which would rapidly make that number drop to very low levels over time.
Part of it I think is that we want to be able to blame somebody. We can't blame a machine, for some reason, or its manufacturer - especially in the heat of the moment (provided we survive). We can instead blame ourselves, or more generally "the other guy" for being a bad driver. We have this innate problem with being able to say both "I don't know" and "bad things can happen for not any good reason", and instead must find something or someone to blame, and if it isn't ourselves, even better (hence a lot of religious expression not based on reality).
There is so much benefit this technology can bring, even today. I don't personally think it is ready for consumer adoption quite yet, but I can see it being ready in less than 10 years, maybe less than 5. The problem for its adoption is that we'll likely never be ready, even if it attained a five-nines level of reliability, simply because if it failed, we'd have no one to blame but ourselves for trusting in it. For some reason, that simply will not do. We'd rather continue with the status quo and continue to rack up the injuries and death, because at least then, we can blame the other guy instead of ourselves.
I have to account for off-road animals while I drive/bike, because deer love to sit at the edge of a road and inexplicably leap in front of me at the last second. If I see some crossing the road ahead, I know I have to slow down, in case there are more deer following behind the one I can see, or in case it changes its mind and doubles back to where it came from because it doesn't want me between it and its herd. That can involve it scaling what appears to be a ten-foot, almost-vertical wall of rock. Fawns will often wait until you get right next to them before moving, because their defense mechanism is sitting still and hoping not to get noticed. So automated cars need to know what deer look like and how they behave, since they are so common. Very occasionally I've had to avoid mountain lions and bobcats, but based on the roadkill I've seen, most people just drive over bobcats.
As others have said, the scope of 'Anything that can happen on a road', while large, is a lot smaller than 'Anything that can happen or be imagined to happen'. Moreover, if a pilot AI finds itself in a situation it can't deal with, the failsafe of "hazards on, slow the car and pull over" is nearly always available. There's not really an equivalent for a general AI.
Think about the difference between a specialized AI that can beat a grandmaster at chess (using heuristics) versus a generalized AI that can learn how to play chess and then figure out how to beat a grandmaster on its own.
Animals are very good at responding to specific situations with fixed action patterns.
Humans are very good at adapting to new situations by training new action patterns.
Self driving cars tread far more in the territory of machine learning than AI. AI involves ML, but ML doesn't necessarily involve proper AI. ML is essentially creating a framework from existing data that the computer can apply to similar situations. This is why you hear terminology like "training on a dataset" in ML -- you are just telling it to act upon new situations in as similar a capacity as possible to previous situations.
AI to me is a very different thing, in that it doesn't require structured inputs and outputs. Even as chaotic as autonomous driving is, it's still a structured system that takes inputs about road rules and surrounding objects and aligns the car with an outcome where it follows road rules and doesn't hit anything.
True AI is a machine that can reason for itself in unstructured situations, which would likely involve ML that is very good at making inferences about how existing data applies to tangentially related situations. I'm sure at some point there will be a distinction between AI that aims to create outcomes as similar as possible to human decision making (perhaps it could be trained by you to make decisions the way you would), and AI that aims to emulate human thought process literally down to the neurotransmitter level.
Alternate title: Toyota executive frantically tries to convince people to keep buying non-automated cars to protect Toyota's revenue stream while other companies say automated cars are imminent [January 2017].
California is on the verge of approving new regulations that allow automated cars on public roads with no humans in the vehicle, and as soon as that happens, things are going to move faster than the people who believe 10-years-late Toyota think.
I've been saying this for a while. The problem is that people are mostly familiar with the rate of overcoming traditional technical challenges (e.g., making your current computer/phone more compact, improving graphics quality, making cameras and screens with more pixels, etc.). What people don't realize is that full autonomous driving will require more than just faster/smaller/cheaper technical innovation - it will require the refinement of innovations that probably haven't even been invented/researched yet.
I can't help but think about speech recognition 20 years ago. Many of the hot software packages claimed something like 96% accuracy, and that sounded great on paper. People thought intelligent voice-computer interfaces were just around the corner, yet here we are in 2017, and Siri/Alexa/Cortana are barely becoming usable (but still frustratingly lacking in many situations).
That's what happens when you invest in hype. There's no way we'll be able to hail an autonomous Uber in the next 5 years, which is an eternity in startup land.
I strongly disagree. Uber has markets that make money and they release new services all the time. There are ways for them to stay in the black if they scale things back and fine tune different markets.
Someone is going to be the first in self driving cars. Uber has a tremendous demand for them. It would be stupid to not at least be trying to be the first one on the scene.
Maybe for this particular investment, but I'm pretty sure the wider intention was to legitimize business practices where the poor live tenuous lives as an app-directed servant class. Uber made this kind of neo-feudalism so socially acceptable that we now celebrate "Uber-for-X" as "innovation."
It already happened before Uber, except now the app is in the hands of the person being exploited, not the managers creating optimized schedules that make it near impossible for individuals to hold enough jobs to pay the bills.
You must have replied to the wrong post. This agrees with my point that this kind of tech is abusive, but doesn't address my point that it's a long-standing problem older than Uber. Managers in retail and food service have used scheduling software to screw workers over for years. Uber just shifts the technology into the hands of people it exploits (smartphone app instead of scheduling software in the manager's office + calling in to get hours/find out you aren't getting any).
You seemed to imply that this is a new phenomenon that isn't yet legitimized. It's been around for a long time. It was and continues to be a problem, but Uber is just a microcosm of it.
You've hit the nail on the head - audio recognition is quite poor after (a lot of) concerted effort over decades. Driving a car safely is significantly more difficult (and dangerous).
That being said, Toyota's (and Subaru's) approach is the smart way forward - add sensors and capabilities that augment the human, but leave the human always responsible and in control. In order for a crash to happen, the human AND the machine BOTH have to miss it.
Once the machine is good enough (and we figure out what "good enough" means), then it can take over driving. But not before.
Once the machines are better than humans at a macro level, insurance companies will notice, and I think this will cause widespread adoption pretty quickly.
This has already started - look up the discount some insurers give to owners of Subarus that are equipped with EyeSight. The numbers show their collision avoidance is making roads safer.
Also, interesting fact: Subaru is achieving these impressive stats using stereo vision, not lidar.
I recently bought a Subaru, and I agree. However, this is in no way at all related to "self driving cars" (which don't actually exist and therefore nobody knows how the insurance companies and general public will react to the liability/safety/skin-in-the-game issues).
I don't think that's true in terms of transportation systems.
In terms of cars: streets were built for people or animals walking on them, and cars quickly started killing large numbers of people. The response was not to make cars safer by say physically limiting them to 10 MPH, but to remove people from streets.
A fairly high fraction of early aviators were killed; that did little to slow adoption of aircraft.
Astronaut deaths were sadly common, but also expected, and did little to slow progress.
Cost/benefit--if the rewards are judged to outweigh the risks and there is a clear benefit, people will adopt it. If money/time can be saved, it will be very popular.
> The response was not to make cars safer by say physically limiting them to 10 MPH
Umm, reasonably sure that implementing a legal speed limit was an initial response, in the UK at least. It took a number of years (and likely a lot of propaganda, though I haven't looked that closely into it and history is, of course, written by the victors) to allow motorcars to use roads at anything close to a reasonable speed.
I am referring to a governor which would prevent cars from exceeding 10 MPH even on empty roads. Actual in-town speed limits would presumably be lower. Now, with our infrastructure that seems like a huge issue, but cities used to be built for walking, and even small towns simply had vastly higher density.
Indeed, the speed limits are completely reasonable if you don't want to remove people from the streets - but your suggestion seemed to be that people were near-immediately removed from the streets instead of limiting what cars can do at first. There were 30 years when cars were lawfully limited to 4mph in rural areas, so your "even on empty roads" point is moot.
There were not exactly a lot of cars over most of those 30 years. However, my point was more that cars had strong advocates pushing adoption, whereas pedestrians were simply used to how things were and never really organized to protect the status quo.
Thus you get jaywalking as a crime, and many deaths justified as people breaking the law, instead of the narrative that cars are insanely dangerous and should be banned. In theory cars could have been banned, but in practice their advocates had little trouble changing perceptions.
Note that jaywalking is very much not a crime in the UK. Off of a motorway (which is a relatively recent invention), roads are still for people to some extent. Hopping off the pavement for a couple of seconds to bypass someone who's decided to stop right in front of you is semi-normal, much of rural UK doesn't even have pavements, and we get taught how to Stop, Look, Listen, Live in order to cross a road without a traffic light as kids.
If a pedestrian starts to cross the road, they have right of way. I've been on roads in some city centres where vehicles have to be very, very careful as there's a road through a pedestrianised area and people are rather bold crossing the road. If the vehicle was to cause injury, the driver would likely be liable.
No, it's literal control. They'd rather operate a vehicle in a way that's statistically less effective (where effective is defined by a collection of algorithms, and we all know how perfect algorithmically generated routes are) than lose that control for a small benefit. Control itself is worth something to people.
A very large factor in this reasoning is that people look at the statistics and then intuitively decide the statistics will not apply to them because they are much better than the average driver.
The scenario in which insurance companies drive adoption of self driving cars depends upon insurance companies accepting the risk of system failure due to manufacturing defects or limitations that generate the wrong response in particular conditions.
I don't know by what metrics you're making these comparisons. Audio recognition and self-driving vehicles are apples and oranges.
Autonomous driving is difficult primarily because of a changing world and low tolerance for mistakes. The state space of situations you're trying to map is just large. It will take time. But it's not even close to impossible.
Audio recognition is difficult for a different reason: language is difficult to disambiguate without context. So today the limits of our audio recognition bump up against knowing the context of the words you're saying. It used to be the case that audio recognition had fundamental difficulties, e.g. at word boundaries, but those challenges are mostly solved. Some of the remaining challenges might not be solved without improvements in areas outside of the raw listening part of ASR.
> That being said, Toyota's (and Subaru's) approach is the smart way forward - add sensors and capabilities that augment the human, but leave the human always responsible and in control. In order for a crash to happen, the human AND the machine BOTH have to miss it.
This is your opinion. Folks at some other companies (e.g. Waymo) don't think this makes you safer. Human attention might not work the way you think it does.
I think that audio recognition is an order of magnitude less complicated than fully autonomous driving. The world is fully contextless outside the road to a not-even-at-insect-level brain driving two tons of metal, and anything can happen.
Voice recognition is getting just barely tolerable. If you have tried to do something important using voice recognition you'd probably go bat shit crazy after a while. If it had the potential to kill people no sane human would allow it.
Voice recognition also has a lot less market demand than self driving cars. Technology can be invented that is decades ahead of other fields if enough money is invested.
That is why I think self-driving cars and voice recognition are not comparable problems.
> Voice recognition also has a lot less market demand than self driving cars.
Just to expand on your point: today's 'self driving car' technology may be about where 'voice recognition technology' was 20 years ago. But 20 years ago there was no incentive for industry to invest, say, $100 billion in voice recognition technology; the benefits did not warrant such a huge investment. Whereas with 'self driving car' technology there is the potential to reap benefits, which is why industry today is investing $100 billion (all investments from all players combined).
> Technology can be invented that is decades ahead of other fields if enough money is invested.
> That is why I think self-driving cars and voice recognition are not comparable problems.
The huge economic value that can be created with a perfected 'self driving car' is on the order of hundreds of billions of dollars per year worldwide. Here is an example scenario:
In 2025, 'self-driving electric autonomous fleet' vehicles from the Ubers and Googles of the world offer a 'miles plan' (like our 2005-era mobile monthly plans of N minutes/month): say 1000 miles per month that you can hail any time, for a monthly price of $300. That is huge economic value for at least 50% of USA drivers, and people will embrace it.
A typical USA driver's total cost of car ownership today is in the range of $400 to $600 per month (car price + repair costs + car insurance + gasoline). That is without counting the 1.5 hours/day you have to pay attention to driving, time that could be used for other things with an autonomous fleet.
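A quick back-of-the-envelope with the numbers above (all of them are this comment's estimates, not measured data):

    ownership_monthly = (400 + 600) / 2   # midpoint of the TCO range above, $/month
    plan_monthly = 300                    # the hypothetical 1000-mile fleet plan
    hours_freed = 1.5 * 30                # driving-attention hours reclaimed per month

    print(f"${ownership_monthly - plan_monthly:.0f}/month saved, {hours_freed:.0f} hours freed")
    # -> $200/month saved, 45 hours freed (at the midpoint estimate)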
> The world is fully contextless outside the road to a not-even-at-insect-level brain driving two tons of metal, and anything can happen.
See that's where you're wrong. An autonomous vehicle doesn't have to have a human brain. It has to have very accurate sensors and classifiers tuned with generous safety params.
Like any vehicle it also needs redundancies for safety and sane default behavior (mostly "slam on the brakes").
Also, not "anything can happen." There are rules that govern the road that make this problem tractable. There are rules that govern physics that mean if you have adequate sensors + compute you can avoid hitting any objects at all, whether it's a person or otherwise. "Object x is 30 ft away with trajectory y, we're going 15mph, slam on the brakes."
The difficulties mostly lie in how we can "not slam on the brakes all the time and never make progress." E.g. if you enter a construction zone you better know what to do and not just sit still, otherwise you'll never get anywhere.
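The comment's "slam on the brakes" rule, as a minimal sketch. The 7 m/s^2 deceleration is a typical dry-road figure and the 100 ms latency is an assumed value; nothing here reflects any real vehicle's logic:

    def must_brake(object_distance_m, closing_speed_mps,
                   max_decel_mps2=7.0, reaction_s=0.1):
        """Brake if our stopping distance (reaction + braking) meets or
        exceeds the distance to the object."""
        stopping = (closing_speed_mps * reaction_s
                    + closing_speed_mps ** 2 / (2 * max_decel_mps2))
        return stopping >= object_distance_m

    # The example above: object ~30 ft (9.1 m) away, closing at 15 mph (6.7 m/s)
    print(must_brake(9.1, 6.7))  # False -- plenty of margin, no need to slam yet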
Would you please not do this here? We're trying for at least a slightly higher quality level on HN. One thing that helps is not to assume stupidity in the other; we even added a new guideline recently to say that. More diplomatically though.
Audio recognition may be apples and oranges to self-driving vehicles. But as it relates to autonomous driving, audio recognition (and I don't mean speech but general sounds) is something nobody has been talking about (at least publicly) as a sensor input for self-driving cars. Think about how much you rely on your hearing to drive safely and well. Sure, we use sight/depth and so forth substantially but we also rely on our ears listening for different kinds of sounds that can occur from any direction to warn us of changing conditions, unsafe circumstances, and so forth.
We can't even get speech recognition right and meanwhile audio (sounds in general) aren't even being seriously considered for self-driving purposes, AFAICT.
Curious, isn't it? Humans rely on auditory information to enhance situational awareness so you'd think that would factor into the design of a machine that needs as much situational awareness as possible.
For example, emergency response vehicles announce their arrival and direction often long before any visual contact is made. How will deaf autonomous cars recognize incoming emergency responders if the view is obstructed? Will they pull into the intersection just as a Fire Engine is running the red light?
Auditory input was deemed important enough that California even made it illegal to "wear a headset covering, earplugs in, or earphones covering, resting on, or inserted in, both ears" while driving just last year. [0] I suspect other jurisdictions have/will follow(ed) suit.
According to a quick google, noticing that other cars are pulling in/slowing down usually works if they can't see the emergency vehicle. Still, it's not "perfect", but then Hearing drivers are far from perfect either, and there's plenty of failsafes before an accident happens.
Fire trucks and ambulances I've seen when driving against traffic lights/flow of traffic always slow and stop to make sure that there will not be an accident even if they have the right of way - they don't drive like police in pursuit on TV. I'm sure there are exceptions but the general case is the emergency vehicles are planning for other traffic to not yield until it is clear the other traffic is yielding.
I'm sure there are exceptions but the general case is the emergency vehicles are planning for other traffic to not yield until it is clear the other traffic is yielding.
Yes, if the driver is well trained. When I was firefighting and instructing drivers, we always emphasized that while the red light and siren gave you the right of way:
a. they don't absolve you of liability for a crash (at least under NC law)
and
b. if you get in a crash on the way to the scene, you're not doing anybody any good, PLUS you've now created another incident requiring another emergency response, PLUS another company (probably coming from further away) has to respond to the original call.
I can't speak for police, but firefighters are actually, in my experience, taught to be pretty conservative when it comes to running red lights, proceeding against traffic on one way streets, and other similar scenarios.
We also always used to emphasize "it does no good to get halfway there, real fast".
There seems to be some limited automotive technology available that would convert sirens to visual signals, akin to doorbell or telephone strobes.
But the more interesting adaptive technology seems to be a re-wiring of the brain. [0] Deprived of auditory input and with a heightened reliance on visual cues, the autonomous driving version might be something like Lidar + other driver facial recognition.
Per [0] they are allowed to drive in the US, and do not have higher accident rates than hearing drivers[1]. Also, "There are now devices available that can be used in cars that react to the frequency of sirens and which emit visual warnings of approaching emergency vehicles to assist deaf drivers and those with hearing loss" (which could presumably also "assist" self-driving cars).
[1] This web site's bona fides aren't clear, but its mission does not seem to specifically include advocating for the deaf, so there's no obvious reason for the information presented to be biased in favor of the deaf.
Telling which direction a siren is coming from is really, really easy. You can do it with three electret microphones and a $1 ARM chip. A simple TDOA multilateration algorithm will do a far better job of it than a human, even in complex reverberant environments.
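A rough sketch of the idea (the mic spacing, sample rate, and the noise burst standing in for a siren are all assumed values; this is a toy, not a production multilateration algorithm):

    import numpy as np

    C = 343.0     # speed of sound, m/s
    D = 0.20      # assumed mic spacing, m
    FS = 48_000   # assumed sample rate, Hz

    def bearing_from_tdoa(mic_a, mic_b):
        """Cross-correlate the two signals to find the arrival-time
        difference, then convert delay -> angle relative to the mic axis.
        One pair leaves a front/back ambiguity; a third mic resolves it."""
        corr = np.correlate(mic_a, mic_b, mode="full")
        lag = corr.argmax() - (len(mic_b) - 1)        # delay in samples
        x = np.clip(C * (lag / FS) / D, -1.0, 1.0)    # keep arcsin in range
        return np.degrees(np.arcsin(x))

    rng = np.random.default_rng(0)
    siren = rng.standard_normal(2400)                 # 50 ms of wideband noise
    print(bearing_from_tdoa(siren, np.roll(siren, 3)))  # ~ -6 degrees off-axis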
Reliably understanding speech is effectively strong AI. Humans don't speak very clearly, but we're spectacularly good at inferring meaning from context. Accurate transcripts of colloquial speech are often completely incomprehensible, because a vast amount of information is conveyed through context. When we're listening to someone speak, we're decoding the phonemes into symbols, but we're also constructing a model of the speaker's mind and predicting what they mean to say.
I get your point but to nitpick, written and spoken language are not equivalent. We don't say things like "quote", "comma" and "question mark". We use things like intonation, gestures etc.
Agreed. I would think audio recognition is not comparable to driving, since the scope of recognizing all voices, accents, languages is vastly different than the set of rules for driving and the complexity of the road.
Realistically speaking, even if Tesla or any other manufacturer does get to that level of a "super advanced Einstein-genius" self-driving car, it will still be "not even close" if it's sharing the highway with 18-year-old Billy-Bob the speedster, with a penchant for cutting lanes and living dangerously.
Most car accidents are due to human error.
If all cars on the highway are self-driving cars, with the ability to learn and communicate with each other through mesh-network or what-not, with predictive capacity -- it will be a lot better than any driving human that can only see what's in front of them. We just need to get to that level where most cars on the highway are smart self-driving capable cars and we can build on top of that. No one should be relying on self-driving cars to do everything at this point in the game because we're not there yet.
> language is difficult to disambiguate without context.
So why don't Siri/Alexa/etc developers give them context? I would LOVE to be able to say "Alexa, play playlist on shuffle mode" or even simple "Alexa, turn on shuffle and go to next song"
But no, the word "and" is completely off limits. Saying "shuffle playlist" instead of "play playlist" also doesn't work. I have to literally use 3 commands to do what I want:
"Alexa, play Taylor Swift" ... "Alexa, shuffle" ... "Alexa, next song"
With context, I could say "Alexa, play Taylor Swift on shuffle"
Apparently Siri is gaining the ability to deal with follow up questions in iOS11. Excited to try it out.
Spotify is default. She also knows how to do noun recognition. If she can tell the difference between “play” and “stop”, she can differentiate shuffle and Spotify.
I’m also ok saying “Alexa, shuffle Taylor Swift” if that’s easier for their grammar.
That being said, Toyota's (and Subaru's) approach is the smart way forward - add sensors and capabilities that augment the human, but leave the human always responsible and in control. In order for a crash to happen, the human AND the machine BOTH have to miss it.
That's the approach we know won't work. That was Tesla's first-round autopilot, the self-crashing car. The one that would happily run into obstacles protruding into a lane.[1][2][3] There are two big flaws with that approach. First, if it's good enough that people can tune out, they will tune out. Second, if the driver waits until the self-driving system has clearly made a mistake before taking over, they will be slower at reacting than if they were driving, and may be too late. See especially [3].
Volvo's CEO takes the position that if one of their cars crashes in self-driving mode, it's Volvo's fault. Urmson, while at Google, pointed out the first problem coming up even with their test drivers.
> And it's also called "Autopilot" which sets an extremely different tone and expectation than other driver-assistance/safety technologies.
Only to those not familiar with autopilot systems in aircraft.
It's like the use of "hacker" - here on this site it still (probably) has most of its original meaning. To the rest of the world it means a guy in a ski-mask with a russian/chinese accent who's after your bank account.
Unfortunately most of the world thinks pilots in commercial aircraft turn on the autopilot and can then go for a sleep, so when they see "Autopilot" on a Tesla, they think they can do the same.
Only if you're happy waking up to find you're about to plow into another large object at a high rate of speed.
The Pilot in Command is expected to sit in their seat and monitor the aircraft, surrounding airspace and the radios, and be ready to take control at a moment's notice.
They can't go for a snooze without handing that over to another pilot first.
> That's the approach we know won't work. That was Tesla's first-round autopilot, the self-crashing car.
I think OP was talking about "crash avoidance" systems. Not systems that claim to drive the car but not really.
You're always driving, the car is just silently observing. If the car sees you're about to crash, it steps in and takes over. The opposite of Tesla's approach where you play the role of observer taking over when things go wrong.
The first approach is safe. The second approach isn't safe until full autonomy due to cognitive load and task switching delays.
There are many automotive "auto-brake" systems, usually controlled by a radar. Those are useful. They're usually programmed to act only when a collision is inevitable. They prevent some slow-speed collisions, and reduce the damage in some faster ones.
What they don't do is take over steering. A last-second takeover of steering would make things worse some of the time.
> What they don't do is take over steering. A last-second takeover of steering would make things worse some of the time.
I have a Honda. While its lane-keeping system won't steer during a crash, it definitely does steer to keep you centered in the lane (when it's able). That could help avoid all kinds of crashes due to inattentive lane-drifting.
I really don't want those on in stop-and-go rush-hour traffic. They're very likely to cause someone /behind/ the automated car to have an accident. (Yeah yeah, following distances; try building enough capacity for everyone to drive at the proper following speed and keep distance.)
Tesla took the opposite approach of what I described (the Subaru approach).
Tesla took the approach that the machine is in control and the human has to detect and take over when the machine makes a mistake. That is known to be a very problematic strategy, it's easy for the human to get distracted.
What Subaru has done is to leave the human in control, the machine only steps in when it sees a problem.
It's totally different safety outcomes. In fact, not even the same problem - one is "self driving", the other is "crash prevention".
Yes they are different. They are also WILDLY different value propositions. Crash prevention is nice, just getting safer and safer is nice--though I personally wouldn't stop driving ~$5k cars for even "really good" crash protection.
But a car that can drive me while I sleep and become a Taxi while I work is something I would go into debt for.
But a car that can drive me while I sleep and become a Taxi while I work is something I would go into debt for.
I don't think most regular people will actually own fully autonomous cars to rent them out. Big players will do the capital expenditure to buy a fleet, and rent out transportation service. Renting in uber style will be basically the price of owning, but without the up-front capital expenditure, and it will provide the convenience of choosing the type of car for the specific transport needs of the moment. The margins of these rent-a-fleet services will be low, so you won't have a hope as individual car owner of actually buying a car and renting it out without turning it into a loss compared to just renting.
> Tesla took the approach that the machine is in control
That's not really accurate. Tesla says the driver is ultimately responsible. That holds unless a court determines otherwise. The same is true for Toyota/Subaru. They're all SAE level 2 systems.
The opposite approach from this would be Waymo/Volvo who intend to take responsibility for decisions made by their vehicles. Those are SAE level 4 or 5 systems.
Whether or not it "works" it has the overwhelming benefit of being an incremental approach. Incremental (evolutionary) change tends to work while revolutionary is more tricky. e.g. video phones were demonstrated in the 1960s but are still not really widely used. Also no need for Star Trek - level AI, which is a significant advantage.
Seriously? All smartphones are video phones. I fail to see how they could be more widely used.
You may mean that people choose to make audio-only calls... but just because people choose to do that, it doesn't mean the technology isn't here & fully available in affordable, commercial, "good enough" form.
Interesting. The three of them show the car being hit from the left side when driving on a high-way. It seems that the autopilot is bad at recognizing objects that suddenly appear on that side.
More like: if you don't look like a car in the left lane, then you don't exist. Bam, you've got an accident.
Edit: Is the last video in the US? Is it normal for other drivers not to stop when they see an accident?
Right. Tesla's round 1 "autopilot" used Mobileye's car recognizer for obstacle detection. If it didn't look like a car, it didn't get recognized as an obstacle.
I wrote a previous YC posting on this, on why you need to use geometry first, then object recognition.
Even better than speech recognition, handwriting recognition.
Hello, Apple Newton, may you rest in peace. I'm not convinced we'll have handwriting recognition within my lifetime, and if we do I imagine that shortly after I'll go looking for Sarah Connor :)
Automated reading of checks and of envelopes is working very well. Automated envelope reading is now so good that the USPS only has one center in the US where humans look at images of envelopes. They just look at the hard cases, and if they show up there, they're really hard.
I use Google voice dictation to transcribe emails and text messages on my phone constantly. Not like "I tried it for a week and then went back". I have been using it daily for months. It's now vastly faster than typing it in by hand on a phone, and even the per-word error rate is actually better than mobile typing.
The interesting distinction isn't between life-dependent and non-life-dependent, it's between worse-than-human and better-than-human. I wouldn't "trust my life" to never making a typing error, and yet I trust my life to the driving my fingers do...and statistically, I have a small but real chance of death.
I disagree with the analogy to speech recognition. What makes speech recognition difficult is that it's highly dependent on context (sounds the same, but means something different depending on surrounding words, what the conversation is about, or even who you're speaking with). With driving, you should be able to make a good decision with just the instantaneous state, given enough information about that state (objects, velocities, etc). You can argue about what constitutes "enough information," but it seems plausible to me that given enough sensors we could meet or exceed the amount of information taken in by a pair of human eyes on a swivel.
By context you mean a timeline of events, and speech recognition is indeed dependent on that timeline.
Same applies to situational state recognition in driving. E.g., to derive speed, acceleration and direction of objects.
No, in this context (pun intended) of speech recognition, the context means external context, i.e., understanding lots of information about the topic of that speech, knowing what would the speaker might plausibly be trying to say, what real world entities might be involved, and how are they called/spelled - all kinds of information that is not included in the original audio data, things that the listener would know based on life experience.
I'm skeptical you can do real autonomous driving without context. In DC, we have roads that flow one direction part of the day and another direction the other part. We've got roads that will be shut down unpredictably when there is a diplomatic event. We've got constant construction, where a two lane road might be reduced to one lane with a human holding a sign or using hand signals to usher cars through on their turns. How does a self-driving car handle that without understanding context?
But you, as a human, don't need to know what happened before, to be able and drive through those conditions. You just need the immediately-available information, which is what OP defined as "no need for context" vs speech recognition.
How does a self driving car differentiate between a police officer flagging you down and a carjacker? Humans can make this judgement because we have a context for each of those situations based on our understanding of how the world works outside of just operating a vehicle safely.
To be fair, humans can't do this either, or there wouldn't be such a thing as carjackers.
One of the biggest challenges that automated systems face is that the acceptable failure rate for them is far below the acceptable failure rate for humans in the same role. To err is human...
It would not be too costly to augment roads with electronic signalling devices which give information to the software in cars. This information can do what signboards or traffic signals do for humans.
The difficult part - when there is an error in these signals, or things shut down, autonomous cars will suffer much bigger problems than human driven cars.
It's these edge cases that are the problem. We already rely on such mechanisms for planes (information comes from both ground control and onboard radar). But a lot of care and resources are required to get to an n-nines level of reliability.
For a self driving car, it's sufficient if its understanding of weird situations is limited to detecting "yep, I'm in a weird situation, I'm going to stop now and wait for a remote human to take control".
The situations you describe are rare - I've once had a diplomatic event that required weird rerouting and twice had cases where traffic was regulated by hand signals due to some crash on the road, but that means just a few cases over a whole lifetime. A system that can't solve these cases but recognizes them as unsolvable is a quite acceptable automated system if it can delegate control to a human inside or a remote dispatcher, which isn't that hard to do.
These situations aren't rare at all. Just in the last few months, I've had: change in left-turn traffic pattern leaving my office onto a major road; humans guiding traffic on 2-3 separate occasions (very common around July 4th); cones dramatically changing lane patterns in construction zones; two occasions of cops blocking off a road with their cars to let a motorcade pass.
And this is just me driving (i.e. my car is parked 90% of the day). If you're talking about a self-driving Uber in D.C., one of the above events will happen on a daily basis.
I imagine traffic in D.C. is incredibly atypical compared to most US cities. Issues common to drivers in D.C. are probably very rare to most drivers in the US, including those in other major metropolitan areas.
I live in New England and the poster's description of DC traffic sounds like what I see all the time, from cities like Boston or even small towns like East Longmeadow.
Your mention of humans guiding traffic reminds me of advice from my father many years ago: never assume that a human giving you direction when you're driving is giving you good information. Always evaluate whether what they're conveying to you may be either misunderstood (e.g. what does "wave" mean??) or just plain false. Hard for a human to do.
>"yep, I'm in a weird situation, I'm going to stop now and wait for a remote human to take control".
No, that's not sufficient.
If ten people did that in a critical area during a high demand hour it would be a news story and there would be criminal charges depending on the details.
If you redefine "sufficient" to include stopping your car on the George Washington bridge because it's confused by a construction zone it still doesn't solve the backup you cause.
We actually now understand that passing control to a human is seriously dangerous - unless you can do it a minute or so in advance, the human has to switch context from being a passive passenger (or, more likely, actively engrossed in something else) into being an active driver very quickly. Everything up to level 4 automation will cause accidents when the car attempts to hand control to a human.
Of course, there's a very reasonable argument that e.g. level 3 automation might cause fewer accidents overall, even if it kills people when it has no idea what to do, but convincing Joe Public that such a car with such a known flaw is safe is another matter.
The exceptional cases can often be the most important, especially when you are talking about moving humans from place to place. I wonder how a self-driving car would react to something like driving in a hurricane?
For a few more very common examples of humans using context:
A human will spot a person wearing headphones and recognize that person has low situational awareness. The computer doesn't come close to even having the optical resolution to do that, even if the AI were perfect - remember human vision is 570+ megapixels; even a 4K video stream is literally two orders of magnitude lower.
[Now think about the fact that if we built a camera capable of recording 400 megapixels, you'd currently need to schlep around a ~750 lb, 25-node cluster, consuming about 50 horsepower to feed it with electricity, just to be able to process the video stream at 25 fps. Moore's law ain't growing that fast these days, so matching the resolution of human vision is not a realistic option.]
Another example is kids. How does the AI recognize that the 5'1" 30-year-old woman has much better awareness and can be treated differently from the 5'2" 12-year-old boy? Humans can spot that difference even from behind.
How about recognizing an adult who is drunk? Or a blind person? Mourners at a funeral, or fans celebrating after a football game? Or a million other conditions that significantly affect pedestrian situational awareness that human drivers will instantly infer from context?
What will happen when kids figure out they can stop a driverless car on its way to collect its owner just by standing in the street in front of it? They'll have a lot of fun, for sure.
How about when carjackers figure out the same? That they can dress up like construction workers, stop the car in the street, tow it onto a flatbed with built-in RF jammer and head straight for their underground chop shop? There goes your cheaper insurance.
If the self-driving proponents were to say that human drivers would be banned, and all vehicles would therefore be self-driving, I'd have more respect for their arguments. That said, you still need to account for things like pedestrians and snow. If we're talking about "self-driving on the I-5 when it isn't raining and there are no human drivers permitted" then yes, I think we're probably close.
But that number is a calculation of the maximum resolving power of the human eye filled across a 120 degree field of view. The fovea is the only portion of the retina that actually attains that acuity and it encompasses roughly 2 degrees in the center of the retina.
There are roughly 120 million rod cells and 6 million cone cells in the retina: rods for low-light vision, cones for color. As each individual cone cell is primarily sensitive to one of red, green or blue, the cones match fairly well to the RGB channels of a pixel. So the eye could be considered to provide data roughly equivalent to a ~2 megapixel color image plus a ~120 megapixel grayscale one, well short of 570 megapixels.
Edit: And even that overestimates the amount of data the brain is actually processing. A 4K 60 fps video is handled by ~6 Gbps, and the human optic nerve only has roughly 8.75 Mbps of bandwidth.
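The arithmetic in this sub-thread in one place (the cell counts and the optic-nerve bandwidth are the rough estimates quoted above, not authoritative figures):

    rods, cones = 120e6, 6e6          # counts from the comment above
    color_mp = cones / 3 / 1e6        # ~2 MP, treating 3 cones as one RGB pixel
    gray_mp = rods / 1e6              # ~120 MP grayscale
    four_k_mp = 3840 * 2160 / 1e6     # ~8.3 MP

    video_bps = 3840 * 2160 * 60 * 12 # raw 4K60 at 8-bit 4:2:0 (12 bits/pixel)
    optic_bps = 8.75e6                # the optic-nerve estimate cited above
    print(f"retina ~ {gray_mp:.0f} MP gray + {color_mp:.0f} MP color vs 4K's {four_k_mp:.1f} MP")
    print(f"4K60 raw ~ {video_bps / 1e9:.1f} Gbps vs optic nerve ~ {optic_bps / 1e6:.2f} Mbps")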
A lot of processing is done "on the road", starting from ganglia in the retina itself; so a direct comparison with 4k 60fps video is grossly incorrect.
Granted, but there is arguably a lot more money behind autonomous driving than there ever was behind speech recognition and synthesis. It was a different problem space, but the situation is probably comparable to the space race - the winner will dominate a new market of hundreds of billions in size. The motivation is really really strong, and with it comes lots of funding and talent.
A little more context on the speech recognition part: I remember going to a trade show (many years ago) where a representative from one of the major speech recognition companies was showing off the technology. He wasn't just dictating text, but was also using the software to control the computer. I was floored by it. I was 100% convinced we had arrived at that dream of intelligent voice-computer interaction.
I've felt the same way after watching various autonomous driving demos (like Tesla's[1]). But then I remember my experience of actually trying the same speech recognition software myself; under unrehearsed real-world conditions with edge cases and human mistakes, the technology performed terribly. Any "intelligence" I'd seen in the demo was essentially smoke and mirrors.
Granted, current autonomous vehicle technology incorporates a lot more artificial intelligence than those speech recognition demos ever did, but the challenge is also significantly greater (and more life-critical). Sure, your sensors and cameras might be able to read signage correctly in 99% of conditions, but when there's graffiti on the sign at night during heavy rain, all bets are off. The human driver may have difficulty also, but the human driver has real intelligence and life experience in a variety of domains, enabling them to make inferences based on more than just statistical probabilities.
In my opinion, prior to solving all the edge cases at an acceptable level using AI, we'll solve a different problem allowing us to sidestep many of those challenges; we'll incrementally start building (and converting to) smart roads where only smart vehicles are allowed. Obviously it won't be everywhere, but the most important routes will be covered, and your safety on those routes will be much higher than it would be on traditional roads with AI or human drivers.
Possibly. I think it's more likely that systems get good enough that you can turn over control to them on certain limited access roads. Perhaps construction zones are required to transmit some sort of alert signal that requires a driver to take over within 60 seconds or something along those lines.
This could be a big improvement for both safety and driving comfort and seems as if it would be a much more amenable to solving over, say, a 10 year horizon than a cross-town Manhattan taxi ride at rush hour.
> Sure, your sensors and cameras might be able to read signage correctly in 99% of conditions, but when there's graffiti on the sign at night during heavy rain, all bets are off.
In the vast majority of the cases, that sign will have been seen by another autonomous car on a sunny day before it had graffiti, and stored in the map database. You should be comparing a whole fleet of autonomous vehicles learning from each other versus an individual human.
"I was floored by it. I was 100% convinced we had arrived at that dream of intelligent voice-computer interaction."
I remember talking to Apple's Kim Silverman, (I believe) head of speech recognition, somewhere in the end of the '90s. He said you had to spend a few hundred hours training the recognizer to get to a good level. That's not a lot more than one would spend learning to touch type at speed, but a large fraction of _that_ time is productive; training a speech recognizer wasn't.
Also, touch typing, once you can do it, works everywhere; speech recognition didn't work as well when there was background noise or echos. So, few people were willing to invest the time. And they probably were right.
I also agree with your statement that we will solve restricted but useful domains first, and that a general solution for autonomous driving will be a long way off.
I also agree roads will be adapted to the self-driving cars. It isn't rocket science to embed a steel wire in the center of each car lane that a robot car can detect, for example.
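It really isn't; here's a first-order sketch of what the control loop could look like (the names and gains are invented for illustration; a real system would need filtering, fault detection, and speed-dependent tuning):

    # Hypothetical wire-following steering: the car senses its lateral
    # offset from the embedded guide wire and steers back toward it.
    def steering_command(lateral_offset_m, heading_error_rad,
                         k_offset=0.4, k_heading=1.5, max_steer_rad=0.5):
        raw = -(k_offset * lateral_offset_m + k_heading * heading_error_rad)
        # Clamp to the physical steering limits.
        return max(-max_steer_rad, min(max_steer_rad, raw))

    # Example: 0.3 m right of the wire, pointing 2 degrees further right.
    print(steering_command(0.3, 0.035))  # small corrective steer to the left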
I remember a television program where the engineers of some European car manufacturer thought universally usable self-driving cars would be safer than human-driven ones in 10 years or so, but their human-factors specialist said it would be something like 40 years before they would hit the road, because she knew you can't expect humans to fully attend to driving on very short notice for a few seconds each week.
(that show also said, and showed, that features such as lane assist are more limited than the cars are capable of because of human factors)
Intercontinental planes are mostly autonomous. Trains are autonomous. Massive mine dump trucks are fully autonomous. These things are in use today and have been for almost a decade. They need not be perfect to be useful; they only need to be better than the average human driver.
Toyota is way behind in the game. What do you expect them to say about the competition?
Besides, trains run on tracks, each of which accommodates one train at any given time and rarely intersects; additionally, there's all sorts of signalling and safety infrastructure embedded with the track...
In stark contrast with lane-swerving cars, intersections everywhere, temporary rerouting because of roadworks &c.
Running trains must be orders of magnitude simpler than fully autonomous vehicles.
As for planes, they have a massive benefit in being engineered and maintained for levels of reliability no car can ever hope to achieve, and in having an awful lot of empty air around them (not to mention being able to move along the Z axis, too, to avoid collisions).
I don't doubt there are many lessons to be learned by designers of autonomous vehicles from work already put down in the fields of trains and planes - however, I'd argue they are very different problems.
This is a highly disingenuous use of the word autonomous. Airplane "autopilots" are given a set of waypoints and will fly a straight line between those. That's been possible with cars since (at least) the 1960s too; today it's easy enough that you could give it as an end-of-term project for a bunch of robotics undergrads. It's also completely useless outside of very specialized situations like the mining dump trucks you mention.
Do you ever wonder why there are two pilots on every commercial airliner? It's not because airlines like paying for extra pilots; notice how fast flight engineer/navigator positions disappeared as flight management technology and GPS made them unnecessary. But still two pilots are there on every commercial flight. Why?
Simple answer: the airline industry has learned, through a lot of lessons in blood over decades, that even with automation you really need a pilot who is always ready to intervene right this second, not in three minutes after they've mentally caught up to what the situation is. And in order to provide that sort of guarantee over many hours, you need two pilots, so they can switch off responsibility. That's because commercial airliners at cruise, way above any animals, terrain, etc. still have situations where immediate human intervention is safety-critical. (And when pilot minds fall behind the power curve the result is things like AF447, so it seems like the industry is right about the importance of this human monitoring and intervention.)
So if this well-studied problem with easier externals requires someone ready for immediate intervention at all times, how quickly do you think a much harder problem like driving is going to get solved sufficiently to allow human-free driving?
All of those things have a team of engineers maintaining the hardware and configuration of each installation, and operate "on rails" (either GPS-defined or literal) in very carefully controlled environments.
It's like saying that we've had factory robotics for decades and so we should soon have useful robotic housekeepers (in the general sense, not the Roomba sense).
I agree with you. We all hope we'll have amazing self-driving cars real soon; it's just funny watching the responses here. If you try to say autonomous cars may not be a perfect solution and available in 6 months, or SpaceX may not have a thriving colony on Mars in 6 years, then you will ruffle some feathers. I think guys who have lived through disappointing hype before have some perspective to share.
> I can't help but think about speech recognition 20 years ago. Many of the hot software packages claimed something like 96% accuracy, and that sounded great on paper. People thought intelligent voice-computer interfaces were just around the corner, yet here we are in 2017, and Siri/Alexa/Cortana are barely becoming usable (but still frustratingly lacking in many situations).
Computer speech recognition now has a greater accuracy than human speech recognition, but the types of use cases for talking to a computer are much less sensitive to error. Short snippets and phrases that are designed to get something done right now.
Compare ordering paper towels with Alexa, to asking a friend for a paper towel when your hands are messy. If your friend mishears you, they can just look over, see your dirty hands, and figure out you probably asked for a paper towel. Alexa has no such benefit, computers are held to a much higher bar.
> People thought intelligent voice-computer interfaces were just around the corner,
I would argue it isn't recognition accuracy that makes complex scenarios hard. I've seen context-aware recognition being rolled out for keyboards recently (Android's keyboard does it nowadays; it'll correct the past word based on what the next word is), and it seems like Android does the same for speech reco, based at least on observed behavior during use, but I'm just guessing.
The real hard part is making computers smart enough to do useful things.
We are a long way off from
"Book tickets at The Altair for between 6pm and 7:30pm next Friday and add the reservation to my calendar and my wife's calendar"
being possible in all but a few contrived scenarios. (Now that said, the above scenario is getting easier and easier if you ignore voice, ML is good at figuring out emails that have schedules in them, and forwarding them to other people is now simple, everything is much better than 5 years ago!)
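To make that concrete, here's a toy sketch (entirely hypothetical; real assistants use trained language-understanding models, not regexes) of why perfect transcription still isn't understanding:

    import re

    # One hard-coded phrasing of the booking request. Every other way of
    # asking has to be anticipated somewhere, which is the real problem.
    BOOKING = re.compile(
        r"book tickets at (?P<venue>.+) for between (?P<start>\S+) "
        r"and (?P<end>\S+) next (?P<day>\w+)",
        re.IGNORECASE,
    )

    def parse(utterance):
        m = BOOKING.match(utterance)
        return m.groupdict() if m else None

    print(parse("Book tickets at The Altair for between 6pm and 7:30pm next Friday"))
    # {'venue': 'The Altair', 'start': '6pm', 'end': '7:30pm', 'day': 'Friday'}
    print(parse("Get us into The Altair sometime after 6 next Friday"))
    # None -- perfectly transcribed, still not understood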
Isn't that kind of the point though? Yes, computers can hear better than humans, but they fail miserably at understanding the meaning of what they are hearing and responding with the appropriate actions. This is exactly the same issue that is/will be faced with self driving/driverless cars.
>Computer speech recognition now has a greater accuracy than human speech recognition, but the types of use cases for talking to a computer are much less sensitive to error. Short snippets and phrases that are designed to get something done right now.
You just said that speech recognition doesn't work in general. In general is exactly where self-driving has to work. If we just needed self-driving trains, we'd have them already*
*We do have them, but strangely there isn't much demand since human train drivers appear to do an ok job most of the time.
> You just said that speech recognition doesn't work in general.
I replied that the accuracy is high. It turns out, in retrospect, that accuracy and usefulness are two different things.
In regards to automated driving, achieving better than human isn't all that hard. Heck backup cameras with little "warning zone" lines on them are better than human for the one particular task of backing up. Cruise control systems that maintain distance are better than humans. We are incrementally getting there. The progress is much different than with voice reco.
The problem with voice reco isn't recognition technology, that works fine, it is with having computers understand what the hell to do with the voice. Contrast that to driving, where the end goal is easy to list - Rule 1: Don't hit anything. Rule 2: Get to the destination. (Rule 1 is the hard part, Rule 2 99.9% solved!)
With Voice Reco, we got the transcription part down, but... now what? In the example I gave above, knowing how to make a reservation is painful thanks to market fragmentation (not everything is an API), and people generally don't go and tag all of their contacts with their relationship status. I happen to have my wife under "wife" and my mother under "mother"; that simple step alone gives me much more natural usage of voice input.
Then when I say "OK Google, directions to my Mother's house", well, that still doesn't work for a thousand little reasons[1], even though each and every word was correctly transcribed. (I get a nice Google search instead!)
The set of situations that can happen while driving is far smaller than the set of interactions that can happen over voice when users expect a natural interface. Yes driving is really complicated, but it is possible to get a group together and after a day or so, brainstorm everything that could happen on the road out to 2 standard deviations of likelihood.
It might take an hour+ just to list all the ways someone might ask for directions.
[1] Mainly because no one programmed it to understand that particular way of making a request. It is annoying because I can ask it for directions to home, and that works fine, so I just figured directions to "Contact Name's House" would also work. Of course if I lived in NYC I'd expect "Contact Name's Apartment" to work!
> The problem with voice reco isn't recognition technology, that works fine, it is with having computers understand what the hell to do with the voice.
If I type "remind me at 9am to water the plants", it does the right thing 100% of the time.
If I say "OK, Google, remind me at 9am to water the plants"...not so much.
If it just got "remind me at 9am" 100% of the time, I'd be satisfied. The fact that "water the plants" becomes "watering lamps" doesn't matter--I can figure out what I meant.
But it doesn't transcribe "remind me at 9am" correctly. It knows what to do with that text, but it doesn't get that far reliably enough.
Maybe I haven't tried saying this to humans enough for comparison, but right now, I'm not satisfied with the transcription of even the most common phrase. (Well over 90% of what I say to my phone starts with "OK, Google, remind me ...")
---
For fun, I just tried reading this out to my phone, and it got that phrase all three times:
> If I type remind me at 9 a.m. to water the plants it does the right thing 100% of time if I say OK Google remind me at 9 a.m. to water plants not so much if it just got remind me at 9 a.m. 100% of time I'd be satisfied the fact that water the plants becomes watering clamps doesn't matter I can figure out what I meant but it
Not bad. It cut off partway through, and maybe I need to speak my punctuation, but it's less error prone than normal. Helps to speak very clearly, unsurprisingly.
Every time this gets brought up here people scoff. Nobody seems to be willing to notice that tech companies have for quite a while now been misrepresenting the pace of innovation in areas that not-coincidentally are attempts at diversifying revenue streams by hitting new tech home runs. See also: Glass, any of the other nonsense coming out of Google X (Loon? Cyber contacts?), drone delivery, digital assistants, and so forth. Not to say cool stuff isn't happening in all these fields but we are not just a few short years away from the tech being viable.
I think instrumenting the roads in one way or another will go a long way to enable real autonomous vehicles. Sure it's expensive, but on the upside it might actually work in this century. We just need to define a standard and let the free market go to work.
Until we get a very generalized form of AI, I think cities will have to be remodeled to assist self-driving cars. Places like NYC (Manhattan) can be easily converted into self-driving car cities given their grid-like layout, and I can easily see China building cities around the idea of self-driving cars.
I don't think the technical aspects are that difficult for 99.99% of cases. It is the 0.01% of unknowns that will be difficult to overcome. But we can minimize that by modifying or building cities specifically for self-driving cars.
The most difficult part I think is the legal/regulatory issues.
This is why I see self-driving cars taking off in china/asia first before the US. They'll probably limit it to city limits initially. And when the technology is mature enough, broaden it to the entire country.
I absolutely disagree. Self-driving cars are complex, but not unmanageable.
> What people don't realize is that full autonomous driving will require more than just faster/smaller/cheaper technical innovation - it will require the refinement of innovations that probably haven't even been invented/researched yet.
Like what? Object detection/recognition? Better sensors? Path/trajectory prediction? Vehicle control? Sensors and processing/unifying their data? The hardest part is getting all these building blocks to work together, and then producing a mass-production product from it, but we do have the base technologies; companies like Google, Tesla and Volvo have already proven that.
The problem comes down to software, and Toyota has the problem that it's a hardware-focused company. In the car-industry, just like in any industry, software will become more and more important, and Toyota - just like many other car-companies - simply doesn't have an answer here since they don't understand it.
The crux here is not really about semi-automation for highway driving in personal cars, but about companies that can only hope to live up to their investors' hype if they get rid of human costs, like Uber. And the idea that the capabilities of a car-for-hire could be fully automated to the point of removing any human from the service is utterly ridiculous to anyone who has ever seriously considered what that problem entails. Navigating a vehicle from point X to point Y via chaotic and ever-changing city streets and highways is the easy half of that problem.
I have to wonder what motivates him to admit this; surely hyping his technology is a formula for attracting funding, right? Why admit your work has such a long time until return?
the hype around it is fun, but the reality is that autonomous driving is harder than anything we've done before as a species and we have a bunch of the best minds working on it already (as the article notes).
rockets and electric cars are really cool, but those have largely been engineering challenges. autonomous vehicles are in the basic scientific research stage. i can't imagine we'll see true autonomous vehicles within the next 20 years.
i say this not to discourage the efforts to build this wonderful technology, but only to temper expectations, so that we avoid implausible conclusions like "uber will save itself by replacing its fleet with autonomous cars in the next 5 years".
> autonomous vehicles are in the basic scientific research stage.
Autonomous vehicles have been virtually out of the "basic scientific research stage" ever since DARPA's Grand Challenges in 2005 and 2007.
While there is still a ton of research being done, most of it has been focused on refining existing solutions, or applying existing solutions into new problem spaces. For the most part, we know what is needed both in hardware and software to gain Level 5 capability, and for the most part we have working examples of all those functions.
What is left is refinement.
I also believe that Tesla is on the right path by using cameras for its main sensor input (augmented by radar, and probably some LIDAR too). We already know that our road system can be navigated visually - it's inherently designed for this. In theory, radar and LIDAR shouldn't be needed at all, but just as they are useful augmentations for humans (in those areas where they are used - such as proximity detection on some vehicles), so they are also useful for self-driving systems.
SDCs could be very disruptive to the traditional automotive business model of selling a personal car to each adult who needs transport. Autonomous hire cars mean cheaper transport for users and potentially a much smaller market for new cars.
While we have traditional manufacturers on both sides of the fence, I'm more inclined to believe companies like Ford who are running towards an autonomous future despite the effect it will have on the way they do business.
While I don't have any example articles, I do know there have been a few written on the reality that millennials and the following generation are much less likely to own their own vehicle or even to drive. This has been attributed to a number of factors (a large portion being those generations' widespread and intense usage of various social media platforms).
So manufacturers are already feeling the pinch, so to speak.
While (as you put it) "autonomous hire cars" will make the market for new car sales (and car ownership in general) smaller, I don't think it will completely go away, for a couple of reasons.
First off, such form of "public transportation" is still subject to "the drunk person who pukes in the seat" syndrome. You will quickly get vehicles on the road that are virtually rolling trash cans, and when you encounter one, you'll have to make the choice to either use it anyhow, or send it back and wait for another to come (potentially making you late to whatever it is you are doing).
Secondly, a service of "hire out your personal car" might become something; kinda like driving for uber, but not actually driving. Make money with your car when you aren't using it. Of course, you could also find your car trashed in the process (I'm sure AirBnB suffers from a similar problem).
Ford and other companies are just likely hedging their bets, plus whatever tech they do develop can also bring them a tidy sum by licensing it forward, or selling it off.
I think Toyota exists in a different dimension than the rest of us. Toyota is so convinced that the future of cars is hydrogen that their electric car efforts boil down to a four-man team. Dinosaurs! As for self-driving tech, wasn't there a successful multi-manufacturer trans-European truck convoy last year? Self-driving tech is already at work at mines. Decades? Really, Toyota.
Well, are you so sure that solving hydrogen delivery and storage is ultimately harder than making safe and cheap batteries for practical cars with a range of over 600 km?
People have a habit of thinking linearly, however technology builds on top of technology (leading to exponential improvements).
Halfway through the Human Genome Project, they had only mapped out 1% of the genome. A lot of people in that field and working on that project were thinking the same thing - "not even close". But over the next 7 years, they completed the remaining 99%. This is the power of exponential progress.
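A rough sketch of the arithmetic (assuming, purely for illustration, that sequencing output doubled roughly every year):

    # If the mapped fraction doubles yearly, being at 1% "halfway
    # through" says almost nothing about the finish date:
    frac, year = 0.01, 0
    while frac < 1.0:
        frac *= 2
        year += 1
        print(f"year {year}: {frac:.0%}")
    # ...reaches 128% at year 7 -- done, with room to spare.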
The only thing that is going to halt this progress is to hit physical limits and not find a new way of doing things to get around the physical limits. New technology opens new possibilities.
Technological curves tend to look exponential at first, then logarithmic. We went from first manned flight to first supersonic manned flight in just 44 years. Seventy years later, commercial supersonic flight has come and gone (Concorde retired in 2003). In the energy sector, the thermal efficiency of coal plants improved dramatically from 1880 to 1950, then pretty much hasn't improved at all since. The number of years someone can expect to survive after qualifying for Social Security at age 65 has gone up only about five years since the age was first set to 65 back in 1935.
Futurists have a habit of thinking exponentially for things that ultimately end up with asymptotic progress.
Driving cars is ultimately about learning and learning is asymptotic. A child typically progresses from not knowing how to control their vocal cords to 50% of their peak vocabulary within 5 years. It takes several times that amount of time to be able to capably work a white collar job (a feat many fail at), and decades more to be able to make a meaningful and attributable contribution to society (something almost all fail at).
The genome project is a terrible example because mapping progress is a second order effect to the actual technological progress of learning how to map the genome. It would be reasonable to project that once we know how to build a level 5 car that AVs will consume 100% of miles travelled within a very short period of time. But learning how to get to level 5 could very well be asymptotic in the same way that it takes decades of asymptotic refinement to turn an exponentially learning toddler into a PhD.
It's worth asking which problems fall into the set of those that can eventually benefit from exponential progress, and why. This reminds me of Taleb's distinction between Mediocristan and Extremistan, or non-scalable randomness vs scalable randomness.
Level 5 is way out there. But at least from a technological standpoint, if Waymo isn't already at a minimum viable product for L4 commercial deployment, they're pretty damn close. Deploying 600 vehicles is a huge financial commitment; they wouldn't be doing it if they felt their self-driving car project would remain in the science experiment phase for another decade. They were ready to roll in 2015 with their Koala cars, driving slowly on Mountain View roads, but were derailed by regulations.
Completely agree with this. It will be ridiculously difficult to get to Level 5 automation in cars. It's decades away, not years. Ironically though, part of the reason it's so difficult is that during the transition the roads will still be populated with human-driven cars. If we passed laws to the effect that "as of date X, no human-driven vehicles or pedestrians will be allowed on roads in the set {Y}", we could get to Level 5 much faster.
> It will be ridiculously difficult to get to Level 5 automation in cars. It's decades away, not years.
One could make the convincing argument that Waymo's vehicles are already at Level 5; where they probably struggle (I have no examples or data on this) is probably in inclement conditions (rain, snow, fog, etc). That said, even in such conditions, they probably perform much better than a human driver.
For instance, most human drivers in such conditions - even when they struggle to see the road clearly - continue to drive anyhow, mostly blind, instead of doing the right thing and pulling off to the side of the road and waiting, which I bet is a behavior that Waymo's vehicle performs when it struggles beyond a certain level.
In other words, Waymo's vehicle is likely better at determining when NOT to drive, and acting on that determination, instead of being stubborn and irrational in the face of evidence to the contrary.
I agree. The other day I was driving on the highway and ahead of me was a pulled-over pickup and tons of personal belongings strewn all over the road. Obviously the stuff had blown out of the truck. The driver was on foot, dodging cars while trying to retrieve his cargo. I was able to slow down and thread my way through the junk while avoiding hitting him -- or making myself a target for the cars behind me --
but what would an automated system have done? Probably its best choice would have been to pull over and stop because it didn't know what the heck was going on or how to deal with it.
I've always had a hunch (and mind you, I have no proof of this, I'm just talking here) that the software required for a level 5 self driving car is, on some mathematical level, equivalent to the software needed to pass the Turing Test (w/out hacking the rules). In other words, if we have the capacity to do one of those things, we will have the capacity to do the other (in much the same way that seemingly unrelated problems in theoretical CS are actually restatements of the same problem). It's probably BS, but it's an interesting idea that I can't quite seem to shake.
So are you saying Waymo's cars are capable of passing the Turing Test? Because they are arguably at level 5 capability.
I highly doubt it, though.
The thing is, while I believe they are already at this level, their technology is obviously not perfect. It never will be. Perfection in any space of technology is just not possible, because perfection is not possible in the physical world for anything. Perfection is a mathematical abstract at best.
Instead, what we can hope to achieve is "better than human drivers", and Waymo's vehicles have certainly achieved that, imho. They have logged more miles with their self-driving vehicle technology than most people will drive any single car of their own, with an accident rate far below that of the average human driver.
For the accidents they have had, most occurred when the car was in manual mode, and the rest were low-speed incidents. None that occurred while in self-driving mode (that I am aware of) caused any injury or death to any occupant of the vehicle.
I'll leave this comment with this video, to show what autonomous vehicles were capable of in what now seems like the distant past (I know it honestly wasn't that long ago, but time sure flies with this tech):
Note that this was in 2010, using the tech in Stanford's Junior car (tech that would eventually lead to Google's and later Waymo's vehicles), and far from perfect; I can guarantee you that the systems used are much more advanced and better at control today.
Waymo's self driving tech is definitely not anywhere near level 5 capability (if that were true, they'd rule the world right now and we'd all be using their tech). From Waymo's Wikipedia article:
>As of 28 August 2014, according to Computer World Google's self-driving cars were in fact unable to use about 99% of US roads.[57] As of the same date, the latest prototype had not been tested in heavy rain or snow due to safety concerns.[58] Because the cars rely primarily on pre-programmed route data, they do not obey temporary traffic lights and, in some situations, revert to a slower "extra cautious" mode in complex unmapped intersections. The vehicle has difficulty identifying when objects, such as trash and light debris, are harmless, causing the vehicle to veer unnecessarily. Additionally, the lidar technology cannot spot some potholes or discern when humans, such as a police officer, are signaling the car to stop.[59] Google projects plan on having these issues fixed by 2020.[60]
It's really not better than human drivers, and I don't know where this myth comes from.
Does anyone have concrete evidence on an over-under date for something commercial, e.g., availability of self-driving (or at least a self-delivering) taxi? The most specific hint at a number I've seen recently was in the neighborhood of 2022, which comes from a recent Morgan Stanley report on Waymo: "The analysts believe Waymo can get to an operating profit by 2022 and reach 8% margins by 2030."
There is actually no reason to predict this to be years away, except for, like, future shock or something. The big thing holding it back is just general disbelief.
Waymo is actually testing for real in Arizona with selected people from the public.
Yes, they include an employee in the car to monitor or take over if necessary. But from the numbers they reported to the DMV like a year ago, with Waymo's cars that happens surprisingly little. https://www.dmv.ca.gov/portal/wcm/connect/946b3502-c959-4e3b... Like very, very little: 635,868 miles and 124 reportable disengages (roughly one every 5,100 miles), as of last year. Significantly improved now. In the rare case of a problem they can send a human to help.
If they had legal clearance or a waiver or something, they could TODAY just take some routes and times that had little traffic and not include the test/backup employee. Again, the number of times they report the driver needs to take over now is minuscule. They could then start charging for the rides. Then that would be a commercial deployment.
For them to make that widely available, at first for selected low traffic areas and times to reduce the risk of any incident or needing to send a human driver, is just a matter of scaling up their fleet and the legal issues.
Tesla is pushing very hard, to the point of being unsafe, to get their autopilot to actually work in as many circumstances as possible. They have literally been selling cars billed as self-driving. The cars are collecting massive amounts of real-world data. They have hired many genius AI experts. Unless there are too many crashes or the company goes down in flames (which is possible considering how aggressively they are testing and releasing software even with issues), they will push to decrease the amount of driving the human has to do to close to 0 as soon as possible. Musk will try as hard as he can to get 75% to 100% there before the end of the year, because that is literally what he has promised. By the end of next year his actual conservative expectation is to be more than 80%.
As far as operating profit who knows, but there is no way that we will be waiting until 2022 for this to be deployed, at least in somewhat limited routes and times.
The tricky part about a wager is that it gives people a strong incentive to reinterpret events to their advantage, and it's tough to define this in an objective way. Even if it is defined objectively, there is ambiguity in language, and again they may not realize they disagree on what they are wagering about.
But maybe in a few days when I get paid again I could make a wager if there was some agreement on what we were wagering. I have a feeling that once that got pinned down you wouldn't want to continue the wager.
2021 is four years from now. Do you really think that, in the next four years, this early rider program https://waymo.com/apply/ won't progress to allowing rides without the supervisor employee present on some rides? Once they have that working on a regular basis, should we not assume they will start charging for rides? They could actually do that now for certain routes and times if they had the government sign off on it. So it is feasible (although unlikely) that you could lose the bet tomorrow.
There are several other advanced self-driving vehicle programs out there including Cruise, Uber, etc. They are also using the LIDAR technology and detailed maps etc. Audi has announced they will have a 'level 4' self pilot mode for freeways in 2020-2021 https://media.audiusa.com/models/piloted-driving They announced that in 2017 (or maybe 2016) because they have been partnering with nVidia and saw nVidia actually demonstrated deep neural network autonomous driving. https://www.youtube.com/watch?v=fmVWLr0X1Sk
You seem to have a more liberal interpretation of "somewhat limited" than I do.
I have ascribed a 45% chance to this statement, and last year bet $500 against it taking place:
"By July 2023, a self-driving car can be reliably hailed by a member of the general public in at least 10 North American cities. At least 8 cities must be outside the San Fransisco Bay Area. The car must available on at least 50% of days, i.e., not confined to very narrow weather or traffic situations. No back-up human may be physically present to take over in an emergency."
50% of days in any city means it can handle Minnesota snow and ice.
As you have stated it, implying the car can go anywhere at any time (no constraints on routes mentioned) and is available in most major cities, that's very speculative that it would be deployed to that degree.
It will be deployed within a few years although with some reasonable constraints and it will be extremely useful even with those constraints.
Edit: Actually, looking at that statement again, some of what I read as implied is actually just ambiguous. I would make the bet if I could correct the ambiguous parts to be more realistic.
This will probably have a chilling effect on VCs investing in this space. Maybe not White Walker chilling, but it can't be seen as enthusiasm coming from the world's largest car manufacturer. I worked for a router company; when Cisco, our most likely exit, leaned ever so slightly in the other direction, the VCs pulled our funding. Poof.
This is a migration problem. A network of only self-driving cars is effectively a more complex train without tracks. This does not require an AI, just a very reliable and coordinated state machine (see the sketch below). A mixed set of AI and human drivers is much more complex. There is a possible future where towns, countries, certain highways, etc. adapt to the needs/demands of humans such that the vehicles and rules of the road combined create the desired environment for autonomous vehicles to thrive. Everyone assumes technology will adapt to us, but history shows that humans do just as much if not more adapting (see: suburbs/highways, the industrial revolution, tech addiction, Tinder, Uber, etc). In many cases this is powerful humans leveraging tech to bend the will and change the lives of the masses. Not saying this is good or bad. Just saying... futurists are generally wrong and overly optimistic (and/or just really good marketers).
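A toy version of that "train without tracks" idea (purely illustrative; the segment-reservation protocol here is invented, not any real system): each car may enter a stretch of road only after a coordinator grants it an exclusive reservation, so the problem becomes reliable state and messaging rather than perception:

    # Toy coordinator: the road is divided into segments; a car may
    # advance into a segment only while it holds the reservation.
    class Coordinator:
        def __init__(self):
            self.reserved = {}  # segment id -> car id

        def request(self, car, segment):
            if self.reserved.get(segment) is None:
                self.reserved[segment] = car
                return True   # car may advance
            return False      # car must wait (stop state)

        def release(self, car, segment):
            if self.reserved.get(segment) == car:
                del self.reserved[segment]

    c = Coordinator()
    assert c.request("car_a", 12)       # granted
    assert not c.request("car_b", 12)   # blocked: segment occupied
    c.release("car_a", 12)
    assert c.request("car_b", 12)       # now granted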
First step should be a car with driver assistance so sophisticated, a human driver can't crash it. Seems like a much more tractable problem, and one I personally would pay quite a bit of money for in a few years, when my son is old enough to drive.
For you maybe. Redundancy in my case comes at a price: kids are about $300k+ a pop nowadays between ages 0 and 18, assuming upper middle class lifestyle. More if you pay for college. That being the case, I'd be willing to pay double the price for a car that verifiably makes it darn near impossible for the driver to kill himself.
The Roman Rule: The one who says it can't be done shouldn't interrupt the one doing it.
I agree that it's harder than, say, Musk or Waymo's teams want to admit. But it also seems to me like Toyota has a case of sour grapes: everyone around them is doing interesting, useful things, while Toyota is stuck at lane departure and adaptive cruise control and blind spot detection, all of which work pretty well for them.
To be honest, Toyota's cars and the systems in them feel rock solid. I don't get the same confidence from, say, Tesla. Have a look at YouTube videos of Tesla's cars doing scary stuff like trying to drive into the median suddenly. It's nowhere near ready for public use. If they were selling the same number of cars as Toyota, there'd be a lot more accidents happening.
Toyota got into autonomous driving late. Then they hired Gil Pratt from MIT. Pratt used to run the MIT Leg Lab, which Raibert ran before Boston Dynamics, but Pratt didn't do much there.
It's clear that autonomous driving can be made to work, because Google/Waymo is doing it. It's hard and expensive and it takes a lot of sensors. It also takes extensive testing. Waymo drives 25,000 autonomous miles a week. Volvo has level 3 working on some freeways in Sweden; their 100 users are not required to watch the road while in auto mode.
There are other startups trying to do it. There's the "fake it til you make it" crowd - Otto and Cruise. (Otto's highly publicized Budweiser truck delivery demo was on a nearly deserted freeway surrounded by chase cars.) There's the "it's just a small matter of software" Tesla approach. There's the "throw machine learning at vision and hope" crowd, some small startups. That's what you get if you take the Udacity course and start coding. 43 companies have California DMV licenses for autonomous vehicle testing.
Toyota has been making some bad business decisions lately. They don't make battery electric cars. (They're fixing that, but won't be shipping until 2022.[1]) Instead, they've been pushing cars that run on hydrogen. Toyota sells the Mirai in California, and has a few hydrogen stations so it can be refueled. They sell about a hundred cars a month.[2]
Of all the companies you mentioned, only Toyota is a top ten automaker, and is indeed number one by many measures. I guess you think experience and a track record of phenomenal success is a disadvantage. I could agree with you in some contexts, but not this one. The autonomous car hype is the emperor’s new clothes, and Toyota is not your average industrial behemoth. Those guys literally wrote the playbook on nimble manufacturing.
Toyota and other car companies are fantastic at building cars, yes. But autonomous driving is at its core a challenging software problem. No major car manufacturer outside of perhaps Tesla has a track record of shipping software at scale. They're hardware companies.
Your sentiment here is like saying Samsung is great at making phones so they must be great at Mobile OS's. They're fundamentally separate things, and we frequently see that proficiency in one area doesn't necessarily translate to proficiency in another.
> Toyota and other car companies are fantastic at building cars, yes.
No, Toyota is fantastic at designing, building, marketing, managing logistics, integrating credit systems, etc., for cars.
Selling a Level 5 automated car is as much marketing as it is software. It could work perfectly but if nobody trusts it enough to buy, at a low enough price point, then it won't matter.
It's hard to sell something you don't have, and Toyota doesn't have competitive self driving tech or electric car tech.
Arguing that they will be a leader in self-driving cars because they are a leader in selling legacy cars very much smacks of the "PC guys aren't going to just walk in" comment regarding smartphones.
There's an old saying that goes something like:
X + computer = computer
So
Teletype + computer = computer
VDU + computer = computer
Phone + computer = computer
Computers and IT driven companies are eating whole industries.
Unless they have a huge cultural shift in engineering toward safety and best practices, I certainly wouldn't trust them with my life. Read through the Barr report on the unintended-acceleration incidents and you will see all the untrustworthy engineering going on there.
When Toyota talks about "full autonomous driving", they really mean fully autonomous in a safe and predictable manner. When Uber or Tesla says the same, the implication is that a "good enough" level will be good enough - and that level can be achieved much sooner than whatever Toyota is talking about.
If drivers' pedal errors lead to unintended acceleration in one car, but somehow don't cause unintended accelerations in all the other cars, then maybe there is something wrong with how the pedal reacts.
I listened to that episode, and sort of regret it. I've heard these arguments, and they still aren't convincing. Have you read the article on their spaghetti code?
To be clear, I believe that most cases of unintended acceleration are caused by pedal confusion/floormats. Just because that's true of most accidents, that does not mean it's true of all of them.
Maybe it's because I work in the embedded space, but I've seen code written as indicated, and terrible code like that is not reliable. It might work 99.999% of the time, but given 100,000 units, a problem might occur every couple of weeks. There is nothing magical about automotive software.
I've noticed that when a Toyota is in the customer's hands, it just does the job, pretty much whatever the task.
They learned long ago that a satisfied customer is the best advertising. Yes, they still need to advertise, like all companies. The average consumer still cares more about cup holders than a reliable engine or powertrain.
I have a weird feeling they are farther along with fully autonomous driving than they claim.
They made their reputation on reliability. They didn't come out in the '80s and say our vehicles will drive you 200,000 miles without too many garage visits.
Let the other companies produce the first round, and take all the heat for mishaps. Then, if they haven't outlawed them, just sell the customer the vehicle that drives itself flawlessly.
They don't need money to promote their stock. They don't need to finance their research. Why claim they are close to AI driving until the product is perfect?
They could wait a few years, and decimate all competition with a superior product.
The only realm that Apple dominates is smartphones. They're great. I enjoy my iPhone. It also happens that iPhone sales are enough to be the biggest publicly-traded company in the world.
In my mind, “PC market” means personal computing devices of all form factors. Any other definition is splitting hairs to serve your desired outcome.
Anyway, when is the last time software alone got someone from point A to point B? It’s a hell of a lot easier to license software from a provider than it is to outsource the building and shipping of reliable vehicles.
> Of all the companies you mentioned, only Toyota is a top ten automaker, and is indeed number one by many measures.
Sounds a lot like what people were saying about Nokia in 2007. It had a near monopoly and was really good at executing pre-iPhone smartphones; however, they were never as invested in software as Apple or Google were, and we all know how that ended.
Hell, you can do an apples-to-apples comparison. Honda said the same thing about both autonomous driving and EVs. They are waaaaaaay behind the curve and are hoping that FUD can keep people from buying their competitors' offerings.
From the scanning pattern shown, it's a two-axis mechanically scanned system.
Resonant vibrating mirror? Mirror galvos? (I've played around with those.)
Those work fine but have frame rate limitations. The article talks about "switching the system into high gear". You can trade off resolution for frame rate and may also be able to change the field of view. That complicates the imaging system; now you need gaze management. Not a bad idea, but I think the no-moving-parts solutions will win in the end.
Where did you get the information that the current Volvo experiment is at level 3? I tried looking at the project home page, but couldn't find any info about it. For their commercial offering they are apparently going straight to level 4, skipping level 3, so running level 3 tests seems odd:
http://auto2xtech.com/volvo-to-skip-level-3-autonomous-mode/
I just don't understand Toyota's product strategy here. They are like Apple in car manufacturing (not comparing design, but reliability and sheer quantity of products). So I'd assume they'd have understood the demand for electric cars for some time now and would have come up with electric cars that have great reliability and are cheap. But all they have done is introduce the Mirai, which only sells 100 a month and is costly. Very un-Toyota-like! It seems that management is not at all interested in anything except the hybrid and gasoline-powered models.
They produced and delivered a popular, well-received electric car and have been selling it since 2010.
I assumed we would, after a few years, see electric Jukes and 350Zs and an Infiniti model. Maybe an electric GT-R as a "halo" model. Something? Anything?
Instead, seven years went by and they have managed to (almost) release a second-generation Leaf.
Indeed, very disappointing. An even bigger threat for these Japanese automakers is becoming irrelevant once car companies without a ride-sharing platform of their own start becoming commodity, wafer-thin-margin businesses. They need to pivot hard in the future or else the Teslas and Waymos will overthrow them in a decade or so.
They also have the "e-power" platform, of which the Note version is supposed to be selling well, and a Juke version has been shown in concept form.
This is, basically, the Leaf setup but with a much smaller battery and a range-extending generator. Currently you can't even plug it in, but it does give higher mpg and a better driving experience around urban areas compared with an ICE car.
They're in an innovator's dilemma because they (all the established players) have a huge advantage in drivetrain/engine technology (which is very difficult to replicate) over any newcomer, an advantage they would be giving up if/when the market moves to fully electric drivetrains.
They've been selling partial-electric cars since at least 2014. The Prius Prime allows for electric-only driving up to about 30 miles per charge. That's enough for my commute. I got 3500 miles on my first tank of gas.
Toyota has put a small battery and electric motor in their cars for 20 years now, capable of propelling the vehicle at low speeds.
But if you're trying to build an actual electric vehicle, you don't want a small battery, as that puts you on the worst part of the discharge-rate and battery-life curve. A larger battery in the same vehicle reduces the "C" rate and is much kinder on the battery, plus it gives you much more power to work with and the ability to handle a much higher charge rate.
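Rough illustrative numbers (the pack sizes are made up, and C rate is approximated here as power draw divided by pack energy):

    # Same 50 kW burst drawn from packs of different sizes:
    for pack_kwh in (1.3, 8.8, 75):      # tiny hybrid, plug-in, big EV
        c_rate = 50 / pack_kwh           # approx. C rate (kW per kWh)
        print(f"{pack_kwh:5.1f} kWh pack -> {c_rate:4.1f}C")
    #   1.3 kWh -> 38.5C (brutal on the cells)
    #   8.8 kWh ->  5.7C
    #  75.0 kWh ->  0.7C (gentle; also supports far faster charging)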
Big battery wins, long-term. Small battery is a false economy except on hybrids.
I think a lack of appreciation of this (and the lack of availability of inexpensive batteries) has hampered a lot of carmakers. Address that (super cheap batteries, so you can put a 500-mile battery in if you want), and virtually every "problem" with electric cars goes away or is dramatically reduced.
> Toyota got into autonomous driving late. Then they hired Gil Pratt from MIT. Pratt used to run the MIT Leg Lab, which Raibert ran before Boston Dynamics, but Pratt didn't do much there.
They've hired other folks, too, who have real-world autonomous vehicle experience.
Google/Waymo are definitely leading the pack, but I don't think we'll see Level 5 in a production vehicle for at least five years, probably closer to 10.
I read the recent article about their test facility (https://www.theatlantic.com/technology/archive/2017/08/insid...), but 1. It's still California weather and 2. I've routinely seen worse intersections. What do you do in a rotary where you have to cut across multiple lanes of traffic to exit? If the autonomous vehicle waits for another car to give way, it will be there forever.
> What do you do in a rotary where you have to cut across multiple lanes of traffic to exit? If the autonomous vehicle waits for another car to give way, it will be there forever.
What do you do in a multi-lane roundabout with trams going through it, and construction happening? (An actual case from Bremen, Germany.)
There are so many situations they're not even considering.
Google/Waymo is building a Schönwetterauto ("fair-weather car"), in both the metaphorical and the literal sense.
I, for one, think the hydrogen instead of EV gamble will pay off big time. Especially in Japan where they just can't produce enough electricity to go full EV, what with shutting down most of their nuclear plants. H2 is a much more realistic zero emission energy carrier for them.
Also, as electricity demand and price goes up, and gas demand and price goes down, oil companies will quickly pivot to producing H2 by steam methane reforming with carbon capture and storage, to continue the returns on their billions of dollars of investments.
who cares if toyota doesn't make battery electric cars in mass yet? what works best in america today is hybrids, and they were ahead of the game there. probably the best decision they made was to go all in on hybrid before battery-only. they know range is super important and the mirai can go farther than almost any electric car.
i really don't see autonomous driving as much of a race when the "race" is going to take 20-30 years to complete. cycles that long give everyone a chance to catch up and leapfrog each other
> [Toyota] don't make battery electric cars. (They're fixing that, but won't be shipping until 2022.)
2022 is a good year if they come up with good tech. At the moment the market for EVs is small, the tech is new, and only Tesla really shows off something (they have to, because EVs are their only business). All big car makers are researching, from new diesels (double-injector SCRs) to EVs & autonomous driving. And if they don't (or go too slow for some), then tier-1 suppliers like Bosch & Continental are doing it.
> Toyota has been making some bad business decisions lately. They don't make battery electric cars. (They're fixing that, but won't be shipping until 2022.[1]) Instead, they've been pushing cars that run on hydrogen. Toyota sells the Mirai in California, and has a few hydrogen stations so it can be refueled. They sell about a hundred cars a month.
I can't speak to their autonomous technology, but Toyota has always been a leader in electric vehicles, from the early hybrids, to partnering with Tesla, to impressive long-term improvements in fuel cells even after the U.S. government moved on to batteries.
Arguably Toyota has more electric-vehicle prowess than any other company. It is relatively small potatoes to change the energy source from open-system cells (fuel cells) to closed cells (batteries), especially when many of the battery manufacturers are also Japanese.
Something that I think is sometimes missed is that the insurance companies along with major freight haulers may drive this more than the public at large. Self driving systems don't have to be perfect; they just have to be better than humans, and that may be attainable sooner than the naysayers imagine. We're still north of 30k auto deaths per year in the U.S. alone - I'd bet money that if you waved a magic wand and replaced every car on the road today with an autopilot Tesla, traffic deaths would fall off a cliff.
It is quite a logical thing though: 90% autonomous driving is not full autonomous driving. And the first 90% could very well be within reach of our software capabilities at the moment (or maybe 80%, but that doesn't change the discussion much).
You can only release it when you reach 100%, and if that last fraction takes the next 50 years then we really are not even close.
Seems like the curse of the 80/20 rule. Cars that can drive themselves, monitor their surroundings using cameras, react to unpredictable moving obstacles, etc. are pretty impressive. That's the 80%. The remaining 20% is about things like bad weather (especially snow, which can completely cover road markings and signs) and "no win" situations. It's not hard to see getting to a close-enough-to-perfect system taking the other 80% of the time.
Pratt says, "Historically human beings have shown zero tolerance for injury or death caused by flaws in a machine"
Now imagine an autonomous car that had an accident rate one fifth the rate of human drivers. Would society really decide not to allow such a car on the road?
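Putting rough numbers on that (illustrative only, using the north-of-30k US figure mentioned upthread):

    # A fleet-wide fatality rate of one fifth the human rate:
    deaths_now = 35_000                 # approx. annual US road deaths
    deaths_av = deaths_now / 5          # hypothetical all-AV fleet
    print(f"lives saved per year: {deaths_now - deaths_av:,.0f}")  # 28,000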
I will start paying attention to self-driving hype when they make cars that can drive themselves in an Alaskan snowstorm. Until then it's just a California-only decoy for companies that make their money on shit smart people don't want to work on.
"We've learned and struggled for a few years here
figuring out how to make a decent phone, PC guys
are not going to just figure this out.
They're not going to just walk in."
- Palm's CEO Ed Colligan on the persistent rumors
that Apple will be introducing an Apple phone
in the near future (2006)