To engage with the substance a bit more, his argument is that perfection is a long way off:
> “Historically human beings have shown zero tolerance for injury or death caused by flaws in a machine,” Pratt said. “As wonderful as AI is, AI systems are inevitably flawed… We’re not even close to Level 5. It’ll take many years and many more miles, in simulated and real world testing, to achieve the perfection required for level 5 autonomy.”
I can believe that, but I disagree that we need to be there for it to be useful. Level 4 is enough for large-scale deployments.
Lots of people make this argument. I have a counter-argument: when a technology is convenient or appealing enough, people will overlook even very obvious safety and health problems. Consider texting while driving, drinking, smoking/vaping, even driving itself. Cognitively, we know that plenty of the things we do are dangerous, but we do them anyway out of habit or convenience. The "zero tolerance" claim seems to assume that every person is a run-5k-a-day, celery-eating, take-the-bus machine. That's far from the case. We already do tons of things knowing they're bad and risky.
I think it's far more likely that adoption happens in three steps: first it's early adopters and everyone else is like "whoa, that's too far" (I think this has already happened); then people start to uneasily use it; and once it's good enough they realize "oh holy crap, the machine can drive while I watch Netflix! this is awesome" and use it constantly. Soon it just blends into the background. Maybe it isn't 100% safe, but what is?
I don't think you'll get the truth out of focus groups, either. Hold a focus group on self-driving car adoption and 99% of the people will tell you "only when it's perfect." Ask the same focus group if they text/eat/put on makeup while driving, and 99% of them will tell you "absolutely not" when I'm sure that 100% of them do. I think that once a good-enough assist gets into people's hands, it won't stop.
For a real-life example, just look up "Tesla autopilot almost crash" on YouTube. There are plenty of people who let Autopilot drive when they really shouldn't.
[0] http://corporatenews.pressroom.toyota.com/article_display.cf...