>how many years of experience is needed to drive on public roads without supervision
If, as I strongly suspect, full self-driving requires artificial general intelligence, Waymo's algorithms will not get there no matter how long they run simulations or even how many real road tests they do.
>Using the current approach, we can create AIs with the ability to do any task at the level of the best humans — and some tasks much better. Achieving this requires training on large amounts of high-quality data.
Wrong. Full self-driving is still not here despite access to huge amounts of high-quality data.
This is the crux of the issue. If we had AI that could outperform human drivers and make our roads safer, then I'd agree that we should use it. But I see no indication that this is the case, or that it's even technologically possible at this point.
Tesla can make whatever claims they want, but until their self-driving systems have been independently and rigorously tested, they shouldn't be anywhere near public roads. Have such tests happened yet? The article doesn't mention any, so I assume they haven't.
> A human can learn to drive a car in tens of hours, while AIs still require millions of hours of training, and they still end up inferior.
I think it's a bit unrealistic to compare these numbers. A human learning to drive has usually had 16-18 years of learning how the world works before those 10 hours of transfer learning.
I'm a software engineer at Waymo, speaking for myself.
I agree with the premise of the article. Humans are remarkably good at driving, and making a computer better will be hard.
Where I may disagree with the author: I think the project is both feasible and worth doing.
It's worth doing because we drive a remarkable amount. Even at human safety level, more than a million people die every year in car crashes.
As far as feasibility, Waymo is, today, running a fully autonomous, no-human-behind-the-wheel ride-hailing service in Arizona. It's not a dead-end demo; it's live. I know, a Phoenix suburb is not the same as NYC or Mumbai. Trust me, I know -- probably better than you! Nonetheless, what we accomplished was impossible a few years ago.
The history of AI is a history of moving the goalposts and then blowing past them. AlphaGo solves what was considered one of the hardest problems in AI, but now that I see how you did it, eh, it's not that impressive. GPT-3 passes the Turing Test and can almost pass a coding phone screen, but come on, nobody really thought that a machine that passed TT would be intelligent. It was just a few years ago that respectable people were saying we'd never solve protein folding. Now that we have, the goalposts have moved again. Don't worry, we'll blow past those, too. Except this time people will yawn.
So give us some time. This too will happen slowly, and then all at once.
> I disagree; they appear to need general spatial and physics AI, but they don't need general linguistic or emotional intelligence.
I don't think it will require general AI, but it will require a pretty good model of what is in the mind of each person on the road, especially pedestrians and cyclists. Simulating other minds in general requires general AI, but maybe we can get away with a facsimile.
I say that because self-driving needs to adapt to unpredictable things happening on the road, and this might require a higher understanding of the intentions of other people.
For example, when I'm riding my bicycle, I have a rule that if a car stops for me to cross, I need to establish visual contact with the driver first. It's too dangerous otherwise. Can AI establish visual contact with me, or have some other way of signaling to pedestrians and cyclists? (Now I see this specific example is simple; I want a light on top of self-driving cars that will reliably signal things like "I stopped for you to cross.")
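A minimal sketch of what such a signaling protocol could look like, purely as a thought experiment -- the states and trigger logic here are my own assumptions, not any real vehicle's interface:

    # Hypothetical roof-light protocol for an AV; all states and logic are
    # illustrative assumptions, not a real vehicle interface.
    from enum import Enum

    class RoofSignal(Enum):
        OFF = "off"
        YIELDING = "steady green"     # "I have stopped for you to cross"
        PROCEEDING = "pulsing white"  # "I am moving / about to move"

    def roof_signal(stopped_for_crossing: bool, moving: bool) -> RoofSignal:
        if stopped_for_crossing:
            return RoofSignal.YIELDING
        return RoofSignal.PROCEEDING if moving else RoofSignal.OFF

    # A cyclist waiting at a crossing would look for YIELDING before crossing.
    print(roof_signal(stopped_for_crossing=True, moving=False))  # RoofSignal.YIELDING

The point isn't the exact colors; it's that the signal has to be reliable enough that a cyclist can trust it the way they trust eye contact.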
Another situation is a road partially used by businesses and people, partially used by cars. Some roads are pedestrian-first, and it may be hard even for a human driver unfamiliar with their dynamics to traverse them.
People keep saying that everything about driving can be done with "dumb" AI, essentially by iterating on current methods. I used to think that way, but it isn't working out today. Right now, AI is severely overpromising and overstretching itself. Maybe this will lead to an AI winter and we will only get self-driving cars with new methods. Or maybe, after some years, we will get good-enough self-driving cars.
> Human drivers rely on hundreds of hours of training to safely and reliably handle most traffic situations
Human training is different from AI training; they are so different that the same word shouldn't be used for both. The difference is that humans see things as they are, not as they appear. If a human learns to drive only during the day and then has to drive at night, they will still recognize the traffic lights and stop signs; an AI will not. Moreover, a human really doesn't need hundreds of hours to learn to drive. I was driving at 5 years old, in a toy Jeep with a top speed of maybe 10 miles an hour. I knew not to drive into the flower garden or hit the dog. In many states you can drive legally as early as 14, and historically many farm kids started younger than that with no training to speak of. When your vehicle is a slow tractor, it's not nearly as dangerous.
> We also get brain farts or fall asleep at the wheel and flatten the occasional pedestrian.
Great, this is something an algorithm that mimics the human brain would be able to improve upon.
> According to the National Safety Council, the chances of dying from a motor vehicle crash is 1 in 103
Over what time frame -- presumably a lifetime of about 75 years? A fairly meaningless and alarmist statistic; better to talk about the 1.1 fatalities per 100 million vehicle miles.
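As a rough sanity check, the two numbers are actually consistent. Assuming (my figures, not the NSC's) about 13,500 miles driven per year over a 60-year driving life:

    # Rough sanity check: convert the per-mile fatality rate into lifetime odds.
    # The mileage assumptions below are illustrative, not NSC data.
    fatalities_per_mile = 1.1 / 100_000_000  # 1.1 deaths per 100M vehicle miles
    miles_per_year = 13_500                  # assumed average annual mileage
    driving_years = 60                       # assumed driving lifetime

    lifetime_miles = miles_per_year * driving_years   # 810,000 miles
    p_death = lifetime_miles * fatalities_per_mile    # ~0.0089
    print(f"about 1 in {1 / p_death:.0f}")            # ~1 in 112

That lands in the same ballpark as 1 in 103; the per-mile figure is just the more useful framing.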
But the AI drives slowly and gets confused easily. Regular drivers routinely have to go around self-driving cars. That's not to say they won't improve, but current AI seems assistive to the point where it might be harmful when drivers rely on it at speeds and in situations where they shouldn't. I'm sure it will keep improving, but this feels like one of those situations where the amount of data and training required, and the amount of software iteration needed to handle edge cases, is not impossible but is exceptionally difficult.
> City streets will never be solved unless we get AI with general intelligence.
Driving is likely a much easier problem than artificial general intelligence, even when it involves evading rampaging toddlers. Sure, if you want perfect safety, you'll need the car to anticipate a great deal, far more than humans currently do, like watching the walkways as well as the drive lanes.
If we merely want something that's safer than a human driver that's probably nothing more than a (huge) engineering feat.
> How is it that humans can learn to drive a car in about 20 hours of practice with very little supervision, while fully autonomous driving still eludes our best AI systems trained with thousands of hours of data from human drivers?
Wow, are these so-called AI scientists really that daft?
Sure, on paper we drive 20-40 hours "practicing" and then hit the roads, but we've been back-seat driving and absorbing driving via video games or TV since the day we were born.
While a 2-year-old doesn't know the intricacies of driving, my 3-year-old can definitely yell, "Hey dad, the light's red, slow down."
Fully immersive life experience. Perhaps AIs need to be put into a real-world "birth" simulation (perhaps we already are this experiment) to learn as they grow.
The problem I see is the narrowness: if you're training on a narrow subset, then you're gonna get narrow results. I don't know the best way of doing it, but you need to start thinking of an AI's "brain" like a child's and how it absorbs things. The human brain is remarkable, sure, but I don't doubt it can be duplicated in silicon.
Probably the long tail of unique problems that any autonomous system in an open environment faces.
It's kind of the crux of all these ML-based systems. By definition you can't learn what's totally new and absent from the data, but that is precisely where the intelligence lies when it comes to human drivers. Which is why I think genuine self-driving, without fancy geofencing or whatever, is almost an AI-hard problem.
> roads have deteriorated to the point where it can be difficult to spot road markings even as an experienced driver
Yes, but apparently self-driving systems are doing a worse job than humans. That only means the AI isn't quite there yet. I say AI, not sensors, because their vision is probably already better than a human's.
> with RL, where the AI is free to explore new actions to collect data about how well they work
Self-driving cars aren't free to explore new actions. That would be frightening. Self-driving cars use a limited form of AI to recognise the world around them, but the rules that decide what they do with that information are simple algorithms.
Well, if you did that, you probably wouldn't even need AI for the control system. If you know the parameters and have some control over the unknowns, you want a deterministic control system. "AI" will hopefully handle the cases where you are in an uncontrolled environment. Current "AI" doesn't seem quite up to the task of self-driving. It will probably get there, but when is anyone's guess.
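For what it's worth, here's a minimal sketch of what "deterministic control system" means in this context -- a proportional-derivative lane-centering loop, with made-up gains and a known environment:

    # Minimal deterministic controller: PD lane centering. The gains and the
    # scenario are illustrative assumptions, not production values.
    def pd_steering(offset_m, prev_offset_m, dt_s, kp=0.8, kd=0.3):
        """Steering command from lateral offset (meters) to lane center."""
        derivative = (offset_m - prev_offset_m) / dt_s
        return -(kp * offset_m + kd * derivative)  # steer back toward center

    # Vehicle 0.5 m right of center and drifting outward -> steer left.
    print(pd_steering(offset_m=0.5, prev_offset_m=0.4, dt_s=0.1))  # -0.7

Same input, same output, every time, and no training data involved. The hard part is everything outside those known parameters.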
> This article is about cars driving in simulation, not on the real streets. But I do wonder how they can introduce truly "unpredictable" events in the simulations.
Like normal SW (a sketch follows below):
Requirement: navigate this road with those signs.
Test: Put the same signs as in the requirement. Check that the car navigates correctly.
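A minimal sketch of what such a requirement-driven test could look like; the scenario format and the drive() stub are hypothetical, just to show the shape:

    # Hypothetical requirement-driven simulation test; the scenario format and
    # the drive() stub are made up to illustrate the pattern, not a real API.
    def drive(scenario):
        # Stand-in for the system under test: stop at every STOP sign it "sees".
        return {"stopped_at": [s for s in scenario["signs"] if s == "STOP"],
                "collisions": 0}

    def test_navigates_signed_road():
        scenario = {"road": "two_lane", "signs": ["STOP"]}  # same signs as the requirement
        result = drive(scenario)
        assert "STOP" in result["stopped_at"]               # car navigates correctly
        assert result["collisions"] == 0

    test_navigates_signed_road()

Which of course only proves the car handles what the requirement anticipated -- exactly the problem with truly "unpredictable" events.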
> I don't know anything about self-driving cars, but it seems like solving self-driving isn't far off having general artificial intelligence. How well does a car react when there is a construction flagger telling it to turn around, or when there are no signs and the traffic lights go down due to a power failure?
It either continues or stops. When the traffic lights go out, it uses the computed right of way.
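Something like this minimal sketch, assuming the common convention that a dark signal is treated as an all-way stop (the rule details are my assumption, not any vendor's logic):

    # Minimal sketch of a dark-signal fallback; the rules are illustrative,
    # assuming a dead traffic light is treated as an all-way stop.
    def action_at_intersection(light_state, has_right_of_way):
        if light_state == "green":
            return "continue"
        if light_state in ("yellow", "red"):
            return "stop"
        # Lights out: fall back to computed right of way (all-way-stop rules).
        return "continue" if has_right_of_way else "stop"

    print(action_at_intersection("off", has_right_of_way=False))  # stop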
> Also, my recent experience trying to order a chipotle burrito from an AI assistant over the phone did not leave me too dazzled with the state of the AI industry.
It's the same for self-driving cars. It's like any software designed against a set of requirements: feed it something not specified in the requirements and watch it crash or spit out an error message.
He learned the field and knows what can and cannot be done. Go look up the history of AI with Lisp, and then Deep Blue vs. Kasparov. Back then people believed a humanoid like Star Trek's Data was possible within our lifetime. If you had studied the AI of that era, you would know it was nearly impossible. The OP above you is right: based on current ML knowledge, full self-driving cars are impossible; assisted self-driving is more likely. The problem of self-driving isn't like chess or Go. There are no simplified rules. Humans and human needs are messy. Some new technology (say, super LIDAR++) or some new method (say, doubly deep organic learning) might solve the problem, but as of now those are not yet invented, or are at an extremely infant stage. One thing is for sure: they aren't current ML. Perhaps Waymo will get there, perhaps not. Many decades ago there were Tesla-like electric cars, but they weren't successful because of battery tech and cheap oil prices; they basically went out of business. Waymo may be like them: full of possibility, but the time isn't ripe yet. Just like Friendster and MySpace before FB.
> a vehicle actually being used by hundreds of thousands of people, slowly and incrementally improving its self-driving with massive amounts of feedback and data
Throwing data at the problem isn't going to solve it. Only people without expertise in AI think that's how it works.