
> it is an almost acceptable introduction for the layman to the divergence between deterministic, classic AI and in-focus, out of ANN functions AI.

I agree with that. And if that were all the article did, there would be nothing wrong with it.

But it goes further and makes sweeping statements about the state of self driving cars in general.

It talks about using machine learning to classify stop signs, and then writes: "Similar techniques are used to train self-driving cars to operate in traffic."

Which might be true for some systems, but not for others.

If you read this article you would think that present-day self-driving cars are just a bunch of machine learning hooked up to a steering wheel. There are systems like that, but many people question their safety. Most other systems follow a hybrid approach, where machine learning components are surrounded by old-fashioned robotics, leveraging the strong suits of both approaches.
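
Roughly, a hybrid stack divides the labour like this. Every name and threshold below is hypothetical, chosen only to illustrate the split between the learned part and the hand-written part, not taken from any real system:

```python
# A hybrid pipeline in miniature: a learned perception model feeds a
# hand-written, rule-based planner. All names and thresholds are made up.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # e.g. "stop_sign", "pedestrian"
    distance_m: float  # estimated distance to the object


def perceive(camera_frame) -> list[Detection]:
    """Placeholder for the machine-learning part (e.g. a trained CNN detector)."""
    raise NotImplementedError("swap in a real model here")


def plan(detections: list[Detection], speed_mps: float) -> str:
    """Classical, auditable decision logic wrapped around the ML output."""
    for d in detections:
        if d.label == "pedestrian" and d.distance_m < 20:
            return "brake"
        if d.label == "stop_sign" and d.distance_m < speed_mps * 3:
            return "brake"
    return "maintain_speed"


print(plan([Detection("stop_sign", 15.0)], speed_mps=10.0))  # -> "brake"
```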




> That car driving itself is not one big AI brain.

> Instead of a whole toolchain of specialized models, all connected with normal code — such as models for computer vision to find and identify objects, predictive decision-making, anticipating the actions of others, or natural language processing for understanding voice commands — all of these specialized models are combined with tons of just normal code and logic that creates the end result — a car that can drive itself.

Or, as I like to put it: what we now call "AI" actually refers to the "dumb" part (which does not mean easy or simple!) of the system. When we speak of an intelligent human driver, we do not mean that they are able to differentiate between a stop sign and a pigeon, or understand when their partner asks them to "please stop by the bakery on the way home" -- we mean that they know what decision to take based on this data in order to have the best trip possible. That is, we refer to the part done with "tons of normal code", as the article puts it.

Needless to say, I am not impressed by the predictions of "AI singularity" and whatever other nonsense AI evangelists try to make us believe.


>> I'm curious why author decided to completely disregard reinforcement learning.

Most probably because the vast majority of its applications are in game-playing, not human-level AI (as in general intelligence, capable of tackling free-form tasks in the real world), which is the subject of the article.

To substantiate this intuition further:

a) Agents playing arcade games, Go, chess, etc, are still constrained in "blocks worlds", even if they can beat humans in those worlds.

b) To my knowledge, self-driving car systems actually deployed are not trained using reinforcement learning, or even deep learning (for anything other than vision) but instead make use of the SLAM approach discussed in the article. I might be wrong on this and would welcome a correction.
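
For a flavour of what that classical pipeline builds on, here is a toy one-dimensional Kalman-filter localization step. The numbers are invented; this only illustrates the predict-and-correct loop that SLAM-style localization rests on, not any deployed system:

```python
# Toy 1-D Kalman filter: fuse odometry (prediction) with a position measurement.

def kalman_step(x, p, u, z, q=0.1, r=0.5):
    """x: position estimate, p: variance, u: odometry motion, z: measured position."""
    # Predict: move by odometry, uncertainty grows.
    x_pred = x + u
    p_pred = p + q
    # Update: blend the prediction with the sensor measurement.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new


x, p = 0.0, 1.0
for u, z in [(1.0, 1.1), (1.0, 2.05), (1.0, 2.9)]:
    x, p = kalman_step(x, p, u, z)
    print(f"estimate={x:.2f}, variance={p:.3f}")
```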


> I disagree; they appear to need general spacial and physics AI, but they don’t need general linguistic or emotional intelligence.

I don't think it will require general AI, but it will require a pretty good model of what is in the mind of each person on the road, especially pedestrians and cyclists. Simulating other minds in general requires general AI, but maybe we can get away with a facsimile.

I say that because self-driving needs to adapt to unpredictable things happening on the road, and this might require a deeper understanding of the intentions of other people.

For example, when I'm riding my bicycle, I have a rule that if a car stops for me to cross, I need to establish visual contact with the driver first. It's too dangerous otherwise. Can an AI establish visual contact with me, or use some other signal to pedestrians and cyclists? (Now I see this specific example is simple; I want a light on top of self-driving cars that will reliably signal things like "I stopped for you to cross".)

Another situation is a road partially used by businesses and people, and partially used by cars. Some roads are pedestrian-first, and they can be hard to traverse even for a human driver unfamiliar with their dynamics.

People keep saying that everything about driving can be done with "dumb" AI, essentially by iterating on current methods. I used to think that way, but it isn't working out today. Right now, AI is severely overpromising and overstretching itself. Maybe this will lead to an AI winter and we will only get self-driving cars with new methods. Or maybe, after some years, we will get good-enough self-driving cars.


> with RL learning where the AI is free to explore new actions to collect data about how well they work

Self driving cars aren’t free to explore new actions. That would be frightening. Self driving cars use a limited form of AI to recognise the world around them, but the rules that decide what they do with that information are simple algorithms.


> I think a car is going to need a Theory-of-Mind to navigate complex social driving environments

As someone with a lot of experience in self-driving cars, my opinion has changed over the course of the last decade from "we can create smart enough models of these separate problems to create a statistically safer product" to "first you need to invent general AI."

It becomes immediately obvious as you encounter more and more edge cases: you would never even begin to think of them in advance, and you have no idea how to handle them (even by hard-coding individual cases, which you couldn't do anyway, as there are too many), so you realize the car actually has to be able to think. What's worse, it has to be able to think far enough into the future to anticipate anything that could end badly.

The most interesting part of self-driving cars is definitely on the prediction teams - their job is to predict the world however many seconds into the future and incorporate that into the path planning. As you can guess, the car often predicts the future incorrectly. It's just a ridiculously hard problem. I think the current toolbox of ML is just woefully, completely, entirely inadequate to tackle this monster.
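
To make the shape of the problem concrete, here is a deliberately naive sketch: roll other agents forward with a constant-velocity model and check the ego plan for conflicts. Everything in it is invented for illustration; real prediction stacks are far richer than this:

```python
# Naive prediction: constant-velocity rollout of other agents, then a simple
# proximity check against the ego plan. Illustrative only.

import math


def predict(pos, vel, horizon_s, dt=0.5):
    """Constant-velocity rollout of an agent's (x, y) position."""
    steps = int(horizon_s / dt)
    return [(pos[0] + vel[0] * k * dt, pos[1] + vel[1] * k * dt)
            for k in range(1, steps + 1)]


def conflicts(ego_path, agent_path, clearance_m=2.0):
    """True if any predicted agent position comes too close to the ego plan."""
    return any(math.dist(e, a) < clearance_m
               for e, a in zip(ego_path, agent_path))


ego = predict(pos=(0, 0), vel=(5, 0), horizon_s=4)           # ego heading east
cyclist = predict(pos=(20, -10), vel=(0, 2.5), horizon_s=4)  # crossing path
print(conflicts(ego, cyclist))  # -> True: the paths intersect within the horizon
```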


> AI is already as good as human drivers

But the AI drives slowly and gets confused easily. Regular drivers routinely have to go around self-driving cars. Not to say they won't improve, but it seems like current AI is assistive to the point where it might be harmful when drivers rely on it at speeds and in situations where they shouldn't. I'm sure it will keep improving, but I feel like this is one of those situations where the amount of data and training required, and the amount of iteration on the software required to handle edge cases, is not impossible but is exceptionally difficult.


"Fernandes explains that self-driving cars use multiple sensors and algorithms and don't make decisions on any single machine-learning model." For now, maybe, but, from what I have heard, there is push from the car manufacturers to consolidate on one type of sensor feed as much as possible, to minimize costs. Which implies just that - that the wanted state is one type of sensor, as cheap as possible + one model that is expected to tell the absolute truth. Which is stupid of course, but I assume that the cost-cutting guys are not aware / willfully ignorant of the pitfalls. On another note, the article conflates machine learning / neural networks with AI, I expected better of Wired...

> the idea that more is always better is the common misperception that i'm addressing

It's not a misperception in the case of machine learning. The system can easily discard data that isn't useful at a given point in time. When it is determined to be useful, it's great to have it there.

I'd suggest trying out some machine learning yourself to get a good understanding of why more data is better. Tutorials for Kaggle's Titanic competition can be a good primer.
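
If you don't feel like downloading the Titanic data, a synthetic learning curve shows the same effect. This is just a self-contained sketch with scikit-learn; the dataset and model are stand-ins, not a recommendation:

```python
# Learning curve on synthetic data: validation accuracy as training size grows.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
sizes, _, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:4d} training rows -> validation accuracy {score:.3f}")
```

On a healthy setup the validation score generally climbs as the training size grows, which is the "more data is better" effect in miniature.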

> if you've ever tried to design a system with multiple, sometimes conflicting inputs, complexity is a real cost to think about (not to mention financial cost).

Not when the system is using machine learning, particularly neural networks.

All the self driving car systems are using deep convolutional NNs, so the complexity of more data is helpful, not a hindrance.

> machine learning is cool, but it's not a panacea. there are plenty of meticulously trained AI's that fail comically in novel situations (like the example in the original article)

The failure in the article was due to lack of diverse sensor input, not because the system was built using machine learning.

You're right that machine learning isn't a panacea. The limits aren't well defined. Still, in the case of self driving cars, deep learning is state of the art. Nobody is hand-coding rules for how to drive the car. It's all about having as much good data as you can gather.
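
For a sense of what "deep convolutional NN" means in practice, here is a minimal classifier skeleton. The architecture and sizes are arbitrary and not taken from any production system:

```python
# Minimal convolutional classifier of the kind used for sign recognition.
# Layer sizes are arbitrary; this is an illustration, not a production model.

import torch
import torch.nn as nn


class SignClassifier(nn.Module):
    def __init__(self, num_classes=2):  # e.g. stop sign vs. not
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))


model = SignClassifier()
logits = model(torch.randn(1, 3, 64, 64))  # dummy 64x64 RGB image
```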


> This article is about cars driving in simulation, not on the real streets. But I do wonder how they can introduce truly "unpredictable" events in the simulations.

Like normal software. Requirement: navigate this road with these signs. Test: place the same signs as in the requirement and check that the car navigates correctly.

> I don't know anything about self driving cars, but it seems like solving self driving cars isn't far off having general artificial intelligence. How well does a car react when there is a construction flagger telling it to turn around, or when there is no signs and the traffic lights go down due to a power failure?

It either continues or stops. When the traffic lights go off it uses the computed right of way.
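
Something like this, in spirit. The rule set below is heavily simplified and entirely made up; real right-of-way logic is jurisdiction-specific:

```python
# Sketch of a hard-coded fallback: lights dark -> treat the junction as an
# all-way stop, proceed in arrival order, yield to the right on ties.

# For a vehicle approaching from X, the road on its right comes from RIGHT_OF[X].
RIGHT_OF = {"south": "east", "east": "north", "north": "west", "west": "south"}


def may_proceed(ego, others):
    """ego/others are dicts like {"arrived_s": 3.2, "approach": "south"}."""
    for o in others:
        if o["arrived_s"] < ego["arrived_s"]:
            return False  # they arrived first
        if o["arrived_s"] == ego["arrived_s"] and o["approach"] == RIGHT_OF[ego["approach"]]:
            return False  # tie: yield to the vehicle on the right
    return True


print(may_proceed({"arrived_s": 2.0, "approach": "south"},
                  [{"arrived_s": 3.5, "approach": "east"}]))  # -> True
```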

> Also, my recent experience trying to order a chipotle burrito from an AI assistant over the phone did not leave me too dazzled with the state of the AI industry.

It's the same for self-driving cars. It is like any software designed to a set of requirements: feed it something not specified in the requirements and watch it crash or spit out an error message.


>Using the current approach, we can create AIs with the ability to do any task at the level of the best humans — and some tasks much better. Achieving this requires training on large amounts of high-quality data.

Wrong. Full self driving is still not here despite access to a huge amount of high quality data.


> AI with good enough training

This is the crux of the issue. If we had AI that could outperform human drivers and make our roads safer then I'd agree that we should use it. But I see no indication that this is the case, or that it's even technologically possible at this point.

Tesla can make whatever claims they want, but until their self driving systems have been independently and rigorously tested they shouldn't be anywhere near public roads. Have such tests happened yet? The article doesn't mention any, so I assume that they haven't.


>This means autonomous vehicles might make a mistake that is practically impossible for a human to make.

Exactly—the mistakes are newsworthy because they're unusual (both in specifics and the general novelty of "AI did X"), and not because (exaggerations aside) they're categorically any worse than the mistakes humans make.

We are 100% guaranteed that humans aren't perfect drivers. A headline where we "discover" an AI is imperfect, therefore, doesn't let us conclude that AI is worse than human drivers.

On the other hand, autonomous systems can benefit from direct learning transfer through shared software, whereas humans cannot. In this respect AI is categorically superior to human drivers.


> vehicle actually being used by hundreds of thousands of people and is slowly incrementally improving their self driving with massive amounts of feedback and data

Throwing data at the problem isn't going to solve it. Only people without expertise in AI think that's how it works.


> How is it that humans can learn to drive a car in about 20 hours of practice with very little supervision, while fully autonomous driving still eludes our best AI systems trained with thousands of hours of data from human drivers?

Wow, are these so-called AI scientists really that daft?

Sure, on paper we drive 20-40 hours "practicing" and then hit the roads, but we've been back-seat driving and driving via video games or TV since the day we were born.

While a 2-year-old doesn't know the intricacies of driving, my 3-year-old can definitely yell, "Hey dad, the light's red, slow down."

Fully immersive life experience. Perhaps AIs need to be put into a real-world "birth" simulation (perhaps we already are this experiment) to learn as they grow.

The problem I see is the narrowness: if you're training on a narrow subset, you're going to get narrow results. I don't know the best way of doing it, but you need to start thinking of an AI's "brain" like a child's and how it absorbs things. The human brain is remarkable, sure, but I don't doubt it can be duplicated in silicon.


That article’s entire premise falls apart based on a single line you skipped over: "If your car can operate only on a few roads, then it's no better than one of those people-mover trains at airports."

Coverage area doesn’t somehow require intelligence, and these systems operate on far more than a few roads. The same hardware and software could operate on every road in the US, at which point it would ‘suddenly be intelligent’ even though nothing changed.

Look, people always say that whatever AI systems can do isn't intelligent as soon as they can do it. Chess used to be considered a bastion of human supremacy over machines, then Go, and now the last gasps of self-driving.

In 20 years people will say self-driving cars aren't intelligent because they don't work in North Korea, even as young people slowly stop learning how to drive, the same way humanity largely forgot how to drive a horse and buggy.


> Human drivers rely on hundreds of hours of training to safely and reliably handle most traffic situations

Human training is different from AI training; they are so different that the same word shouldn't be used for both. The difference is that humans see things as they are, not as they appear. If a human learns to drive only during the day and then, after their training, has to drive at night, they will still recognize the traffic lights and stop signs; an AI will not. Moreover, a human really doesn't need hundreds of hours to learn to drive. I was driving at 5 years old, in a toy Jeep with a top speed of maybe 10 miles an hour. I knew not to drive into the flower garden or hit the dog. In many states, you can drive legally as early as 14, and historically many farm kids started younger than that with no training to speak of. When your vehicle is a slow tractor, it's not nearly as dangerous.

> We also get brain farts or fall asleep at the wheel and flatten the occasional pedestrian.

Great, this is something an algorithm that mimics the human brain would be able to improve upon.

> According to the National Safety Council, the chances of dying from a motor vehicle crash is 1 in 103

Over what time frame, presumably a lifetime of about 75 years? A fairly meaningless and alarmist statistic; it's better to talk about the 1.1 fatalities per 100 million vehicle miles.
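
Back of the envelope, the two numbers are consistent if you assume something like 13,500 miles per year over roughly 65 driving years (my assumption, not the NSC's):

```python
# Back-of-the-envelope check that the two statistics are in the same ballpark.
# Lifetime mileage is an assumption (about 13,500 miles/year for ~65 years).

per_mile_rate = 1.1 / 100_000_000   # fatalities per vehicle-mile
lifetime_miles = 13_500 * 65        # assumed lifetime mileage

lifetime_risk = per_mile_rate * lifetime_miles
print(f"roughly 1 in {1 / lifetime_risk:.0f}")  # ~1 in 104, close to the quoted 1 in 103
```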


> The Tesla FSD system on the other side aims to be a generic driving AI. ... can it ever get so reliable, that the driver can completely hand over to car ever?

Using the current machine learning algorithms? Not likely. Imagine releasing a new airplane and telling the pilot: "this airplane flies well in 99% of scenarios, but 1% of the time it refuses to respond to pilot commands and kills everyone on board. We don't know why it happens but we'll keep training the model". That's the state we're currently at with "AI"; as someone put it, it's alchemy. If it works, we don't know why. If it doesn't, we don't know why - but we tune the parameters until it seems to work. We cannot prove it will work for every possible input. ... Not exactly a scientific approach by any means. Such self-driving cars might be accepted in the USA, but I imagine they'd quickly get banned in Europe after a few random fatalities.


>It's just following a list made for it, albeit a fairly substantial list.

You're operating on a completely archaic understanding of AI. Modern systems can and do adapt to situations on the fly. What's more, they can adapt in aggregate: the experiences gathered by a single car can be shared with all cars. Very quickly, every car in the fleet will have billions of hours of cumulative driving experience.

The overwhelming majority of motor vehicle accidents aren't weird and unpredictable edge cases. They're tragic but mundane events that result from a handful of root causes - inattention, excess speed, poor judgement and unnecessary risk-taking. "Driver/rider failed to look properly" is the key contributory factor in nearly half of road traffic accidents. Computers utterly dominate humans in this respect. For every accident caused by some bizarre and unpredictable set of circumstances, there are thousands caused by someone doing something obviously stupid.

A computer can be programmed to be ultra-cautious in difficult situations. A computer can maintain 100% vigilance 100% of the time. Humans can't. Self-driving cars will undoubtedly fail in new and unexpected ways, but it's abundantly clear that they'll fail much less often than humans.


>I'd love to know what the sticking points are

Probably the long tail of unique problems that any autonomous system in an open environment faces.

It's kind of the crux of all these ML-based systems. By definition you can't learn what's totally new and absent from the data, but that is precisely where the intelligence lies when it comes to human drivers. Which is why I think genuine self-driving, without fancy geofencing or whatever, is almost an AI-hard problem.

