
> AI with good enough training

This is the crux of the issue. If we had AI that could outperform human drivers and make our roads safer then I'd agree that we should use it. But I see no indication that this is the case, or that it's even technologically possible at this point.

Tesla can make whatever claims they want, but until their self driving systems have been independently and rigorously tested they shouldn't be anywhere near public roads. Have such tests happened yet? The article doesn't mention any, so I assume that they haven't.




> The Tesla FSD system on the other side aims to be a generic driving AI. ... can it ever get so reliable, that the driver can completely hand over to car ever?

Using the current machine learning algorithms? Not likely. Imagine releasing a new airplane and telling the pilot: "This airplane flies well in 99% of scenarios, but 1% of the time it refuses to respond to pilot commands and kills everyone on board. We don't know why it happens, but we'll keep training the model." That's the state we're currently at with "AI"; as someone put it, it's alchemy. If it works, we don't know why. If it doesn't, we don't know why, but we tune the parameters until it seems to work. We cannot prove it will work for every possible input. ... Not exactly a scientific approach. Such self-driving cars might be accepted in the USA, but I imagine they'd quickly get banned in Europe after a few random fatalities.


> AI is already as good as human drivers

But the AI drives slowly and gets confused easily. Regular drivers routinely have to go around self-driving cars. Not to say they won't improve, but it seems like current AI is assistive to the point where it might be harmful when drivers rely on it at speeds and in situations where they shouldn't. I'm sure it will keep improving, but this feels like one of those situations where the amount of data and training required, and the amount of software iteration required to handle edge cases, is not impossible but exceptionally difficult.


> I disagree; they appear to need general spacial and physics AI, but they don’t need general linguistic or emotional intelligence.

I don't think it will require general AI, but it will require a pretty good model of what is in the mind of each person on the road, especially pedestrians and cyclists. Simulating other minds in general requires general AI, but maybe we can get away with a facsimile.

I say that because self-driving needs to adapt to unpredictable things happening on the road, and this might require a higher-level understanding of other people's intentions.

For example, when I'm riding my bicycle, I have a rule that if a car stops for me to cross, I need to establish visual contact with the driver first. It's too dangerous otherwise. Can AI establish visual contact with me, or offer some other signaling to pedestrians and cyclists? (Granted, this specific example is simple; I want a light on top of self-driving cars that will reliably signal things like "I stopped for you to cross".)

Another situation is when the road is partially used by businesses and people, and partially by cars. Some roads are pedestrian-first, and it may be hard even for a human driver unfamiliar with their dynamics to traverse them.

People keep saying that everything about driving can be done with "dumb" AI using essentially iterative current methods. I used to think that way, but it isn't working out today. Right now, AI is severely overpromising and overstretching itself. Maybe this will lead to an AI winter and we will only get self-driving cars with new methods. Or maybe, after some years, we will get good enough self-driving cars.


> than your home computer can handle, much less the components in Tesla cars.

Ehm, a recent Tesla has specialized hardware, developed in-house, that is much more powerful than a commodity home PC with expensive GPUs. You might want to do some basic research before making such claims...

> The system isn't AI based... It's currently based on static programming

A large neural network is used to make many of the decisions, as well as to run perception on the sensor/camera inputs. It's unclear what you mean by "AI", but neural networks are usually covered by the term "AI" among researchers and engineers in this field.

> may never be flawlessness enough to make error free updates

It seems you are new to this argument, because this point has been rebutted many times before, but here goes: it does not have to be flawless; it just has to statistically cause fewer accidents per kilometer per year than human drivers cause in similar situations (a rate that is already non-zero, and in some countries quite high).
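A minimal sketch of what that statistical comparison looks like. The crash counts and mileages below are made-up numbers purely for illustration, not real fleet data:

```python
# Compare crash rates per million km for a hypothetical AV fleet vs a
# human baseline. All figures below are invented for illustration.
def crashes_per_million_km(crashes, km_driven):
    return crashes / (km_driven / 1_000_000)

human_rate = crashes_per_million_km(crashes=4_200, km_driven=1_500_000_000)
av_rate = crashes_per_million_km(crashes=18, km_driven=10_000_000)

print(f"human: {human_rate:.2f} crashes / million km")
print(f"AV:    {av_rate:.2f} crashes / million km")
print("AV statistically safer" if av_rate < human_rate else "AV not yet safer")
```

In practice the hard part is getting comparable denominators: AV miles are often logged on easier roads in good weather, so a naive rate comparison can flatter the machine.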


> roads have deteriorated to the point where it can be difficult to spot road markings even as an experience driver

Yes, but apparently self-driving systems are doing a worse job than humans. That only means the AI isn't quite there yet, though. I say AI, not sensors, because their vision is probably already better than a human's.


> City streets will never be unless we get AI with general intelligence.

Driving is likely a much easier problem than artificial general intelligence, even when it involves evading rampaging toddlers. Sure, if you want perfect safety, you'll need the car to anticipate a great deal, far more than humans currently do, like watching the walkways as well as the drive lanes.

If we merely want something that's safer than a human driver, that's probably nothing more than a (huge) engineering feat.


> “Driving” is solved. Driving with humans on the road - doing unpredictable human things - is far off still.

Plus there are serious questions about liability with self-driving cars, still unresolved in most of the world. If the goal is to have vehicles operate themselves with no human supervision, who goes to jail when they kill someone? Despite all of the progress that's been made with AI, it's mostly been in low-stakes problems where failure isn't a big deal, so we don't have a consensus on what to do when a neural network negligently obliterates a person because some logistics company wanted to save a few bucks on driver salaries.


> vehicle actually being used by hundreds of thousands of people and is slowly incrementally improving their self driving with massive amounts of feedback and data

Throwing data at the problem isn't going to solve it. Only people without expertise in AI think that's how it works.


>Using the current approach, we can create AIs with the ability to do any task at the level of the best humans — and some tasks much better. Achieving this requires training on large amounts of high-quality data.

Wrong. Full self driving is still not here despite access to a huge amount of high quality data.


> instead of just fencing off the highways

Well, if you did that, you probably wouldn't even need AI for the control system. If you know the parameters and have some control over unknowns, you want a deterministic control system. "AI" will hopefully handle the cases where you are in an uncontrolled environment. Current "AI" doesn't seem quite up to the task of self-driving. It will probably get there, but when is anyone's guess.


> AI is not capable of explaining why it makes a decision that it does.

So, yes, this is true, but it's not the full story. We don't yet have neural networks that can explain their reasoning, but what we CAN do is a lot of introspection on the network. There is an entire field called AI explainability that probes the network in various ways to help humans understand what is happening. Remember that you have total control over the network: you can run inference thousands of times, run pieces of the network, or feed test data into it.

I am a casual observer of the field but I see this "AI can't explain itself" thing thrown around a lot by people who don't know about the extensive research being done in explainability.
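For the curious, here is a minimal sketch of one such probe, occlusion sensitivity: mask patches of the input and measure how much the model's score drops. The `model` here is a toy stand-in scoring function I made up, not any real framework API:

```python
import numpy as np

def occlusion_map(model, image, patch=4):
    """Zero out each patch of the image and record the score drop."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - model(occluded)
    return heat  # high values mark regions the score depends on

# Toy "model": score is the mean of the top-left quadrant only.
toy_model = lambda img: float(img[:8, :8].mean())
heat = occlusion_map(toy_model, np.ones((16, 16)))
print(heat)
```

Running this on the toy model lights up exactly the top-left quadrant, which is the kind of sanity check these probes are used for on real networks.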

Also Tesla has a massive testing infrastructure that checks their network for regressions. So they will know if it suddenly starts failing in some area before they release it. Obviously this is new and complex tech so it is not perfect.

But I think self driving is important for their business, and they are probably investing heavily in both batteries and AI. And fully self driving electric taxis could eliminate the need for many people to own an ICE car at all.


> How is it that humans can learn to drive a car in about 20 hours of practice with very little supervision, while fully autonomous driving still eludes our best AI systems trained with thousands of hours of data from human drivers?

Wow, are these so-called AI scientists really that daft?

Sure, on paper we drive 20-40 hours "practicing" before hitting the roads, but we've been back-seat driving and driving via video games or TV since the day we were born.

While a 2-year-old doesn't know the intricacies of driving, my 3-year-old can definitely yell, "Hey dad, the light's red, slow down."

Fully immersive life experience. Perhaps AIs need to be put into a real-world "birth" simulation (perhaps we already are this experiment) to learn as they grow.

The problem I see is the narrowness: if you're training on a narrow subset, you're going to get narrow results. I don't know the best way of doing it, but you need to start thinking of an AI's "brain" like a child's, and how it absorbs things. The human brain is remarkable, sure, but I don't doubt it can be duplicated in silicon.


> When driving AI has been produced that can provably outperform me in the rain, in ice and snow, and on the track, I'll consider stepping aside. That day hasn't arrived and isn't likely to any time soon.

This is a different argument from the one in the article. You are arguing that you will outperform the AI and therefore should remain the driver. And you want proof to change your mind.

Both are reasonable and I agree.

The author isn't demanding proof that it is statistically better before switching. They object to giving up control and view that as a moral imperative, "no matter how “safe” the computerized features are."


> Human drivers rely on hundreds of hours of training to safely and reliably handle most traffic situations

Human training is different from AI training; they are so different that the words should not be the same. The difference is that humans see things as they are, not as they appear. If a human learns to drive only during the day and then, after their training, has to drive at night, they will still recognize the traffic lights and stop signs; an AI will not. Moreover, a human really doesn't need hundreds of hours to learn to drive. I was driving at 5 years old, in a toy Jeep with a top speed of maybe 10 miles an hour. I knew not to drive into the flower garden or hit the dog. In many states, you can drive legally as early as 14, and historically many farm kids started younger than that with no training to speak of. When your vehicle is a slow tractor, it's not nearly as dangerous.

> We also get brain farts or fall asleep at the wheel and flatten the occasional pedestrian.

Great, this is something an algorithm that mimics the human brain would be able to improve upon.

> According to the National Safety Council, the chances of dying from a motor vehicle crash is 1 in 103

Over what time frame, presumably a lifetime of about 75 years? A fairly meaningless and alarmist statistic; better to talk about the roughly 1.1 fatalities per 100 million vehicle miles.
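A back-of-the-envelope conversion shows the two statistics are roughly consistent. The annual mileage figure below is an assumption for illustration, not official data:

```python
# Convert a per-mile fatality rate into approximate lifetime odds.
# Assumed: ~1.1 fatalities per 100 million vehicle miles, and an average
# of ~13,500 miles per year over a 75-year lifetime.
rate_per_mile = 1.1 / 100_000_000
lifetime_miles = 13_500 * 75              # ~1 million miles
lifetime_risk = rate_per_mile * lifetime_miles

print(f"lifetime risk ~ {lifetime_risk:.4f} (about 1 in {1 / lifetime_risk:.0f})")
```

That comes out to about 1 in 90, in the same ballpark as the quoted 1 in 103, so the two figures describe the same reality; the per-mile rate is just the more useful framing for comparing drivers.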


> it is an almost acceptable introduction for the layman to the divergence between deterministic, classic AI and in-focus, out of ANN functions AI.

I agree with that. And if that were all the article did, there would be nothing wrong with it.

But it goes further and makes sweeping statements about the state of self driving cars in general.

It talks about using machine learning to classify stop signs, and then writes: "Similar techniques are used to train self-driving cars to operate in traffic."

Which might be true for some systems, but not for others.

If you read this article, you would think that present-day self-driving cars are just a bunch of machine learning hooked up to a steering wheel. There are systems like that, but many people question their safety. Most other systems follow a hybrid approach, where machine learning components are surrounded by old-fashioned robotics, leveraging the strong suits of both approaches.


>This means autonomous vehicles might make a mistake that is practically impossible for a human to make.

Exactly—the mistakes are newsworthy because they're unusual (both in specifics and the general novelty of "AI did X"), and not because (exaggerations aside) they're categorically any worse than the mistakes humans make.

We are 100% guaranteed that humans aren't perfect drivers. A headline where we "discover" an AI is imperfect, therefore, doesn't let us conclude that AI is worse than human drivers.

On the other hand, autonomous systems can benefit from direct learning transfer through shared software, whereas humans cannot. In this respect AI is categorically superior to human drivers.


>I'd love to know what the sticking points are

Probably the long tail of unique problems that any autonomous system in an open environment faces.

It's kind of the crux of all these ML-based systems. By definition you can't learn what's totally new and absent from the data, but that is precisely where the intelligence lies when it comes to human drivers. Which is why I think genuine self-driving, without geofencing or similar crutches, is almost an AI-hard problem.


> Coverage area doesn't somehow require intelligence

Coverage area is required for practical self-driving solutions, and today's ML based efforts are not close to producing it.

If you measure intelligence in this context by the ability to drive on any roads anywhere, then humans are beating the crap out of "AI".


> The problem is the machine being able to predict what pedestrians and other drivers are going to do.

That's something machine learning can actually do pretty well. There has been a lot of research on this topic, and not just applied to self-driving: natural language generation, stock prediction, multi-agent game playing, and even content recommendation all predict the future from past information.
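As a toy illustration of the framing (real systems use learned models; this constant-velocity baseline is just the simplest possible predictor of where a pedestrian will be next):

```python
# Constant-velocity prediction: extrapolate a pedestrian's next position
# from two past observations. A deliberately minimal baseline, not how
# any production stack actually works.
def predict(p_prev, p_now, steps=1):
    vx = p_now[0] - p_prev[0]
    vy = p_now[1] - p_prev[1]
    return (p_now[0] + steps * vx, p_now[1] + steps * vy)

# Pedestrian moving +1 m per tick along x:
print(predict((0.0, 0.0), (1.0, 0.0), steps=3))  # → (4.0, 0.0)
```

Learned predictors earn their keep exactly where this baseline fails: a pedestrian hesitating at a curb, or a cyclist glancing over their shoulder before merging.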
