
> I don’t think a simple “actually, no” is a useful counter argument to his point, especially when he used multiple examples and you just shook your head.

I shook my head because they were similar and equally bad.

> That’s not at all his specific argument about self-driving cars (there were entire sections about the decision making issues he saw), but is the crux of his argument about AI as a whole.

I was talking about the crux of his argument, which starts by saying that we won't get self-driving cars because <start of bad arguments>.

> And there’s a whole article discussing some of those, if you read the parts in the middle. I don’t think the dude’s necessarily right, but I’m also pretty sure you missed a lot of the text in an apparent race to summarily dismiss it.

In my complete reading, the article was geared towards discussing the impossibilities (rather than possibilities) from the POV of someone who hasn't taken any time to understand the cutting edge of research that he is critiquing. That was, in fact, my original point - critique is great, but don't mask personal intuition (potentially uneducated) as science.




> Self-driving [AI] has a tendency to go catastrophically wrong for impossible to determine reasons

I don't think that's true. I live in SF and am not aware of a single catastrophe owing to self-driving cars, let alone enough such catastrophes to form a statistical sample from which we could reliably infer tendencies.


>If autonomous cars can drive at the skill level of average drivers - and not get drunk or angry or distracted by forwarding cat videos - then they will significantly outperform humans.

As long as you use the word "if" you can say pretty much anything you want.

>Additional evidence in the form of reason... autonomous drivers have better sensors than humans. Human drivers have limited vision, that only points in one direction at a time.

That is not evidence. It is your own opinion. Nothing of the sort has been proven to work reliably.

>Autonomous drivers won't have that problem. They can see out the back and to the sides, 360 degrees at all times. (I'll note that some of the first deployed autonomous tech was cars that can parallel-park themselves.)

They can't "see". They can only sense what they have been programmed to sense. The envelope of what they can process is tiny. Sensors have flaws, and the humans programming those sensors have flaws. The human eye and visual cortex have been evolving for millions of years.

>I can go on and on, but really, I'm gonna throw a bullshit card here.

The biggest "bullshit" here is that you are claiming fantastical things that autonomous cars can do that have not been proven in the slightest. Implying that other people are stupid for being apprehensive about unproven tech is quite rich...


> This assumes that self-driving cars will be safer. I think this is an unrealistic idea.

I've seen this suggested a few times, and it makes me wonder if this is caused by religious beliefs or general pessimism.

The only way it is an unrealistic idea is if we assume that human intelligence cannot possibly be matched by a machine, and/or if we assume that progress towards general AI will be so slow that, for the intents and purposes of this debate, matching it will take a very long time.

The only thing that will stop us from eventually matching human intelligence with machines is if there is some supernatural soul necessary to match it. Even then, for that to stop us, said soul would need to be a necessary condition for making self-driving cars safer than humans, which sounds even more implausible to me, given the many advantages self-driving cars can obtain:

Additional views of the road. Benefiting from accumulated knowledge from billions of hours of driving. Potentially wireless information exchange with other automated cars in the vicinity (can't see through the fog very well? Well, maybe the 10 cars around you can fill in the blanks).
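The "cars filling in each other's blanks" idea can be sketched as a trivial confidence merge over shared obstacle reports; every name and number below is invented for illustration, not any real V2V protocol:

```python
# Hypothetical sketch: nearby cars pooling obstacle observations so one
# car's fog-degraded sensors are backed up by its neighbors' reports.

def merge_observations(own: dict, nearby: list[dict]) -> dict:
    """Combine obstacle maps, keeping the highest-confidence report per cell."""
    merged = dict(own)
    for report in nearby:
        for cell, confidence in report.items():
            if confidence > merged.get(cell, 0.0):
                merged[cell] = confidence
    return merged

# Our own sensors see little through the fog...
own_view = {"lane_ahead": 0.2}
# ...but two nearby cars report a stopped vehicle with high confidence.
others = [{"lane_ahead": 0.9}, {"lane_ahead": 0.85, "shoulder": 0.7}]

combined = merge_observations(own_view, others)
print(combined)  # {'lane_ahead': 0.9, 'shoulder': 0.7}
```

The point of the sketch is only that the merged view is never worse than any single car's view, which is the advantage the comment is gesturing at.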

I think it's a totally unrealistic idea that self-driving cars won't get to a safety level where human drivers will be outlawed on public roads.


> You know what driverless cars can't do? Redirect a passenger to the emergency room due to acute problems that occur in the car.

You state this as a fact with absolute confidence. That's interesting. Not only do I respectfully disagree, but I'm actually surprised you are so sure of yourself. It seems (to me) rather obvious that self driving cars are not a mature technology yet. In fact, they aren't even deployed widely. In that regard, of course it isn't a mature technology!

So I'd really appreciate it if you could help me understand how you arrived at that conclusion. Being so incredibly confident you must have a rock solid set of arguments about this topic and I'd love to hear them.


>> I said no such thing, you are putting words in my mouth.

I was replying to someone else and you jumped in. You can hold the accusations of putting stuff in places, thank you.

>> What I said is that the baseline for traffic deaths is quite high, and if (or when) autonomous cars are even a little better than humans, then less lives will be lost.

And I said that autonomous cars being better than humans is just a hypothetical.

I'll add that we have no evidence that it's the case. Computers are better than humans at tasks that require fast and accurate retrieval, or computation, but they won't just magically become better than humans at driving. Somebody has to program them to do that.

So who is going to do that? Do you know how to do that? Do you know anyone who knows how to do that?

The theory is that we'll put enough (real or virtual) cars in enough training situations that they'll learn to drive on their own, but that approach has serious limitations, not least the fact that in the real world there may be an infinite number of situations that a learner will never encounter during training. In any case, with every other cognitive task we train computers to carry out, they end up making completely ridiculous mistakes, which in the case of driving will cost lives.
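That sparsity worry can be put in rough numbers (all figures below are invented purely for illustration): even an enormous training set can easily contain zero examples of a sufficiently rare event.

```python
def prob_never_seen(rate_per_mile: float, training_miles: float) -> float:
    # Chance that an event occurring at `rate_per_mile` shows up zero
    # times in `training_miles` of data, treating miles as independent
    # trials (a deliberate simplification).
    return (1.0 - rate_per_mile) ** training_miles

# Invented numbers: an event seen once per billion miles, and a fleet
# that has logged 100 million training miles.
p = prob_never_seen(1e-9, 1e8)
print(round(p, 3))  # ~0.905: a >90% chance the training set never saw it
```

Under these toy assumptions, the learner's first encounter with the event happens in deployment rather than in training, which is exactly the commenter's concern.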

Until we have a way to develop systems that understand their surroundings, autonomous cars being better drivers than humans is just a dangerous fantasy.


> And yet the architectures he disapproves are the only ones who have shown fully autonomous driving is possible.

They did not.

> There are driverless robotaxis running now, albeit in small areas but expanding.

Running in small areas shows that they can drive in small mapped out areas under specific conditions. Not that we have fully autonomous self driving.


>> There are already videos of assisted-driving cars avoiding accidents

I'm not concerned about assisted driving. I'm concerned about fully autonomous cars being handed complete control in real-world situations.

I'm not just "concerned" about them- they scare the hell out of me.

>> Many being scenarios that Google's self-driving cars have the technology to avoid.

What Google says it programs its cars to do, what it really programs its cars to do and what its cars can really do are all separate things.

The problem is that the current technology level is nowhere near advanced enough to allow fully autonomous vehicles to operate safely. There is a huge number of situations that those cars aren't programmed for, that they can't be programmed for, because those situations are completely unforeseeable.

Machine learning has a huge problem with data sparseness. You may train a learner on petabytes and petabytes of data collected from the real world and still miss the vast majority of events that may occur.

That is why, as I say elsewhere, machine learning-based AI makes utterly ludicrous mistakes that humans would never make, even in difficult situations where they can't be expected to perform with 0% error. I've used a few metaphors; here's another one: a human would never mistake a truck for a cucumber. A machine learning algorithm might.

And what's worse, there's no way to prevent this sort of mistake, or even correct it, because most of the time the models built by such algorithms are simply too complex to be processed by humans in the way a hand-crafted system would be (and goddess knows how hard those can be to process).


> we unconsciously demand AI should be better than average human performance.

Yeah I agree it's an unfair demand.

Especially given how much more powerful human brains are than computers, we should perhaps be having a go at humans for not trying hard enough.

The wins by computers at things like Go and chess have been downplayed because humans 'only' learned that stuff 100,000 years ago.

Personally I think that driverless cars work better, for the moment, as passive systems that augment humans rather than the dodgy crossover that is Autopilot. I think that car AIs can be trained to deal with extreme circumstances by running simulations of crashes millions of times over, and would then be capable of taking over if the driver ever becomes unwell or hits black ice.

But this is all temporary, as soon as their vision systems match humans they will only ever improve over what we have. This Stanford self-driving car sliding between four perfect donuts is amazing [0].

[0]: https://youtu.be/LDprUza7yT4?t=31m38s


> A lot of us are skeptical of self-driving cars because a lot of their advocates want the next step to be banning private citizens from driving, which you've just confirmed, thanks.

Lol, i think you're mistaking my (attempted) logical thinking for some type of agenda towards banning humans driving cars. You seem far more agenda-driven, heh.

Just because i think we are extremely unqualified for driving does not mean i want to ban the act. It would be extremely improbable to achieve; the costs of upgrading the entire infrastructure alone make your defense of this almost seem like a half-hearted joke. Moreover, even in some magic realm where we can make all cars self-driving to abolish human driving, i'm simply not in favor of removing human rights.

Yes, i would want to make it difficult for you to drive, but for your own (and my!) safety. Our safety standards for driving tests are incredibly, and i mean incredibly, laughable. How rarely we enforce driving rules and laws is also laughable. Almost no one follows some of the most basic rules, like speed limits, and the more extreme people often break sobriety laws.

Self-driving cars would give us a platform to: A) raise the bar for who can drive and what sort of training you need, and B) be more willing to ban someone from driving if they put others at risk, through speed, drugs, alcohol, or whatever.

We have a hard time banning drivers these days because it very negatively impacts your life if you cannot drive. In a world with self driving cars though, you can still have a first class life. Driving is now something of leisure.

Anyway, nice try at putting words in my mouth - next time make them a little less tinfoil though, please.


> I think the problem we have with self-driving cars is more social than technological at this point.

> it's a hypothetical so give me some leeway on this!

IMO you should not base (and broadcast) your opinions about safety on hypothetical statistics. I don't even believe it's true that overall statistics show self-driving is safer than humans. IIRC prior reports showed that companies were selectively picking statistics about safety.


> the problem is that AI systems are unpredictable, unverifiable, and their correctness is unprovable.

Same can be said about humans.

None of what you listed matters in the real world. Self-driving systems just have to be better than average humans, on average, to be useful.

> boiling it down to 'just needs to be trained properly' is terrifying when peoples lives are at stake

You have no problem with humans being trained to drive. 38.6 thousand people died in car crashes in the US alone last year.


> The truth is, if full self driving cars actually happened, with safety statistics better than humans and with a low barrier to entry, you can bet that people would change their opinion real fast.

I think this is the wrong counter-argument against self-driving cars being viable.

If the median answer is $0 and there were a lot of negative answers then that means there's a lot of positive answers. Just sell the cars to those people!

Not everybody wants a self-driving car, just like not everybody wants a Big Mac. Hell, you'd probably need to pay some people to eat a Big Mac. But there's enough people who want one that, despite that fact, it's a huge business.


> When it works most of the time, it lulls you into a false sense of security, and then when it fails, you aren't prepared and you die.

That still doesn't _necessarily_ imply that 'partially self-driving cars' are worse than actually existing humans. Really, anything that's (statistically) better than humans is better, right?

I don't think it's reasonable to think that even 'perfect' self-driving cars would result in literally zero accidents (or even fatalities).


>What evidence do you have that there is zero evidence?

You're really contorting yourself here, just give me an apples to apples comparison of human driving safety to self-driving vehicle safety and I'll be happy. (Hint: none exists)

>That is not a fully autonomous vehicle, but rather an assisted one, one that expects human engagement. In the example case, it failed because of human error.

No no, no word games. That is a Tesla marketing/liability line. What you see in that video is the current state of self-driving. And a product marketed as self-driving which can't steer around a stopped vehicle in the lane isn't really self-driving at all. Besides if what you're talking about is Level 5 type self-driving systems, you have nothing supporting the idea that we get there soon or that current self-driving systems will get us there.

I mean, besides, if drunk drivers could navigate around that car, then you're basically admitting that drunk drivers are still better than the current state of self-driving technology (something I agree with).

>ABS

...

I think you're misunderstanding me. I'm not against technological progress which makes vehicles safer. I'm against the understanding that self-driving technology is making cars safer.

>Your criticism of Tesla sensors is flawed. You're assuming that their sensors will not get better. That seems an unreasonable and unlikely conclusion. And you're ignoring my point about 360 degree sensor coverage. I'm an excellent driver, but I find myself in danger regularly because I can't see everything.

And you're assuming that their sensors will leap and bound ahead magically. You're trying to make me argue that humans are better drivers than the perfect system that exists in your head. I'm saying the perfect system that exists in your head is irrelevant because it is not going to become reality.


> It’s funny how the older you get, the more you realize that when someone starts to spout that they’re trying to save the world, they’re covering up some type of scheme to make money and make the world a shittier place.

I agree with all the points the OP made for self-driving car downfalls, but I still think that the people working on the tech have the best intentions.

It's just that we as humans do a bad job of having foresight and the critical analysis skills necessary to truly assess the whole of a technology, and not just the good parts. Maybe that reflects well on humans as it hints at an optimistic nature?

I see the potential benefits of self-driving cars, but am worried that the medium-term to long-term negative points will outweigh the positive. There should really be a council of techies, representatives, legal experts, and most importantly average citizens that sit down and heavily critique this technology to address all the negative points before it becomes widespread.


> I am skeptical that full self-driving cars will happen in the next few years, but he is completely wrong when it comes to the long term

I think when someone says 'will never happen' they mean 'not in the foreseeable future'.

Obviously the environment, usage, and science can change such that full self-driving could happen. I mean, in the 1600s nobody could imagine a Boeing 747 flying loaded with hundreds of people, sure.

But the hype around self driving (by all the dreamers) has been that it's more or less 'right around the corner' not in 30 or 40 years or even 20 years.


> In their current state self driving cars would kill far more people than humans if all cars used self driving 100% of the time, and no one knows if we will ever move past that.

Really? No one knows if we will ever improve on today's self-driving technology? I'm going to stick my neck out and say: "Yes, I know that we will move past that."


>The issue is, that I can totally see a self driving car fuck a rare obscure situation up in the worst way — where even the worst drivers would probably do okay.

So? Unlike with humans, self driving networks of cars can learn from a rare obscure situation, then the entire network can be updated to handle it going forward, forever. Whereas "the worst drivers" (drunk speeder road ragers say) will just cheerfully plow into students in front of their schools yet again.
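A toy sketch of that fleet-learning claim (all names here are invented): one car's encounter with a rare situation updates a shared model, after which every car in the network handles it.

```python
# Hypothetical sketch: a fleet sharing one model, so a rare situation
# only has to be learned once, fleet-wide, forever.

class Fleet:
    def __init__(self, num_cars: int):
        self.num_cars = num_cars
        self.known_situations: set[str] = set()  # stands in for a shared model

    def encounter(self, situation: str) -> None:
        # One car hits a rare case; the shared model is updated once...
        self.known_situations.add(situation)

    def can_handle(self, situation: str) -> bool:
        # ...and from then on, all cars handle it.
        return situation in self.known_situations

fleet = Fleet(num_cars=1_000_000)
print(fleet.can_handle("moose on icy overpass"))  # False
fleet.encounter("moose on icy overpass")
print(fleet.can_handle("moose on icy overpass"))  # True
```

The contrast with human drivers is the whole argument: a lesson learned by one human driver stays with that one driver.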

>The problem with self driving cars is that even if you statistically manage to be safer over the typical journey today, people still wanna know that it keeps them safe in very atypical conditions as well.

Meh. This is just a typical argument from incredulity/unfamiliarity. People have zero issue accepting significant risk given solid benefits and most people's driving most of the time is short and boring. That'll be Good Enough to get the ball rolling, and it'll only ever improve.

>And that as of now is not the case.

Where is the suggestion in the article that the current cars are ready to handle absolutely everything everywhere today? It's about incremental gains and safety in one particular very high population location.


> But we're a good 25 years out from a truly autonomous, road-worthy, self-driving human transport.

It sounds like your reasoning goes like this: my interactions with things which are supposedly AI are bad. Self-driving cars are also some form of AI; therefore self-driving cars must be bad and/or far away too.

Did I understand you right?

The problem with that logic is that there is nothing in common in implementation/architecture/incentives between the things you mention and self-driving technology.

Nobody, well, nearly nobody, tries to implement self-driving cars in a blackbox “AI” fashion. What I mean is that we don’t just throw sensor data at a neural network, then squint and say “I reckon it is going to drive well now”. That would be madness.

Most approaches break down the problem into sub-problems covered by sub-systems. The sub-systems are fed information with known error properties and engineered to the specifications. The failure modes are painstakingly traced through and documented. Then, in turn, assemblies of these sub-systems, and the whole, are reasoned about similarly. Fault trees are drawn, and the operational domain is considered. The reasoning for why the engineers think the system is safe and has the right redundancies in place is more complicated than the code itself.

Some of these sub-systems are implemented using what one would call “AI”. Particularly in the object recognition domain that seems to be the state of the art. But the failure modes and shortcomings of these systems are considered and reasoned about the same way you would with a good old-fashioned Kalman-filter-based sub-system. It is known that they are going to fail in various situations in various ways. The trick is to engineer the whole system such that it still remains safe despite these sub-systems having these characteristics.
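A minimal sketch of that "engineer around known failure modes" idea, assuming a hypothetical ML camera detector with a documented miss rate and an independent radar check as redundancy (every name and threshold below is invented):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Output of a hypothetical ML perception sub-system whose error
    # properties (miss rate, confidence calibration) are documented.
    obstacle_ahead: bool
    confidence: float

def plan_speed(camera: Detection, radar_sees_obstacle: bool) -> str:
    # Redundancy rule: either sensor alone is enough to brake, and a
    # low-confidence camera detection degrades to "slow" rather than
    # being trusted outright -- the whole stays conservative even though
    # each sub-system is known to fail in particular ways.
    if radar_sees_obstacle or (camera.obstacle_ahead and camera.confidence > 0.5):
        return "brake"
    if camera.obstacle_ahead:
        return "slow"
    return "cruise"

print(plan_speed(Detection(True, 0.3), radar_sees_obstacle=False))  # slow
```

The safety argument lives in the composition rule, not in any one sub-system being perfect, which is the point the comment is making.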

I’m not saying that we will have safe self-driving soon. What i’m saying is that you can’t reason about self-driving cars by saying “commercial entity X is spamming me with bad marketing crap. People talk about AI behind said marketing crap. Therefore self driving cars are far away.”

