
We have self-driving cars right now; see for example DARPA's Urban Challenge. We only need to increase reliability and decrease costs to make them suitable for the market. There are also some legal problems (e.g. who is responsible if the robot car kills a kitten?), but technologically self-driving cars are a reality.



I think we are perfectly capable of producing very safe self-driving cars, on a technological level. We have nearly automated other means of transportation that we consider to be among the safest ones nowadays.

The problem may be that we are trying to solve a lack of hardware using software. Level 4/5 self-driving probably requires specific infrastructure to be installed on all roads, a way to coordinate collision resolution among all cars of all manufacturers (something similar to TCAS in aviation, but operating with a higher number of vehicles involved and on a much shorter time frame), and most likely a ban on non self-driving cars.

Of course, this carries an economic and social cost that is unacceptable nowadays. So we are trying to make self-driving cars work on infrastructure that was never designed for automated use (and that sometimes even confuses human drivers), by handing a regular car with a few extra sensors to an AI driver and trying to get it to achieve human-like capabilities as an intelligent agent, which is beyond our technological ability at the moment.
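The TCAS-like coordination idea above can be sketched in a few lines. This is purely illustrative, not any real V2V protocol: it assumes 1-D kinematics along a shared lane and a made-up "lower id yields" tie-break, which plays the role of TCAS's complementary climb/descend advisories (both parties deterministically agree on who maneuvers).

```python
# Hypothetical sketch of TCAS-style pairwise conflict resolution for cars.
# Each vehicle broadcasts (id, position, velocity); on a predicted conflict,
# a deterministic rule (lower id yields) guarantees the two vehicles choose
# complementary maneuvers instead of both braking or both holding.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vehicle:
    vid: int
    pos: float   # 1-D position along a shared lane (m)
    vel: float   # velocity (m/s)

def time_to_conflict(a: Vehicle, b: Vehicle, gap: float = 5.0) -> Optional[float]:
    """Seconds until the two vehicles close within `gap` metres,
    or None if they are not converging."""
    rel_pos = b.pos - a.pos
    rel_vel = b.vel - a.vel
    closing = -rel_pos * rel_vel  # positive when the gap is shrinking
    if closing <= 0:
        return None
    return (abs(rel_pos) - gap) / abs(rel_vel)

def resolve(a: Vehicle, b: Vehicle, horizon: float = 10.0) -> dict:
    """Deterministic complementary advisories: the lower id brakes, the other holds."""
    t = time_to_conflict(a, b)
    if t is None or t > horizon:
        return {a.vid: "hold", b.vid: "hold"}
    yielder, keeper = (a, b) if a.vid < b.vid else (b, a)
    return {yielder.vid: "brake", keeper.vid: "hold"}
```

The hard part, which this sketch waves away, is doing that negotiation across every manufacturer's stack, with far more than two participants, in fractions of a second.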


Infrastructure, machine learning, sensors, and other components are there. We already have self-driving cars.

Now human adoption and acceptance is the problem.


We did all that in the DARPA Grand Challenge over a decade ago. Most of the problems today involve dealing with other road users.

(Some of the self-driving car projects today are less capable on bad roads than the off-road systems of 2005. They don't have to be; it just comes from focusing on lane following on freeways as the primary goal.)


What fundamental problems remain to be solved for self driving cars?

Any kind of self-driving system that is going to be usable and safe in the real world needs to be able to deal with the kind of roads we have in the real world.

Maybe one day, when the majority of cars are self-driving, road design will change. To get there, self-driving cars will need to prove themselves on today's roads so people will buy them.


Highway self-driving is mostly a solved problem (except maybe for Tesla, which will occasionally fuck it up). Urban areas are where the problems are nowadays.

The problem is not self-driving cars but unsafe self-driving cars, you know, the "move fast and break things" kind. Self-driving AI is not ready yet; when it is ready and safe, I will accept it. I wish we would get safety at the level of NASA or airliners, not the safety we have in web apps and regular software.

If they're only reliable in controlled conditions, they're not reliable. The really hard problems that would impact the feasibility of self-driving cars (as the popular imagination sees them) arise in unanticipated situations.

After all the work that was done in the DARPA Grand Challenge(s) over the past decade, it would be an embarrassment to Google if they couldn't get the cars to work in controlled conditions; getting them to where things are now is not some staggering achievement.

I really want self-driving cars to happen, but I see the biggest impediment to that vision of the future being the general public's level of optimism and credulity WRT this stuff, to say nothing of the tech community's optimism and credulity. It's a domain that is a nearly infinite bucket of incredibly hard problems, problems that may ultimately prove insoluble as currently specified. If everything goes well, then maybe in 20 or 50 years a lot of transportation will take place in self-driving cars which operate with sets of known constraints in environments that are (to some degree) controlled.

If everything doesn't go well, then people will keep talking about how self-driving cars are inevitable, and how in just a few years they're going to pick you up at your house and drive you to work using the exact same roads set up the exact same way as roads are now, doing the exact same commute that they might've been doing when driving themselves. This credulity will push the money and the technology forward, until too many disappointing setbacks occur, and then all the money dries up and nobody's talking about self-driving cars anymore.

If you tell somebody that something is inevitable and almost here and it's just a matter of throwing enough resources at it, it's much easier to get people to give you money to do those things, but as soon as anything happens that doesn't fit that script, they will assume you've been lying to them all along (or just aren't credible), and the R&D money goes away. Whereas if expectations are set appropriately, it's harder to get that money, but as long as there are achievable, realistic goals and people aren't basically pitching magic, then the funding is more likely to stick around during the rough patches.


It seems to me that self-driving cars can't just be as safe as the status quo; they have to be far, far better. It's an unreasonable demand, but that's human nature.

I'll bet that to really make it work well, you also need to redesign roads to suit the new cars.

In any case, I'll believe it when I see widespread/universal adoption of self-driving in constrained environments (mining vehicles, warehouses, storage yards). Step two will be things like garbage trucks (slow, expensive, otherwise automated, phone home when it gets in trouble).


This is an odd way of looking at it. Far from being a problem for self-driving cars, the development of ever-more capable assistance and warning technologies is the rational way to go about refining the technologies that will be needed for fully self-driving cars.

This situation is only a problem for those manufacturers who want to pass off partial autonomy as the real thing.


The problem isn’t intractable at all. We had self-driving cars by 2015, and we certainly do today.

But we can’t let self-driving cars injure or kill even 25% as often as human drivers do, so we refuse to use them to their maximum capability.


Most self-driving systems use redundant sensors; people have only a couple of eyes.
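Redundancy is typically exploited by voting. A minimal sketch (not any real vehicle's code) of the classic triple-modular-redundancy pattern: take the median of three independent range readings, so a single faulty sensor cannot corrupt the estimate.

```python
# Illustrative only: fuse redundant range readings with a median vote,
# and flag the case where the sensors disagree beyond a tolerance.
import statistics

def fused_range(readings, max_spread=2.0):
    """Median of >=3 redundant readings; error out on gross disagreement."""
    if len(readings) < 3:
        raise ValueError("need at least 3 redundant readings to out-vote a fault")
    m = statistics.median(readings)
    inliers = [r for r in readings if abs(r - m) <= max_spread]
    if len(inliers) < 2:
        raise RuntimeError("sensor disagreement exceeds tolerance")
    return m

# A stuck sensor (reporting 0.0) is simply out-voted:
# fused_range([42.1, 41.8, 0.0]) -> 41.8
```

Real stacks fuse heterogeneous sensors (camera, radar, lidar) with far more sophisticated filters, but the principle is the same: no single input is trusted on its own.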

I was involved in the development of a self-driving vehicle and agree that we are still not there; that's why there are no commercially sold self-driving vehicles.

OTOH we can clearly see that it is a possible goal, or at least a possible goal to try and achieve, for the coming years, not a very distant future.

Remember that what we want to achieve is not necessarily zero accidents but a lower number than human drivers achieve.
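That bar can be made concrete with back-of-envelope arithmetic. The human baseline below (roughly 1.3 fatalities per 100 million vehicle-miles, a commonly cited US figure) is approximate, and the AV fleet numbers are entirely made up for illustration:

```python
# Back-of-envelope comparison of incident rates per 100 million miles.
# The human baseline is approximate; the AV numbers are illustrative,
# not measured data from any real fleet.
def rate_per_100m_miles(incidents, miles):
    return incidents / miles * 100_000_000

human_rate = 1.3
# Hypothetical AV fleet: 2 incidents over 400 million miles driven.
av_rate = rate_per_100m_miles(2, 400_000_000)   # -> 0.5 per 100M miles
print(f"AV rate {av_rate:.2f} vs human {human_rate:.2f}:",
      "clears the bar" if av_rate < human_rate else "does not clear the bar")
```

A real comparison would also need confidence intervals: for events this rare, even a few hundred million miles is a small sample, which is part of why "better than humans" is so hard to demonstrate.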


Driverless cars on roads where people's lives are at stake are a social-legal-regulatory issue, and even when the software is solved, these issues remain. Even when there are cars capable of driving unaided, humans will be legally responsible for them, so we need to factor that in too.

The hardest problem isn't a technical one; it has to do with liability. There will be crashes and deaths, no doubt about it. But who is at fault then? The car maker (or whoever wrote the software)?

Because of this, laws regulating autonomous driving will be the bottleneck. I can envision something transitional: approval for long stretches of "easy" driving will come first. For example, driving on the I-5 from LA to SF.

For me, this is the perfect use case for a self-driving car so I can enjoy doing something else rather than paying attention to a boring drive. Personally, self-driving cars in a downtown area isn't as crucial. Unless I have no idea where the hell I'm going in a confusing city.


Even if it becomes technically possible, and I'm pretty sure it will be solved some day, that's the easiest part.

The biggest hurdle for self-driving cars will be ethics, such as "the car brakes for a child suddenly crossing the street but then kills the old woman", or perhaps even making decisions based on social status. There are lots of options here.

Those ethics have to be agreed upon, and they will likely differ per culture, state, country, etc. It's a political and sociological challenge. It will be interesting crossing borders.

And then those ethics need to be implemented in software. I would refrain from saying it's never going to happen, but I doubt very much this will happen in our lifetime.


The self-driving car doesn't need to be perfect; it needs to be better than humans. That is a much lower bar. Self-driving cars have already demonstrated their ability to handle traffic better than humans, but there are other situations where they are much worse.

They have the potential, but I think that in practice we are at minimum decades away from having self-driving cars that are as safe as human drivers on average in general conditions.

Thiel views the area of self-driving cars as less regulated (next to biotech), but for some reason I think that making it come true will be more than a technical challenge. The current state of affairs in the legal system is that there always has to be someone who may take the blame when something goes wrong. Verdicts ending in "accidents" are not accepted lightly, especially when human-made things are involved. The entire self-driving car endeavor has something like a ticking regulatory time bomb under it, waiting to explode!

One solution to bridge the gap between "mostly" self-driving and "totally" self-driving is to increase the safety systems.

Currently, safety systems for the people in the car are very good. But for people outside the car, they are very bad. If you can make it such that the car is very safe for all the people in a possible accident, then maybe the AI problem won't be such a problem.

Granted, that's a hard problem too, but we're pretty good with the theorems and modeling that go into safety. It's more of an economics problem than a design one.

