
I find it very curious that the article doesn’t mention self-driving vehicles. To me that is an example of a daring risk undertaken mostly by VCs.

The upside is huge. If we can make a robotic system which can safely pilot a vehicle on our existing roads, and make it commercially viable, that will reshape how transportation is done, how cities are built, and how people live.

Is it risky? Oh yeah. Everyone knows that computers are full of faults, sensors are crappy, hardware breaks all the time, and all software is a pile of bugs. These are givens, and unlikely to just change on their own. Can we, despite all of the above, engineer a system which provides a superhuman level of driving safety, yet costs less than a driver on the local minimum wage? This is the premise of the self-driving car.

Follow-on question: is it possible to get there from here without antagonising the public with accidents? If a restaurant gives food poisoning to folks on the other side of town, that won’t affect your restaurant’s business. But any self-driving accident anywhere on Earth has the potential to ratchet up the scrutiny on every other company. (Think of a Hindenburg-disaster equivalent.)

I believe the answer is yes, we can; it takes careful engineering and a lot of work, but it can be done without needing to invent general artificial intelligence. VCs seem to agree with me, because they are positively pumping money into companies in this field. What is that if not risk taking?




I think we are perfectly capable of producing very safe self-driving cars on a technological level. We have nearly automated other means of transportation that we consider to be among the safest today.

The problem may be that we are trying to solve a lack of hardware using software. Level 4/5 self-driving probably requires specific infrastructure to be installed on all roads, a way to coordinate collision resolution among all cars of all manufacturers (something similar to TCAS in aviation, but operating with a higher number of vehicles involved and on a much shorter time frame), and most likely a ban on non self-driving cars.
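To make the TCAS analogy concrete, here's a toy sketch of how coordinated conflict resolution between two vehicles might look. Everything here (the class, field names, thresholds, and the lower-ID-yields tie-break) is an illustrative assumption, not any real protocol; the point is just that, as with TCAS resolution advisories, both parties must compute complementary maneuvers from the same shared state.

```python
# Hypothetical sketch of TCAS-style pairwise conflict resolution between
# two autonomous vehicles. All names and thresholds are invented for
# illustration and do not reflect a real protocol.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str          # unique identifier, used as a deterministic tie-break
    time_to_conflict: float  # seconds until reaching the shared conflict point

def resolve_conflict(a: Vehicle, b: Vehicle, threshold: float = 5.0) -> dict:
    """Return {vehicle_id: action} for a predicted conflict.

    Mirrors the TCAS idea of coordinated, complementary advisories:
    one party proceeds, the other yields, and both sides always
    compute the same answer from the same shared state.
    """
    # No advisory needed if the conflict is far enough in the future.
    if min(a.time_to_conflict, b.time_to_conflict) > threshold:
        return {a.vehicle_id: "continue", b.vehicle_id: "continue"}
    # Deterministic tie-break: the lexicographically lower ID yields,
    # so both vehicles independently reach complementary decisions.
    if a.vehicle_id < b.vehicle_id:
        return {a.vehicle_id: "yield", b.vehicle_id: "proceed"}
    return {a.vehicle_id: "proceed", b.vehicle_id: "yield"}
```

The hard part the comment alludes to isn't this logic; it's doing it reliably across every manufacturer, with far more than two vehicles, on a sub-second time frame.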

Of course, this carries an economic and social cost that is unacceptable nowadays. So we are trying to make self-driving cars work on an infrastructure that was never designed for automated use (and that sometimes even confuses human drivers), by giving a regular car with a few extra sensors to an AI driver and trying to get it to achieve human-like capabilities as an intelligent agent, which is beyond our technological ability at the moment.


I feel like all these self-driving companies aren't actually in it to achieve success. They know it's a long way off; they're just trying to hit enough milestones to get scooped up by somebody bigger. Perhaps with the exception of Tesla, who I have no explanation for. Self-driving cars are so far down the totem pole, in my opinion: if we're going to have to rework our roadways to accommodate self-driving cars, why not just innovate there?

A roadway can be MUCH smarter than a car. Each segment of smart road is unique, and only has to worry about itself and whatever cars are present in the area. Smart roads could be managed by a few operators (in fact one operator could shepherd many road segments).

Each smart car has to be able to account for all possible kinds of roads. Smart cars each require an operator who is essentially just sitting there waiting for the car to fail (and therefore the failures become more catastrophic as the tech gets better, since trust rises and attention spans fail).

Put the tech into the ROADS, and just let the cars listen.
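A minimal sketch of what "smart road broadcasts, dumb car listens" could look like, under assumed message fields and policies that are purely illustrative (none of this is a real standard): each segment advertises its own state, and the car just obeys the most restrictive advice it hears.

```python
# Hypothetical sketch of the "smart roads" idea: segments broadcast
# their state, cars merely listen. Fields and values are invented
# for illustration.
def segment_broadcast(segment_id, speed_limit, hazard=None):
    """Build the message a road segment would broadcast to nearby cars."""
    return {
        "segment": segment_id,
        "speed_limit": speed_limit,  # km/h advised for this segment
        "hazard": hazard,            # e.g. "construction", "ice", or None
    }

def car_listen(messages):
    """A minimal car-side policy: take the lowest advised speed,
    and halve it if any nearby segment reports a hazard."""
    speed = min(m["speed_limit"] for m in messages)
    if any(m["hazard"] for m in messages):
        speed = speed // 2
    return speed
```

Note how the intelligence lives entirely road-side: the car-side logic is a few lines, which is exactly the asymmetry the comment is arguing for.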


I'm very excited about the possibility of self-driving cars. Let the car drive while I take a nap or read a book.

But I have a hard time believing that the technology is anywhere close to being mature. You need a lot of contextual knowledge to drive safely in unusual circumstances. I totally believe that within well-defined limits, AI already outperforms humans, but traffic has no well-defined limits. Anything can happen.


I asked this question before and received no answers, so I'll ask again: Is there a source that details the specific issues that self-driving cars could face, and potential methods of solving them?

Self-driving vehicle development has so many difficult problems to solve. The technology itself is extremely difficult to create and relies on an extremely complex compromise where vehicles are allowed to be (and are) operated by code but ultimately babysat by drivers.

I think self-driving technology is worth the risk, and worth the large amount of property damage and human injury that comes with it. However, this is a societal issue, and there needs to be some sort of referendum on it. While many parallels can be drawn with space travel, those projects were tightly contained and included individuals who were specifically trained and informed of the potential dangers. There is no way to really do this at scale for self-driving tech, which necessarily demands an expansive ecosystem that can't be tightly controlled, and where consent can't be inferred from the unknown number of people involved in this experiment.


I'm pretty strongly of the position that high levels of self-driving are impossible for artificial intelligences to attain, because they're built by corporations. Driving is VERY risky, and involves an irreducible assumption of both risk and liability on behalf of the driver. Humans are natural risk-takers. Corporations are not; and any artificial intelligence a corporation produces is always, at some level of meaningful abstraction, a projection of their own values.

I’m not an expert in self driving car engineering. I have some experience with ML but not in a significant professional way. What I have is a decade of professional experience building all kinds of software and a skeptical eye.

You have a problem that is unconstrained, with infinite variables, where even a simple mistake can have catastrophic outcomes. Society itself may object to self driving cars for a ton of reasons, from safety to simply driving like a grandma and slowing everything and everyone down. The cost to develop this technology, plus the added cost of hardware to each car, will be enormous and is not obviously a cost savings over a $15 per hour human. If the self driving car is doing anything other than getting from A to B, you still need a human (or a human-like robot) to handle the unloading / delivery / whatever at the end.

Now, I’ve worked at companies with extremely talented and intelligent engineers, and something as constrained and seemingly simple as making a login form can take a long time to perfect - and no lives are at risk! Just imagine the challenges and requirements for building self driving cars. New hardware, software, real-time processing and analysis of tons of data, all to drive split-second decisions that can kill people if done incorrectly.

Huge challenge, huge risks, huge money, uncertain payoff. This is not something that will appear suddenly. If there aren’t convoys of self-driving trucks operating on desert highways overnight, where it’s dry and straight and flat and no one else is there, then we aren’t going to see city taxis for a very long time.


Interesting as something to explore, but I'm curious if a network of self-driving cars can be optimized to recognize and avoid these situations altogether.

I had an epiphany about self driving recently late at night on the streets of San Francisco. It was a somewhat deserted part of town, and the roads were empty. There was a lone Cruise car chugging along and constantly stopping at red lights. The signals were so badly optimized that the car would be the only one waiting at a completely empty intersection for minutes at a time, only to be stopped again at the next one, and the one after that. A single mile of empty road must have taken this car 15+ minutes to traverse. All the top engineering, sensors and ML algorithms in the world and the car was still at the mercy of a basic city planning failure.
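The signal-timing failure in that anecdote is easy to quantify with a toy model. The cycle lengths, detection delay, and arrival times below are illustrative assumptions, not measurements from San Francisco; the point is just how badly a fixed-time signal treats a lone car on an empty road compared to a demand-actuated one.

```python
# Toy comparison of fixed-time vs. demand-actuated signals for a lone
# car on an otherwise empty road. All numbers are illustrative.
def fixed_time_wait(arrival_t, cycle=120, green=30):
    """Seconds a car arriving at time `arrival_t` waits at a fixed-time
    signal with a `cycle`-second cycle whose first `green` seconds are green."""
    phase = arrival_t % cycle
    if phase < green:
        return 0             # arrived during green: no wait
    return cycle - phase     # wait until the next green starts

def actuated_wait(arrival_t, detection_delay=2):
    """A demand-actuated signal on an empty road only needs its
    detection delay before switching to green (illustrative)."""
    return detection_delay

# A lone car crossing 10 such intersections, arriving 60 s into each cycle:
fixed_total = sum(fixed_time_wait(60) for _ in range(10))    # 600 s of waiting
actuated_total = sum(actuated_wait(60) for _ in range(10))   # 20 s of waiting
```

Ten minutes of waiting versus twenty seconds, and no amount of on-board sensing or ML changes that ratio; only the signals can.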

We have been conditioned to think of autonomous driving as the answer to the traffic nightmares in every city, but tech will never ever be able to solve a social problem (just like how building a wider road will never ease congestion). Had even a fraction of the hundreds of billions of dollars that have been poured into self driving so far been spent towards, say, improving sidewalk quality, making traffic lights smarter, putting sensors along roads, coming up with a standard communication protocol for these sensors, building out public transit, improving urban planning etc. over the last 15 years, we would all be living better lives today. But a VC would rather set that money on fire to have a small chance at a 100x return.


The issue is that self-driving needs to be better than human drivers by a considerable margin. The public will not tolerate an AI driver that makes the same mistakes as humans; they expect better. To do this it will need access to sensors that humans don't have.

If they're only reliable in controlled conditions, they're not reliable. The really hard problems that would impact the feasibility of self-driving cars (as the popular imagination sees them) arise in unanticipated situations.

After all the work that was done with the DARPA Grand Challenge(s) in the past decade, getting the cars to work in controlled conditions is less a staggering achievement than the bare minimum Google needed to avoid embarrassment.

I really want self-driving cars to happen, but I see the biggest impediment to that vision of the future being the general public's level of optimism and credulity WRT this stuff, to say nothing of the tech community's optimism and credulity. It's a domain that is a nearly infinite bucket of incredibly hard problems, problems that may ultimately prove insoluble as currently specified. If everything goes well, then maybe in 20 or 50 years a lot of transportation will take place in self-driving cars which operate with sets of known constraints in environments that are (to some degree) controlled.

If everything doesn't go well, then people will keep talking about how self-driving cars are inevitable, and how in just a few years they're going to pick you up at your house and drive you to work using the exact same roads set up the exact same way as roads are now, doing the exact same commute that they might've been doing when driving themselves. This credulity will push the money and the technology forward, until too many disappointing setbacks occur, and then all the money dries up and nobody's talking about self-driving cars anymore.

If you tell somebody that something is inevitable and almost here and it's just a matter of throwing enough resources at it, it's much easier to get people to give you money to do those things, but as soon as anything happens that doesn't fit that script, they will assume you've been lying to them all along (or just aren't credible), and the R&D money goes away. Whereas if expectations are set appropriately, it's harder to get that money, but as long as there are achievable, realistic goals and people aren't basically pitching magic, then the funding is more likely to stick around during the rough patches.


I think the point here is not that the self-driving software is already better than humans, but rather that the upside of cars driving themselves is so big that it is worth pursuing until we get it right.

I'm skeptical you can do real autonomous driving without context. In DC, we have roads that flow one direction part of the day and the other direction the rest of it. We've got roads that will be shut down unpredictably when there is a diplomatic event. We've got constant construction, where a two-lane road might be reduced to one lane with a human holding a sign or using hand signals to usher cars through on their turns. How does a self-driving car handle that without understanding context?

It seems to me that self-driving cars can't be merely as safe as the status quo, but have to be far, far better. It's an unreasonable demand, but that's human nature.

I'll bet that to really make it work well, you also need to redesign roads to suit the new cars.

In any case, I'll believe it when I see widespread/universal adoption of self-driving in constrained environments (mining vehicles, warehouses, storage yards). Step two will be things like garbage trucks (slow, expensive, otherwise automated, phone home when it gets in trouble).


It’s worth noting that many of us consider self-driving to be essentially analogous to full AI in its complexity and difficulty, and thus believe that we are literally nowhere near anything that fits this description.

No amount of “data” is going to solve the problem. Driving a car requires making a full mental model of the immediate world you’re in and creating accurate predictions of what everything else in that world is about to do, many of those things being sentient beings with whom you’re communicating through your own actions.

Nothing that Tesla is doing is getting much closer to that. The next major landmark on this timeline is a team successfully passing the Turing test, not a car moving across a parking lot.

If you share that opinion, that makes the Tesla a lethal toy.


If it requires human oversight, what's the point?

At that point you're getting no more bang for your buck, since the operator is going to be subject to the same limits on time behind the wheel as a driver of a non-autonomous truck. And you're not getting any more safety, because, as Uber and Tesla have been illustrating for us so vividly, a self-driving system that needs a human overseer can't drive safely, and a human who isn't physically in control of the car at all times can't oversee safely.

(Edit: This is, naturally, not accounting for the need for a transitional period while getting the technology bootstrapped. But that's time invested in developing the tech, not time where the tech generates any profit.)


That's gotta be a bigger project in terms of manpower and money than anyone has invested in self-driving cars already. We need more testing just to establish the problem areas and figure out how to actually solve them.

Forgive my crankiness, but: Does this say anything that the casual follower of self driving cars doesn't already know? All I got out of this was:

- Self driving cars would be awesome, for well known reasons.

- We have to use a lot of good sensors to make this work.

- It's an interesting problem.


Well, what I fail to understand (but do not find unbelievable) is how people dare hand control of their car to just-released software. I'm sorry for all the incidents, I think Tesla is to blame here, and I would like electric to replace gas, so no cynicism intended, but it's just madness to me: the willingness of people to alpha-test something that may easily cost them their lives. It's called Autopilot, so what? While I admit that's bad naming, one should be responsible for one's own safety before anybody else is.

On the other hand, self-driving vehicles should be a different class of vehicle, properly regulated by governments. A road is a complex place to be as an AI-backed vehicle, especially considering the lack of the communication and coordination systems used for naval and aerial vehicles.

