
> It will be ridiculously difficult to get to Level 5 automation in cars. It's decades away, not years.

One could make a convincing argument that Waymo's vehicles are already at Level 5; where they probably struggle (I have no examples or data on this) is in inclement conditions (rain, snow, fog, etc.). That said, even in such conditions, they probably perform much better than a human driver.

For instance, most human drivers in such conditions - even when they struggle to see the road clearly - continue to drive anyway, mostly blind, instead of doing the right thing: pulling off to the side of the road and waiting. I bet that is exactly the behavior a Waymo vehicle performs when it struggles beyond a certain level.

In other words, Waymo's vehicle is likely better at determining when NOT to drive, and acting on that determination, instead of being stubborn and irrational in the face of evidence to the contrary.




> I agree Waymo is doing the best here, but Waymo is nowhere near a real self-driving car. They can't make unguarded left turns. Or drive in the dark, or the rain. Much less the snow, or in construction, or on poorly marked roads, or roads that aren't marked at all. Or city roads that aren't laser-mapped to the centimeter. Or roads with many pedestrians. Or cyclists.

I work for Google, opinions are my own.

I don't know the specifics myself but I trust what you're saying is true and agree those are real problems.

Nevertheless, one of the best things to do when you have a really hard problem is to simplify it. It's not as though we have to have fully self-driving cars before they're released.

I think what would make a lot more sense is some middle ground where we have certain sections of the road where self driving cars will be able to work well and only allow them there.


>Once it seemed to change its mind about the optimal route a few times over the course of 10 seconds, switching safely between two lanes back and forth a few times before committing. It used its turn signal fine, and the lanes were clear, so it wasn't a problem, but this isn't something humans do.

Oh, I disagree, this is something I observe and in fact do myself quite a lot. We all run through our minds which route might be the quickest depending on certain factors. The difference is that Waymo (or any tech) will base this on actual data (i.e., getting there quicker) vs humans, who will be more emotionally driven (i.e., frustration at the driver in front, wanting to take the more scenic route, being undecided about stopping at that cafe halfway).

I'm all for self-driving in highly populated areas. In a perfect world I'd like to see it integrated into all vehicles, so that when entering specific areas you are told your car will enter self-driving mode. Arguably this makes the most business sense for Waymo: license the underlying tech to manufacturers that already have the capacity to produce vehicles rather than compete with them.


> - Drives better than the median human driver in average weather conditions on half of the roads in the US (alternatively, for harder goals, replace with US+Europe or World)
> - Drives better than 95% of people in 95% of weather conditions on 95% of roads in the US
> - Drives better than any human in any weather conditions on any road in the US

> I'd argue we're almost certainly past the first milestone, and probably at or approaching the second, and the third will never be reached, but that doesn't matter.

I agree on the first and third milestones. Or kinda -- perhaps the first milestone has only been reached for California, or for US highways, so far.

I think you underestimate how difficult the second milestone is. The variation is tremendous, even merely within the US, to say nothing about the rest of the world.

> Fwiw, I'm fairly confident that waymo's vehicles can handle flagmen and

I'm sceptical. I think there will be so much ambiguity, both in the road situation and in the human interaction that we interpret with comparative ease, that the car may be stumped. And we've set the high hurdle of forgoing steering wheels, so we need some other, yet-to-be-devised way to give the car hints safely and efficiently.

> have more experience driving in the snow than I do.

I wasn't thinking about handling the car (I can easily believe that a computer does a better job of that), but rather about decreased visibility, no visible road markings, etc.

> And I'm sure they can handle aggressive or

Fairly easy I guess. Just be defensive, put a bit of distance between you and them.

> undisciplined drivers.

Much harder. Not from a safety perspective (again, just be defensive), but from an I'd-like-to-arrive-at-some-point perspective. Because then it again becomes about human interaction (being assertive without being an asshole, muddling your way through a blocked intersection through essentially non-verbal communication with the drivers around you), and computers will (inherently, perhaps) be worse at that than humans.

> Conversely, there are situations that AVs will handle much much better than humans already (a neighborhood with a lot of kids running around and shrubs occluding them for example).

Except, again, that humans will be able to judge body language or the rules of a game the children are playing, and anticipate instead of being confined to reacting (which the computer will indeed be better at, I agree).

Perhaps my pessimism also stems from the fact that the oh-so-smart driving assistants in my car are so shitty (read my other posts if you're interested) :-)

EDIT: Thanks for the interesting discussion so far.


> self-driving car developers operate in the most difficult conditions they can find.

Ehhhhhhh. While I agree that these aren't the easiest driving environments in the world, they are certainly mild climates. If I were in the business of proving that Waymo vehicles weren't ready for prime time, I'd be taking them up the Alcan in winter or to Florida during hurricane season, not Phoenix.


> If you've ever driven behind a Waymo vehicle, they're annoyingly strict when it comes to following posted speed limits, stopping for obstacles, and reacting/erring on the side of caution. It's an infuriating exercise in patience but I hope that it will pay off in the long term.

I've noticed, when I am riding with a good (human) driver who obeys pretty much all rules of the road and drives slower than the posted speed limit (at a speed they are comfortable driving), that there are always a few drivers behind us who will honk their horns, flash their lights, or even pass into the next lane just to pass back a little too close for comfort. I don't know if they are always horrible humans but that's beside the point.

How would a self-driving car react in such a situation? Would human drivers be better behaved around a Waymo vehicle because the Waymo car has a lot of cameras and sensors and can pretty much show a clear cut case of road rage?


> Self-driving cars are a case of when, not if; with so much being done in the field, progress has become a hot competition and we're slowly getting there.

My skepticism about self-driving vehicles in general stems from everything outside desert climates. I really want to see a self-driving taxi the day after a blizzard in Chicago (or elsewhere in the upper Midwest... or even NYC when there's a good storm), where on some streets two lanes in each direction become one, and intersections can become "well, I'm not stopping because the car isn't stopping; thankfully the other drivers understand this and aren't asserting right of way when things are slippery". Some roads are closed and the smart drivers are happily driving 20 under the speed limit on the highway behind a snow plow.

Until then, self-driving really strikes me as more of a California or summertime thing.

They say they're working on it ( https://www.bloomberg.com/news/articles/2019-04-23/alphabet-... )

> Snowy conditions are a serious challenge for the laser-based Lidar and other sensors that self-driving cars use to see objects in their path. Waymo said it’ll start working to overcome weather issues in Detroit’s notoriously tough winter months.

but I'll believe it when I see it.


> I agree with the premise of the article. Humans are remarkably good at driving, and making a computer better will be hard.

The economic incentives for autonomous driving are interesting, but for just saving lives, collision avoidance is where to focus. If you have tech as good as Waymo's, integrating it into a car so that it intervenes when it's highly confident the driver doesn't know what they're doing, and highly confident it knows the right thing to do, would be huge.


> Until Waymo's cars are better than most humans in every single situation, they won't be able to win over the public perception war.

The current situation is basically the opposite: Waymo's cars are better than humans in almost zero situations. It's hard to gain my trust when your car can barely drive in a drizzle.


> The problem is where people set up the goal posts and have a double standard for human vs. AI driven cars. AI cars, for some reason, are expected to produce an error rate of 0 right off the bat, and every single incident gets used to feed an existing (and often legitimate) narrative about tech bro laziness and irresponsibility. Meanwhile 1.25 million people die in human caused car accidents every year and somewhere around an additional 30 million are injured. But that’s “just the world we live in.” Or that’s “your choice to take a risk when you choose to drive a car.” Or whatever.

If we set up the goalposts fairly, Waymo wouldn't be anywhere near receiving permissions for full autonomy until we had statistically significant evidence the latest build could reduce fatalities to human levels which are in the region of 1 per 100 million miles (a rate which includes all the people who aren't allowed on the road...). When they've only driven a few million miles, the fatal error rate - including potentially fatal collisions avoided by safety drivers - really should be zero.

We haven't got any evidence that fatal errors in AV technology approach that level of rarity, and we will never have it for individual builds; the best [limited] evidence we've got refuses to reject the null that, despite being shielded from hard AI problems like left turns, supported by safety drivers, and driven at low speeds, AVs are an order of magnitude more deadly than human-driven cars...
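The 1-per-100-million-miles baseline implies a concrete mileage bar. A rough sketch of the arithmetic (assuming fatalities arrive as a Poisson process, which is an idealization, with a ballpark US human rate):

```python
import math

# How many fatality-free miles would an AV fleet need before we could
# reject "the AV is no safer than a human" at the 5% level?
HUMAN_FATALITY_RATE = 1 / 100_000_000  # ~1 death per 100M vehicle miles

def miles_needed(alpha=0.05, rate=HUMAN_FATALITY_RATE):
    """Zero-fatality miles M solving exp(-rate * M) = alpha."""
    return -math.log(alpha) / rate

print(f"{miles_needed():.2e}")  # roughly 3e8 miles
```

By this yardstick a fleet with only a few million miles is two orders of magnitude short of being able to demonstrate statistical parity, which is the point above.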


>which human drivers often don't do (and may not expect other vehicles to do).

This presents an interesting problem. It's obviously easier to program something to follow the law, given that the law is unambiguous. But the question is: what are we optimizing for? The fewest crashes? That's probably the right thing to do, given that crashes are bad. In that case, isn't it better to do what people would expect other cars to do? But is Waymo constrained by the fact that if a self-driving car is programmed to do something ticketable, they could be held liable? Probably.
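The law-vs-expectations tension can be framed as a toy cost model. Everything below is invented for illustration (the weights, the quadratic forms, and the Solomon-curve-style assumption that crash risk grows with deviation from mean traffic speed); it is not anyone's actual planner:

```python
def expected_cost(speed, traffic_mean, limit, crash_w=1.0, ticket_w=0.5):
    # Crash term: grows with deviation from surrounding traffic speed.
    crash = crash_w * (speed - traffic_mean) ** 2
    # Legal term: penalty only when over the posted limit.
    legal = ticket_w * max(0.0, speed - limit) ** 2
    return crash + legal

limit, traffic_mean = 65, 72  # a highway where traffic runs above the limit
speeds = range(55, 81)

# "Never break the law": only consider speeds at or below the limit.
strictly_legal = min((v for v in speeds if v <= limit),
                     key=lambda v: expected_cost(v, traffic_mean, limit))
# "Balance both risks": exceeding the limit is allowed, at a legal cost.
flow_matching = min(speeds, key=lambda v: expected_cost(v, traffic_mean, limit))

print(strictly_legal, flow_matching)  # 65 70
```

Under these made-up weights, the liability-averse policy pins itself at the limit while the crash-minimizing one drifts toward the traffic flow, which is exactly the trade-off being asked about.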

I think the moral side of self-driving cars is as hard a problem as the technical side, or harder, and we haven't made the decisions as a society that we need to. If the government doesn't step up soon to lay out how this is going to work, the corporations will. And guess what: they'll choose whatever costs them the least amount of money, not what's best for society.


> I would wager they are far from the median human driver, and absolutely nowhere close to the top 10% of human drivers.

I think you would likely lose that wager. In the information Waymo has published about their safety methodology, they benchmark themselves against an always-attentive driver (this already rules out most accidents), which they still outperform by a large margin. Even a driver who is never distracted doesn't have constant 360 degree vision or near-instant reaction times.
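The reaction-time part of that advantage is easy to quantify with standard stopping-distance arithmetic. The deceleration and reaction-time figures below are generic textbook-style assumptions, not Waymo data:

```python
# Total stopping distance = distance covered during the reaction delay
# plus braking distance v^2 / (2a). Assumes dry-road deceleration of
# ~7 m/s^2, ~1.5 s reaction for an attentive human, ~0.2 s for sensors.
def stopping_distance(speed_mps, reaction_s, decel=7.0):
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel)

v = 30.0  # m/s, roughly 67 mph
human = stopping_distance(v, reaction_s=1.5)
machine = stopping_distance(v, reaction_s=0.2)
print(f"human: {human:.1f} m, machine: {machine:.1f} m")
```

At highway speed the reaction delay alone is worth several car lengths, before any difference in braking or field of view is counted.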


> will eventually reach human level safety

That's a very strong statement with not much to back it up.

They drive fine on straight, wide, sunny, southern US roads (and even there not always); they struggle even in US cities; put them in any European country and it's game over. Mountain roads in Switzerland during a snowstorm? Foggy, twisty roads in the woods? These won't be solved easily; even Waymo's CEO acknowledged that fully autonomous cars won't be able to drive everywhere.


>That being said, it’s exceptionally clear that all the responsibility is on you, the driver

Virtually every study ever done on human-machine interaction shows that users inevitably lose reaction time and attention when they engage with half-automated systems, because the constant context switching creates serious issues.

Waymo did studies on this in the early days and very quickly came to the conclusion that it's full autonomy or nothing. Comparisons to human performance are nonsensical because machines don't operate like human beings. If factory-floor robots had the error rate of a human being, you'd find a lot of limbs on the floor. When we interact with autonomous systems that a human can never predict, precision needs to far exceed that of a human being for the cooperation to work. A two-ton black box moving at speeds that kill people is not something any user can responsibly engage with at all.


> If you look in to why Waymo started “full self driving” in Arizona and, now, San Francisco it’s because the cars cannot even drive in light rain and the entire fleet is garaged with even mild inclement weather. We are still lightyears away from actual self-driving cars that can manage variable weather and even slightly-unpredictable terrain.

It rains in SF and there’s even fog… The fog’s name is Karl by the way. It’s also an old coastal city that doesn’t have a sane layout that you might expect from a test area like Arizona. SF is not an easy challenge and there are plenty of self-driving startups that have tried there and failed.


>> Their self-driving technology seems far more advanced, I think it would be unfair to punish all the players for the bad actions of a few.

It's very unlikely that Uber and Waymo have radically different technology. The two companies probably differ only in business and testing practices, i.e. the environments and the conditions in which they are willing to test their cars.

Waymo simply seems to be more conscious about safety and therefore use its systems well within the safety limits of the technology. But, both companies have access to the state of the art (in the form of highly paid specialists) and you can't really expect one to be too far ahead of the other in terms of capabilities.

Edit: I'm guessing all this by comparison with results in machine learning and machine vision generally. With the exception of a big leap forward with CNNs in 2014, most improvements you ever see in the literature are on the order of a couple of percentage points at most, so most systems differ by a very small margin. The same should go for industry entities using this technology in self-driving cars.


>But the ability of cars to analyze situations on the road and respond has barely shown improvement since the beginning of 2016. In key categories, like “incorrect behavior prediction” and “unwanted maneuver of the vehicle,” Waymo vehicles actually did worse in 2017 than in 2016.

Ouch. That doesn't bode well at all for the future of driverless cars.

Edit: -1 points: don't question the Silicon Valley hive mind, people ;)


> If it could scale up and still operate the same way...

Once they scale up, they will encounter more other Waymo cars on the roads, eliminating a lot of the accidents where someone else crashed into a Waymo or made an unpredictable move. So then they could afford to drive less conservatively.

We are pretty sure that our roads would be much safer if all cars were self-driving, but that cannot happen overnight and there will be an interim phase when robo-cars have to mix with human drivers, which is the hard engineering challenge.
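That interim-mix effect can be sketched with a toy random-pairing model. All per-encounter rates below are invented numbers, chosen only so that AV-involved encounters are safer:

```python
# If encounters pair two vehicles at random and per-encounter crash rates
# are r_hh (both human), r_ha (mixed), r_aa (both AV), the fleet-wide rate
# at AV penetration f is a quadratic mix of the three.
def fleet_crash_rate(f, r_hh=1.0, r_ha=0.5, r_aa=0.05):
    h = 1 - f
    return h * h * r_hh + 2 * h * f * r_ha + f * f * r_aa

for f in (0.0, 0.5, 1.0):
    print(f, fleet_crash_rate(f))
```

The model's midpoint is dominated by the mixed human/AV term, which is why the transition period, not the end state, is the hard engineering (and safety) problem.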


> That's the current status of self driving cars.

Do you work in the field? I'm a bit skeptical about that. Waymo's self-driving car seems to do pretty well with things that aren't encoded in maps like road works and unexpected obstacles. I don't see why red lights would be exceptionally challenging to detect.


> This is what the self-driving cars industry insists on, but has nowhere near been proven

Because machines have orders of magnitude fewer failure modes than humans, but with greater efficiency. It's why so much human labour has been automated. There's little reason to think driving will be any different.

You can insist all you like that the existing evidence is under "ideal conditions", but a) that's how humans pass their driving tests too, and b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.

It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.

