The really interesting thing (to me) is that Waymo operated self-driving cars on public streets in Arizona without safety drivers during November 2017 but stopped within a month. (google waymo "november 7 2017") Why did they stop? I'm sure it was partially because they realized there were some situations it wasn't handling properly, but my conjecture is mostly that expectations have changed.
I bet that Waymo cars are massively safer than human drivers in many situations, and that they're not as safe in a few others, and that there are a bunch of situations they don't handle well and "freeze up" or otherwise behave unpredictably. Waymo has probably realized that the bar isn't "as good as a human" but "significantly better than humans".
"At first, Waymo-trained drivers will supervise our Waymo One vehicles."
That's about where they've been for years now - almost autonomous, with a safety driver. Back in November, Waymo started sending some cars out without a safety driver. But they backed off on that.[1]
I'm disappointed. I thought Waymo was ready to launch a real self-driving system. But no, not yet.
Waymo hasn't been in any serious accidents yet because they have human safety drivers and because they don't put their vehicles in situations where they can cause a serious accident even with a safety driver.
In that sense, expecting that Waymo, because it's doing better than the rest, will get self-driving cars into production is a bit like expecting that a person who learned to walk along a line drawn on the floor will manage tightrope walking because they're more careful than all those other idiots who actually tried to walk on a rope.
The thing to keep in mind is that self-driving is hard, much harder than tightrope walking. It's so hard that it remains unsolved, and no amount of careful application of non-solutions will produce a solution.
I've seen Waymo's cars driving around plenty of times in Arizona. The ones without a human safety driver are so rare I haven't seen one in person yet. If they were actually confident they were close to FSD they would have more cars without human safety drivers.
If they're really safer than human drivers, like the Google-funded studies claim they are, then this seems like a positive rather than a negative. Perhaps we humans should be slowing down and driving more carefully?
But I'm not sure we can trust these studies. I'd really like to see a completely independent evaluation of how the safety of Waymo cars compares to human drivers, and to the safety of vehicles from other companies like Zoox.
Nobody has self-driving cars, not even Waymo. Every "self-driving car" has "disengagements" (a euphemism for "it stopped working") every few thousand miles. That's simply unacceptable and needs to be sorted out. If a human driver performed a panic stop for no reason, or hit a pedestrian or another car, every 3 months or so, we would take away their license.
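The "every 3 months" figure follows from rough arithmetic; a minimal sketch, assuming a disengagement roughly every few thousand miles and a typical driver covering about 13,500 miles a year (both numbers are illustrative assumptions, not from the comment):

```python
# Hedged sketch: rough arithmetic behind "every 3 months or so".
ANNUAL_MILES = 13_500            # assumed average annual mileage for a human driver
MILES_PER_DISENGAGEMENT = 3_000  # assumed "every few thousand miles"

miles_per_month = ANNUAL_MILES / 12
months_between_failures = MILES_PER_DISENGAGEMENT / miles_per_month

print(f"{months_between_failures:.1f} months between failures")  # -> 2.7
```

With those assumptions, a failure every ~3,000 miles maps to one every two to three months of typical driving, which is the scale the comment is pointing at.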
Waymo is doing better than almost everyone else, a testament to their skill and rigor, but they are still a long way from truly autonomous vehicles. The temptation is to say "well, they're so close, I'm sure they'll finish up the niggling details", but the reality is that what they're failing at includes some of the hardest remaining problems, which will probably require at least as much work as they've already put in, if not significantly more. I think we'll end up with self-driving cars eventually, but probably not within the next 5-10 years; more likely it'll take that long or longer for the technology to mature, and then another decade plus before it sees practical application.
Making the car more aggressive is not rocket science. The trouble is that this is where the safety benefit of self-driving cars will start to diminish. If you watch Chris Urmson's 2016 presentation at SXSW, he mentions that the only accident Waymo had on a public road was when their car tried to push in front of a bus.
Interestingly enough, Waymo (back then called Chauffeur) attempted the Tesla approach, and threw it out once they saw how people behave when the car "drives itself".
W.r.t. the "self-driving winter", I think Waymo is aiming to protect themselves from that situation by developing close relationships with regulators, highlighting the "we're not them" angle, and sticking to their safety-first approach.
Whether they're far better than human drivers depends on the system. In my personal opinion, based on the sensor video Waymo released a few weeks ago, their self-driving tech is far more focused on safety than Uber's, and their vehicles are likely far safer than a human driver.
Waymo have self-driving cars without safety drivers going around Phoenix, so they at least kind of work. I don't know if that will roll out globally in the near term, but it might.
So Waymo is pretty decent. Generally, if Waymo were equal to human performance, the number of times it caused an accident would be similar to the number of times other drivers caused the accidents it was involved in, but that isn't the case: human drivers are hitting Waymo cars far more often.
"First, other vehicles ran into Waymos 28 times, compared to just four times a Waymo ran into another vehicle (and Waymo says its vehicle got cut off in two of these cases). Second, Waymo was only involved in three or four “serious” crashes, and none of them appear to have been Waymo’s fault."
This suggests that if we were to replace all humans with Waymo, at least in the nice temperate snow-less/ice-less cities they are currently driving in, accidents would possibly be cut by 4x.
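A minimal sketch of the arithmetic behind that "cut by 4x" estimate, using only the crash counts quoted above and assuming (optimistically) that the at-fault rates would carry over if every vehicle drove like a Waymo:

```python
# Crash counts from the quoted report.
human_caused = 28  # other vehicles ran into Waymos
waymo_caused = 4   # a Waymo ran into another vehicle

total_today = human_caused + waymo_caused  # 32 crashes with mixed traffic

# If the human parties drove at Waymo's at-fault rate, the 28
# human-caused crashes would shrink toward Waymo's 4.
total_all_waymo = waymo_caused + waymo_caused  # 8 crashes

print(total_today / total_all_waymo)  # -> 4.0
```

This is back-of-the-envelope only; it ignores exposure (miles driven per vehicle), the restricted operating domain, and the small sample size.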
I have a friend who interviewed at Waymo to be a safety driver. He told me they have a human at command central monitoring the car and the safety driver, talking back and forth. I see no reason why they would remove the human monitor after no longer using safety drivers. For liability reasons at least. My point is I'm not sure Waymo cars can actually be considered self-driving since they have a remote human standing by to take over.
They've been operating without safety drivers in Phoenix, but not in SF. AFAIK, every Waymo car in SF has had a safety driver in it. Cruise has been testing driverless rides for the past few months within that timeframe and under that speed limit.
That's not actually a blocking problem, as you have the same issues with human drivers today. At the end of the day, just like literally everything else in life, self-driving tech is a statistical process with corresponding costs.
The bar for success here isn't perfection. Rather, it's whether or not self-driving vehicles are cheaper than human-driven vehicles. Part of the cost of a self-driving vehicle hinges on how often the vehicles end up in failure states necessitating remote human intervention and/or insurance payouts. It's fine if they sometimes fail, they just have to do so infrequently enough to be cheaper. And if Waymo is to be trusted, they already appear to be safer.
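The break-even framing above can be sketched as an expected-cost-per-mile comparison. All numbers below are made up for illustration; only the structure (operating cost plus expected failure cost) comes from the comment:

```python
# Hedged sketch: self-driving wins when its expected per-mile cost,
# including failures, undercuts a human-driven vehicle's.
def cost_per_mile(base_cost, failures_per_mile, cost_per_failure):
    """Expected cost per mile = operating cost + expected failure cost."""
    return base_cost + failures_per_mile * cost_per_failure

# Hypothetical numbers, chosen only to make the shape of the trade-off visible.
human = cost_per_mile(base_cost=1.50, failures_per_mile=1 / 500_000,
                      cost_per_failure=20_000)  # driver wages dominate
waymo = cost_per_mile(base_cost=0.60, failures_per_mile=1 / 100_000,
                      cost_per_failure=30_000)  # remote intervention / payouts

print(waymo < human)  # -> True
```

The point of the sketch: a higher failure rate (and even a higher cost per failure) can still come out ahead if the base operating cost is low enough, which is why "sometimes fails" isn't automatically disqualifying.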
Though Waymo et al. aren't really competing with unusually good drivers but with average drivers, and they are probably safer than those, on the roads they have been trained for.
I agree, and I wouldn't hold them against Waymo, but I think when you are developing a self-driving car, you should stop analysing crashes like collisions between two humans, with fault and blame, and start looking at the whole thing as a system.
If Waymos were having a seriously increased rate of non-fault crashes, that would still be a safety issue, even if every crash was ultimately a human's fault.
> They're not completely autonomous, but a remote driver can take control if there's an issue.
Waymo says they don't try to actually drive with a remote human driver, because of lag. The control center can give hints, presumably along the lines of "turn around and take an alternate route", "OK to go around obstacle", or "park and wait".
As they get more experience with these systems, they may be allowed a bit more autonomy after signal loss, enough to creep to a safe stopping position.
I'm not sure what you are getting at. As that article notes, in 2021 accidents only doubled while the autonomously driven miles more than tripled.
Waymo has been scaling their level of testing with their level of safety. What has changed is that Waymo think they are safe enough to start doing more widespread testing in SF without a safety driver.
This seems to be a strong counter to all those who were claiming that level 4 self-driving cars would be limited to flat, dry climates for the foreseeable future.