I'm not sure what you're getting at. As that article notes, in 2021 accidents only doubled while autonomously driven miles more than tripled.
Waymo has been scaling their level of testing with their level of safety. What has changed is that Waymo now thinks it is safe enough to start more widespread testing in SF without a safety driver.
This seems to be a strong counter to all those who were claiming that level 4 self-driving cars would be limited to flat, dry climates for the foreseeable future.
You are splitting hairs. Tens of thousands of rides per week. Autonomously. Those are the key words. They also brag about operating 24/7, night and day, and in foggy conditions. I would suggest actually reading the short press release; they make quite a few interesting claims in it. Anyway, anyone in the SF area will probably be reporting all the wonderful and uneventful rides they are enjoying with Waymo soon.
As for people who recently got their driver's license: I'm pretty sure that demographic is over-represented in the statistics on unsafe driving, traffic fatalities, etc. Insurers and rental car agencies have policies that reflect those cold, hard statistics. It will be interesting to see what they do when level 5 starts happening (probably sooner rather than later). My guess is that they'll charge people extra for the privilege of taking control of the car, since those drivers are far more likely to damage the vehicle and otherwise cause trouble.
And obviously one of the points Waymo is trying to make with their press release is that they are already safer. It's a press release, of course, and not the same as cold, hard facts. And you make a fair point about self-reporting. But it suggests the obvious notion that computers are getting pretty good at not crashing into stuff (or people). I find that entirely unsurprising, BTW. It does not seem like a particularly hard problem.
"Waymo and Cruise self-driving cars took over San Francisco streets at record levels in 2021 — so did collisions with other cars, scooters, and bikes" [1]
"...Many of the accidents, which the companies are required to report to the California Department of Motor Vehicles, occurred while the vehicles were operating in manual mode, with a safety driver in control.
...But according to an analysis by Insider, a majority of the 98 reported accidents in 2021 occurred while the vehicles were in autonomous mode, or within seconds after the autonomous mode technology had been switched off..."
Actually, I think Waymo is too cautious. Tesla has thousands of customers effectively acting as test drivers for their autonomous software, which lets them collect data at a much faster rate and discover more 'edge cases' than a more cautious testing regime would.
33,000 people die each year on US roads and self driving cars offer the chance to dramatically reduce that figure over time. The more aggressively we can test self-driving software now, the faster the software can be improved.
So as long as the accident rate for autonomous Teslas is initially no more than for human drivers, the Tesla approach will lead to fewer deaths in the long run.
Waymo has millions of driverless miles at this point. Tesla has zero. I'm not sure how Waymo's safety-driver miles are relevant here.
> Except, you know, millions of miles per day of self-driving in situations that Waymo isn't even planning to attempt for years.
And what exactly have they got to show for it? All that data collection and they can't handle the easiest of situations in San Francisco. You don't need nationwide data to avoid running stop signs or swerving into oncoming traffic.
9.1 crashes per million vehicle miles driven for driverless vehicles
4.2 crashes per million vehicle miles driven for conventional vehicles
But to quote your link:
> The second is differences in driving conditions and/or vehicle characteristics. Public human crash data includes all road types, like freeways, where the Waymo Driver currently only operates with an autonomous specialist behind the wheel, as well as various vehicle types from commercial heavy vehicles to passenger and motorcycles.
The post I was responding to claimed freeway driving, which always has a safety driver right now.
As speeds increase, crash outcomes get worse. And the cars can't simply stop in the middle of the road and wait for a remote operator without putting the occupants at serious risk.
There is a lot more work for Waymo to do before they can drive in more conditions and locations.
Snow and heavy rain, for example.
With almost half a million school buses providing transportation daily, school buses eclipse in a single month the 7M+ trips Waymo is claiming in total.
> In 2018, "everyone" was saying perfect self-driving was only a few years away.
Yes. Having been involved in the DARPA Grand Challenge 20 years ago, I didn't think it would take this long. I thought we'd at least have automated rental car pick-up and return at airports by now. At least lots of automated shuttle buses on campuses and at airports.
Waymo is getting close.[1] It's worth looking at the accident reports and news stories with complaints about Waymo. People complain about Waymo cars clogging their street, which is happening because SF marked a few nearby streets as "slow streets", and Waymo's planner is optimizing routes around them. There's a recent news story where a Waymo car reached a construction trench in a street and stopped. Didn't go into the trench. Someone had to intervene remotely to back it out of the construction area.
There was a collision on Geary Boulevard where a Waymo vehicle detected a speeding car during a left turn and stopped, rather than accelerating to escape. That's their worst accident so far, and the other driver was at fault. There are no stories about Waymo cars making sudden lane changes and causing multi-car collisions.
[1] https://waymo.com/sf/
To put Waymo's 7.7/million accident rate in perspective, it would mean an average driver, who drives 13,500 miles in a year, has one accident in 10 years.
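A quick back-of-the-envelope check of that claim (just a sketch; the 7.7 crashes per million miles and 13,500 miles/year figures come from this thread, not official statistics):

    # Converting a per-mile crash rate into per-driver terms.
    crashes_per_mile = 7.7 / 1_000_000
    miles_per_year = 13_500

    crashes_per_year = crashes_per_mile * miles_per_year  # ~0.104
    print(1 / crashes_per_year)  # ~9.6 years per crash

So the arithmetic checks out: roughly one crash per decade for an average driver at that rate.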
In other words, I don't think this is a concern at all. It's similar to humans already and will only get better.
I think economics and scaling are the main challenges and unknowns for Waymo. I hope they will soon finish phase 1 (perfect self-driving in a small area) and start phase 2 (expand as fast as possible).
They've been operating without safety drivers in Phoenix, but not in SF. AFAIK, every Waymo car in SF has had a safety driver in it. Cruise has been testing driverless rides for the past few months under those time and speed restrictions.
> it's in the context of disengaging to let safety driver take over, in carefully controlled conditions.
There is no safety driver. This study is discussing the fully driverless level 4 autonomous vehicles that Waymo operates in California and Arizona.
This particular study shows that those vehicles had accident rates 3-9x lower than human drivers, even after controlling for the limitations on where they operate (such as not operating on highways).
I can guarantee you there are more accidents per 1,000,000 miles driven in dense urban areas, and that's where Waymo has been driving as well. Last I checked they're not even operating on freeways.
Waymo does have safety drivers; they're just driving the vehicle remotely when it's in certain areas instead of sitting in it. So it isn't "full" self-driving either.
> Tesla, who use cameras only, have not demonstrated full self driving
There are entire YouTube channels with hours of continuous video showing Teslas driving around SF, and other parts of California, with no human intervention.
Waymo has driven 20m+ autonomous miles btw. [0] The 1 million mile number appears to be rides with external passengers, but the first sources I found lead to dead links, so who knows.
Anyways, quite a difference. I’m not sure where the threshold should be set for proving safety. I recall the number for provable safety being something like 20bn+ miles, so no one is close if it’s set there.
They just announced 500k driverless miles in Phoenix. Over several years that sounds laughably small to me. For comparison, motor vehicle fatalities in the US are around 1.5 per 100 million miles. We're several orders of magnitude away from even being able to convincingly demonstrate that these reduce (or at least do not increase) fatalities in the real world.
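To make "several orders of magnitude" concrete, here's a rough sketch of how many fatality-free miles a fleet would need before we could claim, at 95% confidence, that it's no deadlier than humans. The Poisson model is my assumption, not anything Waymo has published:

    import math

    # Human baseline: ~1.5 fatalities per 100 million miles.
    human_rate = 1.5 / 100_000_000  # fatalities per mile

    # With zero fatalities observed over M miles, the probability of
    # seeing that under the human rate is exp(-human_rate * M).
    # Demand that probability fall below 5%:
    miles_needed = math.log(1 / 0.05) / human_rate
    print(f"{miles_needed:,.0f}")  # ~200,000,000 miles

    # 500,000 driverless miles is roughly 400x short of that.
    print(f"{miles_needed / 500_000:.0f}x short")

And that's only for matching human safety with zero fatalities observed; demonstrating a meaningful improvement takes far more miles, which is where estimates like the 20bn figure mentioned upthread come from.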
I'm glad that Cruise beat Waymo to driverless in SF because they are providing the swift kick in the butt that Waymo seems to require to actually make progress in a reasonable amount of time.
The really interesting thing (to me) is that Waymo operated self-driving cars on public streets in Arizona without safety drivers during November 2017 but stopped within a month. (google waymo "november 7 2017") Why did they stop? I'm sure it was partially because they realized there were some situations it wasn't handling properly, but my conjecture is mostly that expectations have changed.
I bet that Waymo cars are massively safer than human drivers in many situations, and that they're not as safe in a few others, and that there are a bunch of situations they don't handle well and "freeze up" or otherwise behave unpredictably. Waymo has probably realized that the bar isn't "as good as a human" but "significantly better than humans".
The key metric is probably "obscure incidents" per mile driven, probably classified manually into various levels of danger. Once the count of incidents that lead to disaster statistically reaches zero, it will definitely roll out en masse without the need for safety drivers.
My guess is that they know how many miles they have to drive in order to reach that number, and it's a whole lot. Statistics and math stuff but you can probably pin it down to the month or quarter based on trends. Either that, or it's about driving every road in the city with all sorts of weather / traffic / pedestrian conditions until there's no issue. This isn't generalized AI driving (L5) but it's a much more logical approach to getting autonomous driving coverage where it's the most valuable.
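If you want to play with that "pin it down to the month or quarter" idea, the projection is simple compounding. All the numbers below are invented for illustration, not Waymo's actual figures:

    # Months until cumulative mileage crosses a target, given a starting
    # monthly mileage and a month-over-month growth rate (toy model).
    def months_to_target(monthly_miles, growth, target):
        total, months = 0.0, 0
        while total < target:
            total += monthly_miles
            monthly_miles *= 1 + growth
            months += 1
        return months

    # e.g. 1M miles/month growing 5%/month, targeting 100M total miles:
    print(months_to_target(1_000_000, 0.05, 100_000_000))  # 37 months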
My guess is that each city will involve a safety driver rollout until they have enough data to know the incident rate is zero. There might be a lot of variance between cities - map data, weather conditions, customs, etc. Then remove the safety drivers.
I'm sure they are also experimenting with disaster/safety protocols while they do the rollout.
My prediction is that Waymo will be a mainstream option within the next 5 years.
According to the state of California, Waymo had less than 1.5 million miles driven in 2019[1]. That is basically nothing when it comes to proving a track record of safety. The fatality rate in California is around 1 per 100 million miles. Waymo's cars could be twice as deadly as humans and there would still be only a 3% chance of killing someone in 1.5 million miles.
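For anyone who wants to check that 3% figure, it falls out of a simple Poisson model (my reconstruction of the arithmetic, not the commenter's stated method):

    import math

    rate = 2 / 100_000_000  # "twice as deadly": 2 fatalities per 100M miles
    miles = 1_500_000       # Waymo's reported 2019 California mileage

    expected = rate * miles            # 0.03 expected fatalities
    print(1 - math.exp(-expected))     # ~0.0296, i.e. ~3%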
Isn't the author already wrong? There are Waymo autonomous cars operating right now on public roads in Chandler, Arizona without safety drivers. Sure, those are very carefully mapped and selected routes, but they are not closed routes; they have to deal with anything that can happen on a public road. As any engineer knows, the last 10% is always harder than expected, but scaling from 0.1% to 90% coverage seems fairly straightforward IMO.
This is an important step for testing, and I'm surprised that this is being spun as a move towards commercialization.
As of right now, Waymo has only been doing what could metaphorically be called unit testing. That is, they test the car's behavior in very controlled but unrealistic environments, looking for very specific responses. The accident rate they've incurred is likely ridiculously skewed: they've been driving in good weather, on meaningless routes (chosen by route features, not by destination), at relatively safe times of day, at slow speeds, and they've been doing it extremely cautiously with engineers ready to take over at a moment's notice.
This is exactly what they should have been doing, but politically it is misleading. Most human drivers, given those same constraints, would also do extremely well and way better than average. They've done well, but we have little basis for comparison with the average driver.
This is the first step towards integration testing. They get to see how the car's behavior integrates across various scenarios that are much closer to real life. They are driving on actual routes that real people travel on...routes that aren't chosen in order to test a specific behavior.
Accident rates are going to go up. That's a good thing...it's a move towards the things humans find more difficult too. We should, however, expect slowing progress towards level 4 autonomy. This is typical of system capability growth: exponential in the beginning, asymptotic near the end. People who are rushing this are out of line; it's akin to immediately commercializing lab-rat successes. Give them time.
As of late 2017, Waymo/Google reported having driven a total of 4 million miles on roads. Given that Waymo is one of the biggest players, it's hard to see how all of autonomous cars have driven 100 million miles at this point.
Never mind that the initial rollouts almost always involve locales in which driving conditions are relatively safe (e.g. nice weather and infrastructure). Never mind that the vast majority of autonomous testing so far has involved the presence of a human driver, and that there have been hundreds of disengagements.
Humans may be terrible at driving, but the stats are far from being in favor of autonomous vehicles. Which is to be expected given the early stage of the tech.