Transportation

In Contrast To Cruise, Waymo Is Touting Its Vehicles' Safety In New Report (sfist.com) 55

Waymo has a new peer-reviewed study (PDF) to share that shows how safe its autonomous cars are compared to cars driven by humans. SFist reports: As the Chronicle notes, the study covers the 1.76 million driverless miles that Waymo's cars have registered in San Francisco so far, along with about 5.4 million miles registered elsewhere. It compares data about vehicle crashes of all kinds, and finds that Waymo vehicles were involved in crashes resulting in injury or property damage far less often than human-driven cars. In fact, the "human benchmark" -- which is what Waymo is using to refer to human averages for various driving foibles -- is 5.55 crashes per 1 million miles. And the Waymo robot benchmark is just 0.6 crashes per 1 million miles. The overall figure for crash rates found Waymo's to be 6.7 times lower (0.41 incidents per 1 million miles) than the rate of humans (2.78 per million). This included data from Phoenix, San Francisco, and Los Angeles.
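The summary's headline figures can be sanity-checked with a couple of lines of Python (the two rates are taken from the article; the script and variable names are mine):

```python
# Rates quoted in the article, both in crashes per million miles.
waymo_rate = 0.41   # Waymo's overall incident rate
human_rate = 2.78   # the corresponding human rate

ratio = human_rate / waymo_rate
print(f"Human rate is {ratio:.1f}x the Waymo rate")  # prints "Human rate is 6.8x the Waymo rate"
```

The exact quotient is about 6.78, so the article's "6.7 times lower" appears to be a slightly conservative rounding of the same two numbers.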

The report's "Conclusions" section is less than definitive in its findings, noting that the data of police-reported incidents across various jurisdictions may not be consistent or "apples-to-apples." "The benchmark rates themselves... varied considerably between locations and within the same location," the report's authors say. "This raises questions whether the benchmark data sources have comparable reporting thresholds (surveillance bias) or if other factors that were not controlled for in the benchmarks (time of day, mix of driving) is affecting the benchmark rates."

Still, the report, one of several that Alphabet-owned Waymo has commissioned in recent months, is convincingly thorough and academic in its approach, and seems to be great news for the company as it hopes to scale up -- starting with the enormous LA market. Waymo, like Cruise previously, has sought to convince a skeptical public that driverless vehicles are, in fact, safer than humans. And this is another step toward doing so -- even if people are going to be naturally wary of sharing the road with too many robots.

This discussion has been archived. No new comments can be posted.

  • ... didn't we just have another story about Waymo admitting that half the time their cars were actually just remotely being driven by a real human?

    • If that's true, it's even more remarkable -- it would mean they've found a way to make it safer to drive a car over a cell network connection than by actually being in the car.

      • ...either that or statistics can be skewed in the same way that lies and damned lies can be.

      • by TWX ( 665546 )

        They're probably looking at the situation through the car's cameras and are making decisions that get passed to the car to perform, not directly activating the steering and accelerator themselves.

        I expect that it's more like driving the Mars Rover than like driving Spy Hunter or Carmageddon.

      • They found it safer to drive an automated car on the safest routes than it is for the worst human to drive the most dangerous car on the worst road. That's not really much of a feat.
  • "...even if people are going to be naturally wary of sharing the road with too many robots."

    Particularly when those robots are controlled by software developed on a mile-high stack of shit nobody understands, by an enormous team of "developers" many of whom do not even know what "real time" means, defended by a horde of a thousand litigious attorneys with endless financing, and operated by a corporation with a CEO with the personal integrity of Elon Musk.

    • "developers" many of which do not even know what "real time" means,

      That's actually pretty sad. I never thought we'd have cars being controlled by such shit code.

    • I'd settle for low latency over real-time. But let's assume very few people appreciate the importance of executing methods at a scheduled time. With many-core processors, dedicated cores should hopefully become easier. Too bad Linux scales so horribly for many cores.
    • Sorry, but you are just displaying your out-of-date ignorant, backward thinking. Here in the 21st century it is a 1.6km high stack of shit.

  • Didn't adjust for (Score:5, Informative)

    by phantomfive ( 622387 ) on Wednesday December 20, 2023 @10:00PM (#64094947) Journal
    In the paper, they didn't adjust for drunk drivers. They also didn't let their own cars drive in bad weather. They didn't adjust for density of traffic on routes, and the paper suggests that Waymo cars drive on less traffic-dense routes (reports from users suggest that the cars take circuitous routes to avoid dangerous intersections and roads. I haven't been able to verify this, but the paper admits it might be true). Also, the paper doesn't account for weird incidents like randomly blocking traffic.

    The paper doesn't conclude that self-driving cars are safer than humans. Until we can be confident that they are safe, they should always have a safety driver.
    • They didn't adjust for the front end to fall off...
    • What exactly do you mean by "doesn't account for"? Like, HOW do you want the numbers adjusted to reflect "drunk drivers" or "weird incidents"? It's the raw numbers from over 7 million miles driven; there were undoubtedly lots of drunk drivers and weird incidents along the way. How exactly are they supposed to adjust their report to satisfy you?
      The real caveat with driverless AI car accident stats is that the cars drive only in certain areas, under specific conditions. WITHIN THOSE PARAMETERS AI cars are *inc

      • by jsonn ( 792303 )
        "99% of the accidents are caused by others" doesn't say much about how well the control program handles dangerous situations; it just says that it is good at avoiding critical situations in the first place. That's an important difference. The baseline when it comes to safe operation of a car should not include humans who are not legally permitted to operate a car. It should not include cars that are not (or should not be) legally permitted to be operated due to misrepair.
        • by Erioll ( 229536 )

          Ya but on a societal level, it might still work out.

          For example, if you had say 5 million driverless cars, and 5 million human-driven cars, which cohort will produce fewer accidents? And not 5 million "not drunk, trained to X degree, etc," ALL of them. Because that's real life: you have to have the "crap" in the mix too, because that's what you're displacing.

          Every driverless car is displacing a human driver, so if 5 percent of those drivers are abominable, and 90 percent are OK, and 5 percent exception

          • by jsonn ( 792303 )
            I see little indication that most of the worst groups of human drivers will switch to self-driven cars, especially for socioeconomic reasons. It's also a strange baseline, as many (most?) of the situations that result in human accidents are currently intentionally ruled out by the conditions chosen for self-driving.
          • Those should be 5 million humans with the most up-to-date collision avoidance equipment, only in places where the AI drives, and who are not under the influence. Since every AI car is a brand-new car, they should be compared to those.
      • It's the raw numbers from over 7 million miles driven, there were undoubtedly lots of drunk drivers and weird incidents along the way,

        Ok, if a robot is going to drive my car, I want it to be safer than me, and I don't drive drunk.

        • I have no idea what you are trying to convey with these statements. The AI is also not driving drunk. The AI has to DEAL with other drivers who are drunk, just like you do, only it does it better (under the conditions it is allowed to drive in currently).

      • "Of 187 reports of autonomous vehicle accidents, just two could be attributed to the poor performance of the systems." -- https://www.iotworldtoday.com/transportation-logistics/human-error-causes-99-of-autonomous-vehicle-accidents-study

        I clicked through the links presented in this article, and I couldn't find the original source for this claim. You might want to update your source links.

        • It's a trivial Google search; there are many, many articles about this that all seem to roughly agree on the stats. Here's Ars Technica, who did an investigative article:
          "For this story, I read through every crash report Waymo and Cruise filed in California this year, as well as reports each company filed about the performance of their driverless vehicles (with no safety drivers) prior to 2023. In total, the two companies reported 102 crashes involving driverless vehicles. That may sound like a lot, but they

          • .....so the robocars are safer than human drivers based upon data from the crash reports filed by the operators of the robocars? I just looked at the crash reports filed by my alcoholic Uncle Jim on the many many vehicle accidents he has had on the drive home from the bar, and he too blamed the other drivers in every case.

          • As others have pointed out, you can't determine fault based on a report filed only by one side. In the case of Cruise, they were caught hiding information from the DMV (which is why they got kicked off the road).

            "...biggest driving errors included side-swiping an abandoned shopping cart and clipping a parked car’s bumper while pulling over to the curb."

            This does NOT give confidence in Waymo, because avoiding stationary objects is the easy part of making self-driving cars. If they're having trouble with that, then they're having trouble with other areas, too.

    • Re: (Score:2, Insightful)

      by thegarbz ( 1787294 )

      In the paper, they didn't adjust for drunk drivers. They also didn't let their own cars drive in bad weather.

      So what you're saying is that not only are Waymo drivers better than humans at driving, they are better at making good life choices as well?

    • The paper doesn't conclude that self-driving cars are safer than humans. Until we can be confident that they are safe, they should always have a safety driver.

      Waymo has been operating without safety drivers for quite some time now.

      • Yes. (They are also operating with safety drivers).

        My point is Waymo should not be operating without safety drivers.
  • by RossCWilliams ( 5513152 ) on Wednesday December 20, 2023 @10:14PM (#64094969)
    The problem with the study is obvious. I think it's likely that Waymo is safer than some human drivers; that is a low bar. Perhaps safer than most, but this study doesn't really support that conclusion. The statistics look at averages which may be, and likely are, inflated by groups of people who have very high rates of accidents, drunk drivers being just one example. The typical driver may have little to do with those numbers, going for years without any accidents.

    That said, one promise of autonomous vehicles is that they will make our roads much safer. That is a promise they are likely to be able to keep, assuming we really want safer roads. My guess is that instead we will choose to have them get us places that much quicker with the same level of safety and accidents.
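The "averages inflated by high-risk groups" point is easy to illustrate with made-up numbers (none of these figures come from the study; they are purely hypothetical):

```python
# Hypothetical population: a small high-risk cohort dominates the average
# crash rate even though the typical driver is far safer than the mean.
fraction_high_risk = 0.05   # e.g. impaired or reckless drivers (assumed)
rate_high_risk = 40.0       # crashes per million miles (assumed)
rate_typical = 1.0          # crashes per million miles (assumed)

population_avg = (fraction_high_risk * rate_high_risk
                  + (1 - fraction_high_risk) * rate_typical)
print(population_avg)  # prints 2.95 -- roughly triple the typical driver's rate
```

Under these assumptions, beating the population average of 2.95 says little about whether a robot beats the typical (median-ish) driver at 1.0.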
    • The problem with the study is obvious. I think its likely that Waymo is safer than some human drivers, that is a low bar. Perhaps safer than most, but this study doesn't really support that conclusion. The statistics look at averages which may be, and likely are, inflated by groups of people who have very high rates of accidents. Drunk drivers being just one example. The typical driver may have little to do with those numbers, going for years without any accidents.

      That said one promise of autonomous vehicles is that they will make our roads much safer. That is a promise they are likely to be able to keep assuming we really want safer roads. My guess is that instead we will choose to have them get us places that much quicker with the same level of safety and accidents.

      Though "safer than human drivers" is only a proxy for the question people actually care about, which is whether they're "safe enough".

      And at a glance, they do pass that bar.

      That doesn't mean AVs are safer than humans in general, or even safer than sober humans in those specific driving scenarios*. But it does look like they're safe enough to keep doing what they're doing, which is offering a driverless taxi in a well-mapped city.

      Assuming there aren't other issues, like blocking emergency vehicles or not working

      • But it does look like they're safe enough to keep doing what they're doing, which is offering a driverless taxi in well mapped city.

        [Citation needed]. "Good enough" should be "clearly better than humans."

        There's no reason to omit a safety driver. That doesn't make it harder to test the cars, or collect data.

        • But it does look like they're safe enough to keep doing what they're doing, which is offering a driverless taxi in well mapped city.

          [Citation needed]. "Good enough" should be "clearly better than humans."

          That's not the standard we use for humans.

          There's no reason to omit a safety driver. That doesn't make it harder to test the cars, or collect data.

          If this data is legit and there aren't any of the other caveats I mentioned, then Waymo is pretty safe doing what it's doing in Phoenix, San Francisco, and Los Angeles without safety drivers.

          So if it passes the safety bar then why not push further towards commercialization?

          • So if it passes the safety bar then why not push further towards commercialization?

            The paper doesn't demonstrate that it passes the safety bar.

            Note also that in terms of commercialization, Waymo isn't cheaper than Uber yet. It's still about developing the technology.

            • So if it passes the safety bar then why not push further towards commercialization?

              The paper doesn't demonstrate that it passes the safety bar.

              It demonstrates an extremely low accident rate.

              Like I said, there's additional questions (does this also block emergency vehicles?) that this paper doesn't address. But on the dimension of crashes the evidence looks compelling.

              Note also that in terms of commercialization, Waymo isn't cheaper than Uber yet. It's still about developing the technology.

              So? You don't need a driver, that obviously scales in a way that Uber doesn't.

              And if the tech is safe enough now there's no reason you can't commercialize during development.

              • by jsonn ( 792303 )
                If you intentionally remove the majority of factors responsible for human-attributed accidents, but compare with the general population, the accident rate will always look low. But it doesn't say much about the safety of the system. That's the entire point here: "Our cars drove X million miles with only Y accidents" is a reasonably useless metric.
              • It demonstrates an extremely low accident rate.

                No it didn't, read the conclusion. The error bars in the data are too wide to reach that conclusion.
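The error-bar dispute can be made concrete with a rough sketch. The event count below is hypothetical (the paper reports its own counts and uses better statistics than this normal approximation, which is crude for so few events); the point is only that small event counts produce wide intervals:

```python
import math

# Rough ~95% interval for a crash rate, Poisson counts with a normal
# approximation. Event count is hypothetical, NOT taken from the paper.
events = 3            # assumed number of crashes observed
miles_millions = 7.1  # roughly the mileage cited in the article

rate = events / miles_millions
half_width = 1.96 * math.sqrt(events) / miles_millions
print(f"{rate:.2f} +/- {half_width:.2f} crashes per million miles")
# prints "0.42 +/- 0.48 crashes per million miles"
```

With a handful of events the interval is wider than the estimate itself, which is the kind of uncertainty the report's conclusion section is hedging about.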

                • It demonstrates an extremely low accident rate.

                  No it didn't, read the conclusion. The error bars in the data are too wide to reach that conclusion.

                  I read the conclusion; it doesn't support your statement, nor does figure 2, which actually has error bars.

                  To be clear, I'm not saying the paper alone justifies large-scale deployment; as I stated repeatedly, there are other important questions that need to be answered.

                  But it's hard to claim that they're significantly less safe than humans when they have multiple statistically significant metrics by which they have fewer accidents and injuries.

                  As for your follow up comment:

                  Yeah, and why not have continuous deployment with builds going out to the cars every hour? Ship it now, fix it in production! Nothing will go wrong.

                  What are you even talking about? Waymo isn't doing that, I'm not suggesting that, it's just some ridiculous straw man you threw out.

                  • What are you even talking about? Waymo isn't doing that, I'm not suggesting that, it's just some ridiculous straw man you threw out.

                    You want to remove safety procedures before the car is safe. I don't know why.
                    That is like the people who say there is no problem deploying to production and fixing it later. It's a simile.

                    The paper did not show that the cars are safe, nor did it conclude that. You seem to have concluded that from reading the paper, but it's not clear why. There is significant uncertainty in the data.

                    • What are you even talking about? Waymo isn't doing that, I'm not suggesting that, it's just some ridiculous straw man you threw out.

                      You want to remove safety procedures before the car is safe. I don't know why.

                      Again, what are you even talking about? What safety procedures am I removing?

                      I literally concluded with:
                      if they can do so with the same standard of safety.

                      The paper did not show that the cars are safe, nor did it conclude that. You seem to have concluded that from reading the paper, but it's not clear why. There is significant uncertainty in the data.

                      I could say the same to you.

                      The paper didn't conclude the "cars are safe" because that's not a definable standard.

                      But they did show that they had lower accident rates than human drivers in fairly comparable scenarios. It's not a perfect comparison, but that's the case with every study. In this case the biggest source of uncertainty is the fact that accid

                    • But they did show that they had lower accident rates than human drivers in fairly comparable scenarios.

                      They didn't.

                    • > Let's put it this way. They have 0.6 accidents per million miles and 0.41 injuries per million miles ...

                      We can't compare that to human per mile incidents because the Waymo cars are only driving on the 'easier' routes, under the 'easier' traffic conditions, in the 'easier' weather, at the 'easier' speeds, etc.

                      So if I'm not mistaken the study's mention of "5.55 crashes per 1 million miles" and use of the term "human benchmark" is akin to misdirection.

                      > If they can maintain that level of safety do you b

                    • > Let's put it this way. They have 0.6 accidents per million miles and 0.41 injuries per million miles ...

                      We can't compare that to human per mile incidents because the Waymo cars are only driving on the 'easier' routes, under the 'easier' traffic conditions, in the 'easier' weather, at the 'easier' speeds, etc.

                      I've seen that claim, I haven't seen it backed up.

                      Honestly, I'd expect Taxi routes to be on average, more accident prone, since they tend to be in downtown regions with little parking and lots of traffic.

                    • > I've seen that claim, I haven't seen it backed up.

                      Waymo has at least said so for the routes and weather.

                      > Honestly, I'd expect Taxi routes to be on average, more accident prone, since they tend to be in downtown regions with little parking and lots of traffic.

      Yeah. But like you wrote, it's not a perfect comparison. Still, I'm fairly sure that if it were possible to have humans drive the same routes in the same conditions as Waymo, then Waymo would come out safer.

              • And if the tech is safe enough now there's no reason you can't commercialize during development

                Yeah, and why not have continuous deployment with builds going out to the cars every hour? Ship it now, fix it in production! Nothing will go wrong.

        • But it does look like they're safe enough to keep doing what they're doing, which is offering a driverless taxi in well mapped city.

          [Citation needed]. "Good enough" should be "clearly better than humans."

          Citation needed indeed.

          Why is "clearly better than humans" the required standard? The obvious default requirement is "as good as a fairly bad human driver", since that is the current standard for human drivers, though I think I'd probably argue for "as good as an average human driver".

          Since self-driving cars seem likely to provide significant benefits to society other than increasing safety, I could even see "only a little worse than humans" being an acceptable standard, trading off a little safety for lower costs and greater flexibility.

          • Since self-driving cars seem likely to provide significant benefits to society other than increasing safety, I could even see "only a little worse than humans" being an acceptable standard, trading off a little safety for lower costs and greater flexibility.

            That seems rational. At some point "safe enough" is more a matter of opinion, but it seems like you have a well thought-out standard.

  • But when road conditions are poor they take them off the streets. I've also heard that they can be incredibly annoying to be around because they drive with an absurd level of caution resulting in driving behavior that would get you pulled over by a cop if there was a human in there.

    This is all fine and good when there's relatively few of these things on the streets and when we're not depending on them to get us to work on a rainy or snowy day. But I do wonder how these would fare if they were ever actu
    • But when road conditions are poor they take them off the streets. I've also heard that they can be incredibly annoying to be around because they drive with an absurd level of caution

      I'd rather have that than the driving with the absurd level of asshattery that's common with human drivers.

      resulting in driving behavior that would get you pulled over by a cop if there was a human in there.

      Well that's fucked right there. Robots aren't bad just because humans and their society are fucked in the head.

      This is all fin

    • resulting in driving behavior that would get you pulled over by a cop if there was a human in there.

      Here's one example [theverge.com]. If you search for "Waymo blocking traffic" you'll find plenty of examples. I'll also add that I've seen with my own eyes a Waymo car blocking traffic needlessly (and weirdly). I don't think it was because of extreme caution, though.

  • by LostMyBeaver ( 1226054 ) on Thursday December 21, 2023 @01:42AM (#64095259)
    Are you suggesting that car companies can't code?
  • They are comparing to drivers "elsewhere".

    Unless they are only comparing to drivers in cars with the most modern collision avoidance systems and also on the same routes and in San Francisco weather then the study suffers from selection bias.
  • This report is a lie. It's pure marketing hype. We're six months away from it coming out that their cars get in accidents all the time.
