Transportation

Waymo Simulated Real-World Crashes To Prove Its Self-Driving Cars Can Prevent Deaths (theverge.com)

In a bid to prove that its robot drivers are safer than humans, Waymo simulated dozens of real-world fatal crashes that took place in Arizona over nearly a decade. From a report: The Google spinoff discovered that replacing either vehicle in a two-car crash with its robot-guided minivans would nearly eliminate all deaths, according to data it publicized today. The results are meant to bolster Waymo's case that autonomous vehicles operate more safely than human-driven ones. With more than a million people dying in auto crashes globally every year, AV operators are increasingly leaning on this safety case to spur regulators to pass legislation allowing more fully autonomous vehicles on the road.

But that case has been difficult to prove out, thanks to the very limited number of autonomous vehicles operating on public roads today. To provide more statistical support for its argument, Waymo has turned to counterfactuals, or "what if?" scenarios, meant to showcase how its robot vehicles would react in real-world situations. Last year, the company published data covering 6.1 million miles of driving in 2019 and 2020, including 18 crashes and 29 near-miss collisions. For incidents in which its safety operators took control of the vehicle to avoid a crash, Waymo's engineers generated counterfactuals by simulating what would have happened had the driver not disengaged the vehicle's self-driving system. The company has also made some of its data available to academic researchers.
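In outline, a counterfactual replay keeps everything from the logged event fixed except the driver: it swaps the recorded human takeover for the driving policy and checks whether the scenario still ends in a collision. Below is a minimal, self-contained sketch of that idea; the toy 1D dynamics, the two policies, and every number in it are illustrative assumptions, not Waymo's actual tooling or data.

```python
# Minimal sketch of a counterfactual replay: rerun a logged scenario, but
# swap one agent's recorded behavior for a driving policy, and check
# whether a collision still occurs. Toy 1D dynamics; all numbers invented.

def replay(lead_positions, follower_policy, dt=0.1, v0=15.0):
    """1D car-following replay: returns True if the follower hits the lead."""
    x_follow, v_follow = 0.0, v0
    for x_lead in lead_positions:
        gap = x_lead - x_follow
        if gap <= 0:
            return True                       # collision in this replay
        accel = follower_policy(gap, v_follow)
        v_follow = max(0.0, v_follow + accel * dt)
        x_follow += v_follow * dt
    return False

# Logged lead-car trajectory: cruising at 15 m/s, then an abrupt stop.
lead = [30.0 + 1.5 * t for t in range(50)] + [104.0] * 100

def distracted_human(gap, v):
    return 0.0 if gap > 5 else -8.0           # brakes far too late

def av_policy(gap, v):
    return -6.0 if gap < 2.0 * v else 0.0     # brakes at a 2-second gap

print("human-driven replay crashes:", replay(lead, distracted_human))  # True
print("AV-driven replay crashes:   ", replay(lead, av_policy))         # False
```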

  • ...simulations are a sufficient test of the real world.

    Seriously, though, I would expect crash simulations from any auto-pilot project; it shouldn't make news. It's Critical Systems Development 101.

    • by Anonymous Coward
      "We've trained our algorithms to pass these tests. That ought to be good enough to put our cars on the road."
      • by Tablizer ( 95088 )

        "Your Honor, our car was trained on 2D avatars and cardboard cutouts. It wasn't ready for 3D people. Now they are also 2D, he he uh, um...sorry, was a bad joke, Your Honor."

    • by rtb61 ( 674572 )

      Simulations are simulations and will simulate whatever you want them to simulate. Call it real world all you want, but I'll bet there are no problems in getting a simulation to show a Waymo vehicle driving under the Atlantic to make it to Europe, or driving to the Moon on a magical Waymo road.

      I'm sure all sorts of simulations of Waymo vehicles crashing all over the place can readily be made as well.

      The autopilot should not be developed in a car but on a mobile robot, that tests it within a virtual test track o

  • by alvinrod ( 889928 ) on Monday March 08, 2021 @05:51PM (#61138196)
    This doesn't surprise me. The automated vehicle (AI driver) isn't going to be drunk, texting, or engaging in any of the other behaviors that lead to a greater number of accidents. Comparing autonomous vehicles to humans in aggregate may mislead us into thinking they're safer than they really are. How much safer would driving be if we took the worst 1% of drivers off of the road? Assuming it follows a Pareto distribution (as these things tend to for whatever reason), the answer is considerably; see the sketch after this comment.

    I don't know if the autonomous vehicles are as good as the people who are careful and attentive drivers. Perhaps they can do a better job of handling a vehicle during an accident: outside of trained professionals, I suspect most people either aren't aware of what they should do or lack the experience to execute it correctly under the stress of the situation. But there are plenty of examples of autonomous vehicles making bone-headed moves or being confused by situations that the average human driver wouldn't have a problem with.

    I don't think they're quite ready to take over everything, but if they're better in emergency situations or better than the worst drivers on the road I can see using them where they're most effective.
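    The sketch referenced above: sample per-driver crash risk from a Pareto distribution and compute what share of total risk sits in the worst 1% of drivers. The shape parameter is an assumption chosen for illustration (the classic "80/20" value), not an empirical estimate.

```python
# Sketch: if per-driver crash risk is Pareto-distributed, how much of the
# total risk belongs to the worst 1% of drivers? The shape parameter is an
# assumption (the classic "80/20" value), not an empirical estimate.

import random

random.seed(0)
alpha = 1.16                        # assumed Pareto shape
risks = sorted(random.paretovariate(alpha) for _ in range(100_000))

worst_1pct = risks[-1_000:]         # the riskiest 1% of simulated drivers
share = sum(worst_1pct) / sum(risks)
print(f"worst 1% of drivers carry ~{share:.0%} of the total crash risk")
```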
    • Comparing autonomous vehicles to humans in aggregate may mislead us into thinking they're safer than they really are. How much safer would driving be if we took the worst 1% of drivers off of the road? Assuming it follows a Pareto distribution (as these things tend to for whatever reason), the answer is considerably.

      Here's the thing, though: Safe, attentive human drivers are good at assessing and responding to risk, judging by their driving records. Those attributes will make them unlikely to entrust their lives to a novel and relatively unproven technology. A sensible driver will learn to use that technology to augment and improve the habits that already make them safe on the road.

      The people who are likely to buy a new gadget-laden vehicle and then start playing Candy Crush or Flappy Bird at the wheel are

      • is a legitimate concern that the driver-assist stuff will be good enough to make drivers complacent but not good enough to make up for that loss in vigilance

        If we're talking about Waymo, they have taken a firm stance against driver assist for exactly that reason. Waymo does not expect or allow passengers to take control.

    • How much safer would driving be if we took the worst 1% of drivers off of the road?

      That's one good question to ask, but the other question is: are there any situations in which they are much worse than human drivers? It's easy to see how computers can do better at being always alert and reacting quickly, and how that can avoid accidents, but what about guessing when another driver is about to do something stupid?

      It's not enough to show that you can avoid the mistakes that humans make; you also have to show that you are not going to make new, different mistakes or drive in such a way tha

      • That's one good question to ask, but the other question is: are there any situations in which they are much worse than human drivers? It's easy to see how computers can do better at being always alert and reacting quickly, and how that can avoid accidents, but what about guessing when another driver is about to do something stupid?

        I can tell you one thing: The first time I visited the USA and drove a car there, my instincts were all wrong. I had to be really careful, because people didn't do things I expected them to do. Like normally you just know when no indicator means "I'm not turning" and when it means "I'm turning, but I forgot the indicator".

        • I had a similar problem as well. What used to get me was that, for some bizarre reason, the US is too cheap to have separate brake and indicator lights, so on the occasions where someone did remember to indicate, I would sometimes take the on-flash of their brake light to mean that they were braking and not that they were indicating, or vice versa.
          • by hawk ( 1151 )

            In decades of driving in the US, I have *never* had that problem.

            Turn signals come on on one side of the car and blink, while brake lights come on on both sides *and are brighter*.

            Even at night, the distinction is clear even without thinking about it.

    • I don't know if the autonomous vehicles are as good as the people who are careful and attentive drivers.

      Good question. The answer is likely that okay drivers are as good as current AVs. In the US, many deaths [wikipedia.org] are due to drunk driving (20%), nighttime driving (44%), and teenagers (14%). Take out those types of drivers, and the deaths-per-mile rate is in the ballpark of what you would see for Tesla Autopilot or for Tesla drivers who don't use Autopilot (i.e., not poor or young people). Good drivers are even better.
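      As back-of-the-envelope arithmetic for this claim, the category shares above can be turned into a rough adjusted rate. The baseline is the approximate US figure of 1.1 deaths per 100 million vehicle-miles; the overlap discount is a pure assumption, since the categories overlap (a drunk teenager driving at night counts in all three).

```python
# Back-of-the-envelope version of the parent's arithmetic. Category shares
# are from the comment above; the baseline (~1.1 deaths per 100M vehicle-
# miles, rough US figure) and the overlap discount are assumptions, since
# the categories overlap (e.g., a drunk teenager driving at night).

baseline_per_100m_miles = 1.1

shares = {"drunk": 0.20, "night": 0.44, "teen": 0.14}
overlap_discount = 0.5      # assumed: half the naive sum double-counts

removed = min(1.0, sum(shares.values()) * overlap_discount)
careful_rate = baseline_per_100m_miles * (1 - removed)

print(f"all drivers:      {baseline_per_100m_miles:.2f} deaths / 100M miles")
print(f"'careful' subset: {careful_rate:.2f} deaths / 100M miles")
```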

    • by AmiMoJo ( 196126 )

      The way it's supposed to work is you design traffic rules such that if people obey them they will never get into a situation where they need mad skillz to avoid a serious injury or death.

      The rules include things like the layout of junctions and traffic light timing, not just how to drive.

      The problem is that human drivers don't follow the rules, or sometimes the rules are badly designed.

    • But it still means accepting a much higher risk of crashing for normal drivers. The cars need to beat median human drivers, not average drivers pulled down by drunks and idiots.

    • From previous studies, AI vehicles actually increase the number of accidents, but those accidents trend toward being minor more often.

      Not that this study should be trusted. Of course the AI is good at simulations.

  • by Viol8 ( 599362 ) on Monday March 08, 2021 @05:56PM (#61138216) Homepage

    ...compared to a human pilot. It's not just how the automation does the job it's supposed to do; it's what it might do that it's NOT supposed to do that's the real issue. Edge cases are a bitch.

  • and not tested in non-sunny AZ weather

    • Also not tested on a decade-old autonomous car that is poorly maintained by its owner. Or are we assuming personal car ownership will end? Or are we assuming people will accept that their personal Johnny Cab will disable itself if the tires are worn or a camera lens is slightly smudged? How about testing where one autonomous car is a few years out of its support life and no longer reacts properly to the latest collision-avoidance behaviors pushed out by our new Martian overlord?

      • $500 for a 1TB SSD to load the new map data, after 2-3 years of dealer-only service.

        • Just getting the latest updated maps for my 10-year-old car is $300 today for a small SD card, and that is not at all the super-accurate self-driving-level stuff. I can easily imagine car companies wanting thousands for an update to their hyper-accurate maps; the media to put it on will be marked up 10x simply by stamping "Ford" on it, and almost surely it will be a proprietary format that is encrypted, probably requiring an annual license fee to boot.

  • by fahrbot-bot ( 874524 ) on Monday March 08, 2021 @05:58PM (#61138222)

    Waymo ... Prove Its Self-Driving Cars Can Prevent Deaths

    Recalling this self-driving car safety setting from the Amazon show Upload [wikipedia.org] ... (selectable by the driver/occupants)

    (a) Prioritize Occupants
    (b) Prioritize Pedestrians

    • by kqs ( 1038910 )

      Good point. We should never let a self-driving car on the road which doesn't have the right setting for that option.

      By that logic, we should also never allow a human to drive a car on the road who doesn't have the right setting for that option.

      What is the right setting?

      The best solution, of course, is to not have drivers that make the many, many poor decisions needed to get to that situation, which means that that option is meaningless. Thus, self-driving cars.

      • I've been driving for over 45 years. Nary a crash, nary a pedestrian nudged. I obviously do not make "many, many poor decisions" while driving.

        Until self-driving cars match that record, why should I require one?
        • by Tablizer ( 95088 )

          Yes, Mother.

        • The other often overlooked point is the deception of averages.
          e.g., if humans die at a rate of 2 in 100, but robot cars kill 1 in 100, that sounds better on average.
          But when the 2 in 100 are some anonymous drunk drivers, and the 1 in 100 is you, then the calculation doesn't look so convincing.
          Averages are invariably used to deceive (Gender Wage Gap!). Never trust any argument that relies on them.
        • by kqs ( 1038910 )

          Have you ever gotten yourself into a situation when driving where you *had* to harm either occupants or pedestrians? If not, then I'm not sure why you think my comment was directed at you.

          In general, anyone who thinks the trolley problem applies to cars (self-driving or otherwise) should probably not be driving. Or implementing self-driving. By the time the trolley problem comes up, you've made enough mistakes that the trolley problem is not the thing to worry about.

      • Oh, yeah. What exactly is the "right setting"?
      • Good point. We should never let a self-driving car on the road which doesn't have the right setting for that option.

        The correct setting is: minimise total damage, with damage to people counting significantly higher than damage to objects. I would be willing to accept an option where damage to whichever motorist caused the situation is weighed lower.
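        As a sketch, that rule can be written as a weighted cost over predicted outcomes for each feasible maneuver. The weights and the two example maneuvers below are assumptions for illustration only.

```python
# Sketch of the weighting rule above: score each feasible maneuver by
# expected damage, with harm to people weighted far above property damage
# and an optional discount for the at-fault party. All weights assumed.

from dataclasses import dataclass

@dataclass
class Outcome:
    p_injury: float         # probability this maneuver injures someone
    injured_at_fault: bool  # is the injured party the one who caused it?
    property_damage: float  # expected property damage, in dollars

PERSON_WEIGHT = 1_000_000   # assumed: one expected injury ~ $1M of damage
AT_FAULT_FACTOR = 0.8       # assumed: mild discount for the at-fault party

def cost(o: Outcome) -> float:
    person_cost = o.p_injury * PERSON_WEIGHT
    if o.injured_at_fault:
        person_cost *= AT_FAULT_FACTOR
    return person_cost + o.property_damage

maneuvers = {
    "brake hard":  Outcome(0.02, False, 8_000),
    "swerve left": Outcome(0.10, True, 2_000),
}
best = min(maneuvers, key=lambda m: cost(maneuvers[m]))
print("chosen maneuver:", best)     # "brake hard" under these weights
```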

    • Yes, but how long before someone hacks the update system to add an option (c) Prioritize Insects and sets it as the default in the middle of summer? If you have computer-controlled vehicles that are connected to a network for updates, then computer security becomes a life-or-death matter at the consumer level, and I am unconvinced that we are ready for that challenge yet.
    • The issue I have with these ethics "trolley problems" is that if the situation ever occurs, you have already failed.

      • The issue I have with these ethics "trolley problems" is that if the situation ever occurs, you have already failed.

        I recommend watching The Good Place, The Trolley Problem (Season 2, Episode 5). :-)
        (snippets available on YouTube)

        Actually, I recommend the entire series...

    • by AmiMoJo ( 196126 )

      This will never happen in real life. The old "choose to kill a bunch of school kids or take the occupant into the path of an 18-wheeler" meme isn't a real thing that any self-driving car will ever consider.

      Human drivers are not supposed to make that choice either. If there is a dangerous situation and you were not at fault (e.g. driving at a reasonable speed for the conditions) and there really is no way to avoid injuring someone, it's not your fault if you simply do your best to stop. No legal requirement

    • This is the silliest idea. By the time AI gets good enough to remotely consider occupants vs. pedestrians, it will already be avoiding so many more accidents that its net effect is vastly beneficial.

      You sound like a guy arguing he shouldn't be forced to wear a seatbelt because he might crash into a swamp and drown because he was strapped in. Meanwhile, such people are far more likely to be knocked out and definitely drown without one.

      • You also sound like the opposing lawyers suing my buddy's wife as part of a scam. They had a carload of people pull out and turn right, right in front of her. The lawyers claimed she was responsible for paying them because she could have avoided them by swerving into oncoming traffic.

      • Um... You need to lighten up. It was a joke and the TV show referenced is a comedy.

        (And I *always* wear my seatbelt.)

  • I assume that when I eventually voluntarily drive in a level 4 or 5 AV it will be better than a human driver. I have no problem accepting that.

    My quibble is what happens when it fails. If I die in a Waymo car when the Waymo computer was driving, does my family get to sue Waymo for a nice chunky payout to make up for their loss of my wages, burial expenses, pain, suffering, etc.? If so, can they handle a couple hundred of these per year? Because if they become a market dominator and reduce annual American ro

    • by jabuzz ( 182671 )

      You simply factor the cost of the payouts you are going to need to make into whatever you charge. In most places insurance is a prerequisite for driving a car anyway; I would imagine using the self-driving feature will still require insurance. I don't see the problem.

    • You assume Waymo would be considered 100% at fault. In many cases there are other at-fault issues, such as poorly designed roads, other drivers, failure to maintain the car properly, etc. In any case, $400 million is a drop in the bucket in the U.S. car market. I'll bet Ford averages at least $50 million a year in lawsuits already.
  • Of course, to be a balanced test, they should also be putting their cars into simulations of past situations where there *wasn't* a crash and seeing whether there is an increase in fatalities over the human drivers.
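    The bookkeeping for such a balanced test is simple to state: replay the policy in both crash and non-crash logs and count movement in each direction. In this sketch, `replay_ends_in_crash` is a hypothetical stand-in for a real simulator.

```python
# Bookkeeping for a balanced counterfactual test: replay the policy in
# scenarios that really ended in a crash AND in ones that didn't, then
# count both directions. `replay_ends_in_crash` is a hypothetical stand-in
# for an actual simulator.

def evaluate(policy, crash_logs, no_crash_logs, replay_ends_in_crash):
    avoided = sum(not replay_ends_in_crash(s, policy) for s in crash_logs)
    induced = sum(replay_ends_in_crash(s, policy) for s in no_crash_logs)
    return {
        "avoided (of real crashes)": f"{avoided}/{len(crash_logs)}",
        "induced (of clean drives)": f"{induced}/{len(no_crash_logs)}",
    }

# Toy stand-in: a replay "crashes" exactly when the scenario id is odd.
toy = lambda scenario, policy: scenario % 2 == 1
print(evaluate(None, crash_logs=[1, 2, 3, 4], no_crash_logs=[5, 6],
               replay_ends_in_crash=toy))
```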
  • We humans are hugely biased about perceived risks. People seem more concerned about being eaten by a shark or struck by lightning than about being killed in a car wreck, despite those being 10,000 and 1,000 times less likely, respectively. Being killed by a Johnny Cab is likely to be treated in a similarly disproportionate, irrational fashion. So, even if it will cost us many lives, I expect it will not be until the public can be convinced that autonomous cars are at least 100x safer th

    • We have one effective tool that works for helping us think quantitatively, and that is money. If self-driving vehicles are safer, they will end up being cheaper to use or own. People like cheap. Yes, there will be 60 Minutes episodes about the plight of accident victims and the heartless, irresponsible car companies. Congressional hearings sooner or later. But our mechanism for pricing accidents is vigorously exercised by ambulance-chasing lawyers, and so as a result, the safer option will end up the cheap
  • by dicobalt ( 1536225 ) on Monday March 08, 2021 @07:07PM (#61138556)
    Wonder how well it would work in a construction zone where the road doesn't have lines or the lines are wrong.
    • Probably completely fall apart, pull over to the side of the road, and 'phone home' so a remote HUMAN driver can bail it out. Because the half-assed excuse for an AI just isn't that damn good.
      • Pull over to the side of the road and die in Death Valley or get stuck in a cold snap, vs. limp to a safe area.

        Or let a fake cop pull it over and rape someone.

          • Yeah, there's that too. If it's a so-called 'level 5' SDC and there are literally no controls for a human operator in the vehicle, what happens if it can't handle something out somewhere with no cell service? It pulls over and you're stuck until someone comes by and, what, tows you somewhere with cell service?
            And as far as getting hijacked goes: people have been talking about that for years now. It'll be the new hobby of criminals: remotely hijacking SDCs.
            The whole thing is just such a total shitshow.
            • Not just hijacking, but will it pull over for any cop, even rent-a-cop ones?

              • I wouldn't go so far as that. Private security are not police; they're private citizens working for a private company. They don't have the right to force drivers to pull over now, so why should they have the right to do it to an SDC? Really, though, it's law enforcement and by extension the government you'd have to worry about, no doubt having the ability to command an SDC wirelessly to pull to the side and stop -- and no doubt criminal and terrorist organizations will find a way to hack into that to take cont
                • And when a private security company says "for our sites we need local pull-over / command of SDCs" and Ford just says OK?
                  Or when the SDC just needs a signal from the same device that the cops and fire departments use to change traffic lights in order to go into pull-out-of-the-way mode?

                  • Your best defense against this is to not own a self-driving car in the first place. Will also have the nice side effect of not dying senselessly when it fucks up.
    • by olau ( 314197 )

      If you search for "Tesla FSD beta" on YouTube, you can find videos of this scenario with Tesla's self-driving software.

  • Not buying it *or* riding in it. Human driven only.
    • At some point many antis will call for legislation outlawing human drivers once AI gets good enough.

      Like robot surgery, is human driving even ethical anymore at that point?

  • Looks like Tesla found a few edge cases when their cars ran into a fire truck at 60mph; how many other edge cases are these companies going to find when they scale their software up?

    • While true, human drivers appear to have issues with parallel processing. It turns out they're really only good at temporal multithreading, but the thread-switching behavior is terrible at real-time performance guarantees. Sometimes `thread_phone()` maintains the lock too long, and interrupt handlers like `isr_kids()` have poorly bounded resource requirements. The development team behind `thread_phone()` is adamant that their thread's resource consumption isn't a problem despite it consuming at least 10 t

  • Not saying it doesn't have merit (I've no idea, haven't RTFA), but it's kinda trivial to overfit a model, with the result that outcomes in these comparatively few cases become better. There's almost no way not to overfit and not to over-milk past data. An overfitted model won't help on unseen cases.

    Also, how about close calls and just regular driving where nothing happened: does the software cause at least a few accidents where the human drivers didn't?
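    The standard guard against exactly this overfitting worry is a held-out evaluation: never report safety numbers on the same logged scenarios the policy was tuned against. A minimal sketch, with the split fraction and seed as arbitrary assumptions:

```python
# Guard against overfitting to the replayed crash cases: tune the policy
# on one subset of logged scenarios and report results only on a held-out
# subset it never saw. Split fraction and seed are arbitrary assumptions.

import random

def split_scenarios(scenarios, holdout_frac=0.3, seed=42):
    """Shuffle once, then reserve a fraction the tuning loop never sees."""
    rng = random.Random(seed)
    shuffled = list(scenarios)
    rng.shuffle(shuffled)
    k = int(len(shuffled) * holdout_frac)
    return shuffled[k:], shuffled[:k]   # (tuning set, held-out set)

tuning, held_out = split_scenarios(range(100))
print(len(tuning), "scenarios for tuning,", len(held_out), "held out")
# A large gap between tuning-set and held-out safety scores is the
# signature of the overfitting described above.
```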

    • The computer cars can very plausibly not cause accidents.

      What causes accidents is unsafe driving. The computer car isn't going to get impatient and drive up your ass. A spider isn't going to come down out of the headliner and land on the computer car's face. (If one should occlude a camera, other cameras and sensors will hopefully compensate.)

      If the cars have enough sensors and drive paranoid enough they might conceivably avoid any reasonably avoidable accident. Yeah, that sentence was mostly weasel words.

  • Can anyone confirm which? I think there's a huge difference there.

  • Is this the same Waymo that directs me to drive the wrong way down a one-way street?
  • You can prove a replacement fleet would save lives on net, perhaps a large net savings, and still have the effort ground to a halt by lawyers suing, if they can prove one of the few remaining deaths was caused by faulty AI design.

    This is ironic because the law profession justifies these lawsuits as providing an increase in safety by forcing companies to compensate.

    Hooray. They get mansions as the population continues to die from accidents.

    • This is no surprise; if the target had been to drive “slightly better” than humans, then we are already there. AD cars are great at avoiding typical accidents, which are caused by inattentive, drunk, or distracted drivers – simply because the car will be none of those things. However, AD cars will cause OTHER accidents – accidents that a beginner human driver could easily have avoided, or where it is difficult to explain why the AD car crashed. There is very little acceptance for t

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde

Working...