Waymo Simulated Real-World Crashes To Prove Its Self-Driving Cars Can Prevent Deaths (theverge.com) 72
In a bid to prove that its robot drivers are safer than humans, Waymo simulated dozens of real-world fatal crashes that took place in Arizona over nearly a decade. From a report: The Google spinoff discovered that replacing either vehicle in a two-car crash with its robot-guided minivans would nearly eliminate all deaths, according to data it publicized today. The results are meant to bolster Waymo's case that autonomous vehicles operate more safely than human-driven ones. With more than a million people dying in auto crashes globally every year, AV operators are increasingly leaning on this safety case to spur regulators to pass legislation allowing more fully autonomous vehicles on the road.
But that case has been difficult to prove out, thanks to the very limited number of autonomous vehicles operating on public roads today. To provide more statistical support for its argument, Waymo has turned to counterfactuals, or "what if?" scenarios, meant to showcase how its robot vehicles would react in real-world situations. Last year, the company published 6.1 million miles of driving data in 2019 and 2020, including 18 crashes and 29 near-miss collisions. In those incidents where its safety operators took control of the vehicle to avoid a crash, Waymo's engineers simulated what would have happened had the driver not disengaged the vehicle's self-driving system to generate a counterfactual. The company has also made some of its data available to academic researchers.
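The counterfactual method is simple to sketch: replay a logged incident up to the moment the safety driver took over, then let the self-driving planner control the simulated vehicle from that point onward and check whether a collision results. Below is a minimal illustration in Python; every name in it (LoggedIncident, Frame, the hard-braking "planner") is an invented stand-in, not Waymo's actual tooling.

```python
# Minimal sketch of counterfactual crash replay, assuming a 1-D world with a
# stopped lead vehicle. All names are illustrative inventions, not Waymo's.
from dataclasses import dataclass

@dataclass
class Frame:
    t: float        # timestamp (s)
    lead_x: float   # position of a stopped lead vehicle (m)

@dataclass
class LoggedIncident:
    disengage_t: float   # when the safety driver originally took over
    frames: list         # recorded world state at 10 Hz

def replay(log, ego_x=0.0, ego_v=20.0, dt=0.1, brake=-6.0, safe_gap=40.0):
    """Replay the log, but from the disengagement point onward let a trivial
    'planner' (brake hard when the gap closes) drive instead of the human."""
    for f in log.frames:
        if f.t >= log.disengage_t and f.lead_x - ego_x < safe_gap:
            ego_v = max(0.0, ego_v + brake * dt)   # planner is in control
        ego_x += ego_v * dt
        if ego_x >= f.lead_x:
            return "collision"
    return "no collision"

log = LoggedIncident(
    disengage_t=1.0,
    frames=[Frame(t=i * 0.1, lead_x=60.0) for i in range(60)],
)
print(replay(log))   # the counterfactual planner stops short: "no collision"
```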
Of course we all know that (Score:1)
...simulations are a sufficient test of the real world.
Seriously, though, I would expect crash simulations from any auto-pilot project; it shouldn't make news. It's Critical Systems Development 101.
Re: (Score:1)
Re: (Score:1)
"Your Honor, our car was trained on 2D avatars and cardboard cutouts. It wasn't ready for 3D people. Now they are also 2D, he he uh, um...sorry, was a bad joke, Your Honor."
Re: (Score:1)
Simulations are simulations and will simulate whatever you want them to simulate. Call it real world all you want, but I'll bet there are no problems in getting a simulation to show a Waymo vehicle driving under the Atlantic to make it to Europe, or driving to the Moon on a magical Waymo road.
I'm sure all sorts of simulations of Waymo vehicles crashing all over the place can readily be made as well.
The autopilot should not be developed in a car but on a mobile robot, that tests it within a virtual test track o
Human Drivers (Score:3)
I don't know if the autonomous vehicles are as good as people who are careful and attentive drivers. Perhaps they can do a better job of handling a vehicle during an accident; I suspect that outside of trained professionals, most people either aren't aware of what they should do or lack the experience to execute it correctly under the stress of the situation. But there are plenty of examples of autonomous vehicles making bone-headed moves or being confused by situations that the average human driver wouldn't have a problem with.
I don't think they're quite ready to take over everything, but if they're better in emergency situations or better than the worst drivers on the road I can see using them where they're most effective.
Re: (Score:2)
If we compare autonomous vehicles to humans in aggregate it's possibly misleading us into thinking they're safer than they really are. How much safer would driving be if we took the worst 1% of drivers off of the road? Assuming it follows a Pareto distribution (as these things tend to for whatever reason), then the answer is considerably.
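That claim is easy to explore numerically. Here is a quick sketch, assuming per-driver crash risk really does follow a Pareto distribution; the shape parameter is an arbitrary illustration, not an empirical fit.

```python
# Back-of-the-envelope check of the Pareto claim: what share of total crash
# risk sits with the worst 1% of drivers? Shape parameter is illustrative.
import numpy as np

rng = np.random.default_rng(0)
risk = rng.pareto(a=1.5, size=1_000_000) + 1   # per-driver relative risk

cutoff = np.quantile(risk, 0.99)
share = risk[risk >= cutoff].sum() / risk.sum()
print(f"Worst 1% of drivers carry ~{share:.0%} of total crash risk")
```

With that shape, the worst 1% of drivers carry roughly a fifth of the total risk; heavier tails push the share higher still.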
Here's the thing, though: Safe, attentive human drivers are good at assessing and responding to risk, judged by their driving records. Those attributes will make them unlikely to entrust their lives to a novel and relatively unproven technology. A sensible driver will learn to use that technology to augment and improve their own driving habits that already make them safe on the road.
The people who are likely to buy a new gadget-laden vehicle and then start playing Candy Crush or Flappy Bird at the wheel are
Re: (Score:2)
If we're talking about Waymo, they have taken a firm stance against driver assist for that reason. Waymo does not expect or allow passengers to take control.
New and different mistakes. (Score:2)
How much safer would driving be if we took the worst 1% of drivers off of the road?
That's one good question to ask, but the other question is: are there any situations in which they are much worse than human drivers? It's easy to see how computers can do better at being always alert and reacting quickly, and how that can avoid accidents, but what about guessing the intentions of other drivers to do something stupid?
It's not enough to show that you can avoid the mistakes that humans make you also have to show that you are not going to make new, different mistakes or drive in such a way tha
Re: (Score:2)
That's one good question to ask, but the other question is: are there any situations in which they are much worse than human drivers? It's easy to see how computers can do better at being always alert and reacting quickly, and how that can avoid accidents, but what about guessing the intentions of other drivers to do something stupid?
I can tell you one thing: The first time I visited the USA and drove a car there, my instincts were all wrong. I had to be really careful, because people didn't do things I expected them to do. Like normally you just know when no indicator means "I'm not turning" and when it means "I'm turning, but I forgot the indicator".
Re: (Score:2)
Re: (Score:2)
In decades of driving in the US, I have *never* had that problem.
Turn signals come on on one side of the car and blink, while brake lights come on on both sides *and are brighter*.
Even at night, the distinction is clear even without thinking about it.
Re: (Score:2)
I don't know if the autonomous vehicles are as good as the people who are careful and attentive drivers.
Good question. The answer is likely that okay drivers are as good as current AVs. In the US, many deaths [wikipedia.org] are due to drunk driving (20%), nighttime driving (44%), and teenagers (14%). Take out those types of drivers, and the deaths-per-mile rate is in the ballpark of what you would see for Tesla Autopilot or for Tesla drivers who don't use Autopilot (i.e., not poor or young people). Good drivers are even better.
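As a rough check of that arithmetic (and its limits): the three categories overlap, and removing nighttime driving also removes miles from the denominator, so a naive subtraction like the sketch below overstates the effect. The baseline figure is the approximate recent US fatality rate.

```python
# Naive version of the parent's arithmetic. The categories overlap (a drunk
# teen driving at night counts in all three) and removing night driving also
# removes miles from the denominator, so this overstates the reduction.
baseline = 1.3                  # deaths per 100M vehicle-miles (approximate)
removed = 0.20 + 0.44 + 0.14    # drunk + nighttime + teen shares

adjusted = baseline * (1 - min(removed, 1.0))
print(f"Naively adjusted rate: {adjusted:.2f} deaths per 100M miles")
```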
Re: (Score:2)
The way it's supposed to work is you design traffic rules such that if people obey them they will never get into a situation where they need mad skillz to avoid a serious injury or death.
The rules include things like the layout of junctions and traffic light timing, not just how to drive.
The problem is that human drivers don't follow the rules, or sometimes the rules are badly designed.
Re: (Score:2)
But it still means accepting a much higher risk of crashing for normal drivers. The cars need to beat median human drivers, not average drivers pulled down by drunks and idiots.
Re: (Score:2)
From previous studies, AI vehicles actually increase the number of accidents, but the accidents trend toward being minor more often.
Not that this study should be trusted. Of course the AI is good at simulations.
I'm sure Boeing had similar simulations for MCAS.. (Score:5, Insightful)
...compared to a human pilot. It's not just how the automation does the job it's supposed to do, it's what it might do that it's NOT supposed to do that's the real issue. Edge cases are a bitch.
Re: (Score:3)
and not tested in non sunny AZ weather (Score:2)
And not tested in non-sunny AZ weather.
Re: (Score:2)
Also not tested on a decade-old autonomous car that is poorly maintained by its owner. Or are we assuming personal car ownership will end? Or are we assuming people will accept that their personal Johnny Cab will disable itself if the tires are worn or a camera lens is slightly smudged? How about testing where one autonomous car is a few years out of support life and no longer reacts properly to the latest collision avoidance behaviors pushed out by our new Martian overlord?
$500 for a 1TB SSD to load the new map data after (Score:2)
$500 for a 1TB SSD to load the new map data; after 2-3 years, dealer-only service.
Re: (Score:2)
Just getting the latest updated maps for my 10-year-old car is $300 today for a small SD card, and that is not at all the super-accurate self-driving-level stuff. I can easily imagine car companies wanting thousands for an update to their hyper-accurate maps, the media to put it on will be marked up 10x simply by stamping "Ford" on it, and almost surely it will be in a proprietary format that is encrypted, probably requiring an annual license fee to boot.
Safety settings ... (Score:3)
Waymo ... Prove Its Self-Driving Cars Can Prevent Deaths
Recalling this self-driving car safety setting from the Amazon show Upload [wikipedia.org] ... (selectable by the driver/occupants)
(a) Prioritize Occupants
(b) Prioritize Pedestrians
Re: (Score:2)
Good point. We should never let a self-driving car on the road which doesn't have the right setting for that option.
By that logic, we should also never allow a human to drive a car on the road who doesn't have the right setting for that option.
What is the right setting?
The best solution, of course, is to not have drivers that make the many, many poor decisions needed to get to that situation, which means that that option is meaningless. Thus, self-driving cars.
Re: (Score:2)
Until self-driving cars match that record, why should I require one?
Re: (Score:1)
Yes, Mother.
Re: (Score:2)
e.g., If humans die at a rate of 2 in 100, but robot cars kill 1 in 100, that on average sounds better.
But when the 2 in 100 are some anonymous drunk drivers, and the 1 in 100 is you, then the calculation doesn't look so convincing.
Averages are invariably used to deceive (Gender Wage Gap!). Never trust any argument that relies on them.
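The parent's point can be made concrete with a toy calculation showing how an improved aggregate rate can hide a worse outcome for a particular subgroup; all figures below are invented for illustration.

```python
# Toy numbers: an improved average can still be worse for a given subgroup.
human = {"sober": 0, "drunk": 2}   # 2 deaths per 100, both drunk drivers
robot = {"sober": 1, "drunk": 0}   # 1 death per 100, but a sober rider

for name, deaths in (("human", human), ("robot", robot)):
    print(f"{name}: {sum(deaths.values())}/100 overall, "
          f"{deaths['sober']}/100 among the sober")
```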
Re: (Score:2)
Have you ever gotten yourself into a situation when driving where you *had* to harm either occupants or pedestrians? If not, then I'm not sure why you think my comment was directed at you.
In general, anyone who thinks the trolley problem applies to cars (self-driving or otherwise) should probably not be driving. Or implementing self-driving. By the time the trolley problem comes up, you've made enough mistakes that the trolley problem is not the thing to worry about.
Re: (Score:2)
Urban Edition [Re:Safety settings ...] (Score:1)
"Save yo ass"
Re: (Score:2)
Good point. We should never let a self-driving car on the road which doesn't have the right setting for that option.
The correct setting is: minimise total damage, with damage to people counting significantly higher than damage to objects. I would be willing to accept an option where damage to whichever motorist caused the situation is weighted lower.
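A policy like that amounts to a weighted cost function over predicted outcomes. A minimal sketch of the idea follows; the weights and the discrete "options" are invented for illustration, since real planners score continuous trajectories rather than named choices.

```python
# Sketch of "minimise total damage, people weighted far above property,
# the at-fault party weighted lower". All weights invented for illustration.
PERSON_WEIGHT = 100.0      # harm to a person vs. harm to an object
AT_FAULT_DISCOUNT = 0.5    # optional: discount harm to whoever caused it

def outcome_cost(people_harmed, objects_harmed, harmed_party_at_fault=False):
    cost = PERSON_WEIGHT * people_harmed + 1.0 * objects_harmed
    return cost * (AT_FAULT_DISCOUNT if harmed_party_at_fault else 1.0)

# Pick the predicted outcome with the lowest weighted damage.
options = {
    "brake hard":  outcome_cost(people_harmed=0, objects_harmed=2),
    "swerve left": outcome_cost(people_harmed=1, objects_harmed=0),
}
print(min(options, key=options.get))   # -> brake hard
```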
Option (c): Prioritize insects (Score:2)
Re: (Score:2)
The issue I have with these ethics "trolley problems" is that if the situation ever occurs, you have already failed.
Re: (Score:2)
The issue I have with these ethics "trolley problems" is that if the situation ever occurs, you have already failed.
I recommend watching The Good Place, The Trolley Problem (Season 2, Episode 5). :-)
(snippets available on YouTube)
Actually, I recommend the entire series...
Re: (Score:2)
This will never happen in real life. The old "choose to kill a bunch of school kids or take the occupant into the path of an 18-wheeler" meme isn't a real thing that any self-driving car will ever consider.
Human drivers are not supposed to make that choice either. If there is a dangerous situation and you were not at fault (e.g. driving at a reasonable speed for the conditions) and there really is no way to avoid injuring someone, it's not your fault if you simply do your best to stop. No legal requirement
Re: (Score:2)
This is the silliest idea. By the time AI gets good enough to remotely consider occupants vs. pedestrians, it will already be avoiding so many more accidents that its net effect is vastly beneficial.
You sound like a guy arguing he shouldn't be forced to wear a seatbelt because he might crash into a swamp and drown because he was strapped in. Meanwhile, without the seatbelt he is far more likely to be knocked out and definitely drown.
Re: (Score:2)
You also sound like the opposing lawyers suing my buddy's wife as part of a scam. They had a carload of people pull out and turn right, right in front of her. The lawyers claimed she was liable because she could have avoided them by swerving into oncoming traffic.
Re: (Score:2)
(And I *always* wear my seatbelt.)
I assume it will be better, but when it's not... (Score:2)
I assume that when I eventually voluntarily drive in a level 4 or 5 AV it will be better than a human driver. I have no problem accepting that.
My quibble is what happens when it fails. If I die in a Waymo car when the Waymo computer was driving, does my family get to sue Waymo for a nice chunky payout to make up for their loss of my wages, burial expenses, pain, suffering, etc.? If so, can they handle a couple hundred of these per year? Because if they become a market dominator and reduce annual American ro
Re: (Score:2)
You simply factor the cost of the payouts you are going to need to make into whatever you charge. In most places insurance is a prerequisite for driving a car anyway. I would imagine using the self-driving feature will still require insurance. I don't see the problem.
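Folding the payouts into the price is a one-line expected-value calculation; every figure in this sketch is invented for illustration.

```python
# Expected-value version of "factor the payouts into the price".
rides_per_year   = 1_000_000_000    # assumed fleet-wide rides
fatal_claims     = 200              # the parent thread's "couple hundred"
payout_per_claim = 10_000_000       # assumed settlement size, USD

surcharge = fatal_claims * payout_per_claim / rides_per_year
print(f"Liability surcharge per ride: ${surcharge:.2f}")   # $2.00
```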
Re: (Score:2)
Balanced test (Score:1)
Missing human hysteria... (Score:2)
We humans are hugely biased about perceived risks. People seem more concerned about being eaten by a shark or struck by lightning than they do about being killed in a car wreck, despite those being 10,000 and 1,000 times less likely, respectively. Being killed by a Johnny Cab is likely going to be treated in a similarly disproportionate, irrational fashion. So, even if it will cost us many lives, I expect it will not be until the public can be convinced that autonomous cars are at least 100x safer th
Re: (Score:2)
Construction zone challenge (Score:3)
Re: (Score:2)
pull over to the side of the road and die in deat (Score:2)
Pull over to the side of the road and die in Death Valley or get stuck in a cold snap, vs. limp to a safe area.
Or let a fake cop pull someone over and rape them.
Re: (Score:2)
And so far as getting hijacked goes: been talking about that for years now. It'll be the new hobby of criminals: remotely hijacking SDCs.
The whole thing is just such a total shitshow.
not just hijacking but pull over for any cop even (Score:2)
Not just hijacking, but pulling over for any cop, even rent-a-cop ones?
Re: (Score:2)
Re: (Score:2)
And when a private security company says "for our sites we need local pull-over / command of SDCs" and Ford just says OK?
Or when the SDC just needs a signal from the same device that cops and fire trucks use to change traffic lights in order to go into pull-out-of-the-way mode?
Re: (Score:2)
Re: (Score:2)
If you search for "Tesla FSD beta" on Youtube, you can find videos of this scenario with Tesla's self-driving software.
Simulations don't mean a damn thing (Score:2)
Re: (Score:2)
At some point many antis will call for legislation outlawing human drivers once AI gets good enough.
Like robot surgery, is human driving even ethical anymore at that point?
Let's see (Score:2)
Looks like Tesla found a few edge cases when their cars ran into a fire truck at 60 mph. How many other edge cases are these companies going to find when they scale their software up?
Re: (Score:2)
While true, human drivers appear to have issues with parallel processing. It turns out they're really only good at temporal multithreading, but the thread-switching behavior is terrible at real-time performance guarantees. Sometimes `thread_phone()` maintains the lock too long, and interrupt handlers like `isr_kids()` have poorly bounded resource requirements. The development team behind `thread_phone()` is adamant that their thread's resource consumption isn't a problem despite it consuming at least 10 t
Easiest thing (Score:2)
Not saying it doesn't have merit (I've no idea, haven't RTFA), but it's kinda trivial to overfit a model, with the result that outcomes in these comparatively few cases become better. There's almost no way not to overfit and not to over-milk past data. An overfitted model won't help on unseen cases.
Also, how about close calls and just regular driving where nothing happened, does the software cause at least a few accidents where the human drivers didn't?
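The standard guard against exactly that overfitting is to tune on one set of logged scenarios and report results only on scenarios held out from tuning. A minimal sketch of that discipline, with stand-in scenario data and a deliberately memorizing "planner":

```python
# Sketch: evaluate a planner only on crash scenarios it was never tuned on.
# Scenario data and scoring are stand-ins, not any company's real pipeline.
import random

random.seed(0)
scenarios = list(range(100))              # stand-in for 100 logged crashes
random.shuffle(scenarios)
tune, holdout = scenarios[:70], scenarios[70:]

def score(planner, scenario):
    # Stand-in metric: a memorizing "planner" only handles tuned scenarios.
    return 1.0 if scenario in planner["memorized"] else 0.5

planner = {"memorized": set(tune)}        # deliberately overfit
tune_avg = sum(score(planner, s) for s in tune) / len(tune)
holdout_avg = sum(score(planner, s) for s in holdout) / len(holdout)
print(f"tuned: {tune_avg:.2f}  held-out: {holdout_avg:.2f}")  # 1.00 vs 0.50
```

The gap between the tuned and held-out scores is what exposes a model that has merely memorized past cases.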
Re: (Score:2)
The computer cars can very plausibly not cause accidents.
What causes accidents is unsafe driving. The computer car isn't going to get impatient and drive up your ass. A spider isn't going to come down out of the headliner and land on the computer car's face. (If one should occlude a camera, other cameras and sensors will hopefully compensate.)
If the cars have enough sensors and drive paranoid enough they might conceivably avoid any reasonably avoidable accident. Yeah, that sentence was mostly weasel words.
Computer or real world simulations? (Score:2)
Can anyone confirm which? I think there's a huge difference there.
The same Waymo? (Score:2)
Thanks! (Score:2)
You can prove a replacement fleet would save lives on net, perhaps a large net savings, and still have the effort grind to a halt under lawsuits, if lawyers can prove one of the few remaining deaths was caused by faulty AI design.
This is ironic because the legal profession justifies these lawsuits as providing an increase in safety by forcing companies to compensate.
Hooray. They get mansions as the population continues to die from accidents.
Re: (Score:1)