Cruise Reached an $8M+ Settlement With the Person Dragged Under Its Robotaxi (ocregister.com)
Bloomberg reports that self-driving car company Cruise "reached an $8 million to $12 million settlement with a pedestrian who was dragged by one of its self-driving vehicles in San Francisco, according to a person familiar with the situation."
The settlement was struck earlier this year and the woman is out of the hospital, said the person, who declined to be identified discussing a private matter. In the October incident, the pedestrian crossing the road was struck by another vehicle before landing in front of one of GM's Cruise vehicles. The robotaxi braked hard but ran over the person. It then pulled over for safety, driving 20 feet at a speed of up to seven miles per hour with the pedestrian still under the car.
The incident "contributed to the company being blocked from operating in San Francisco and halting its operations around the country for months," reports the Washington Post: The company initially told reporters that the car had stopped just after rolling over the pedestrian, but the California Public Utilities Commission, which regulates permits for self-driving cars, later said Cruise had covered up the truth that its car actually kept going and dragged the woman. The crash and the questions about what Cruise knew and disclosed to investigators led to a firestorm of scrutiny on the company. Cruise pulled its vehicles off roads countrywide, laid off a quarter of its staff and in November its CEO Kyle Vogt stepped down. The Department of Justice and the Securities and Exchange Commission are investigating the company, adding to a probe from the National Highway Traffic Safety Administration.
In Cruise's absence, Google's Waymo self-driving cars have become the only robotaxis operating in San Francisco.
In June, the company's president and chief technology officer, Mohamed Elshenawy, is slated to speak at a conference on artificial-intelligence quality in San Francisco.
Dow Jones news services published this quote from a Cruise spokesperson: "The hearts of all Cruise employees continue to be with the pedestrian, and we hope for her continued recovery."
nda (Score:1)
Did they make her sign an NDA?
Re: (Score:3)
NDA for what? I doubt she saw any technology underneath the vehicle, maybe some capacitors and the odd microcontroller chip.
Re: (Score:2)
I think OP means a non-disparagement clause.
Probably not. The story is very public and was very thoroughly investigated by public entities. What more can she say that would matter?
"I got run over and dragged by their robot and it sucked and it hurt and I hate them and Nyah!" Shrug. Doubt the lawyers would bother trying to get that in there. The faster they resolve this the faster they can get back to the streets to run over other people.
There are already crisis laws in place for this (Score:1)
Predictions:
1. Self driving cars grow to the majority of the miles driven on the road
2. Major flaws discovered in self driving cars, their integration, government monitoring, etc.
3. Entire country's mobility reduced to 10% of normal due to the flaws
Presidential economic-emergency executive order; Congress passes another law to exempt autonomous-car manufacturers from liability and put any injured persons in a death-by-mediation-panel position.
During the 2020 pandemic, the federal government passed a law
Re: (Score:2)
The NDA would cover the exact amount of damages; what other terms were agreed to, such as no further claims that might arise in the future; the existence of a non-disparagement agreement; or more details about what Cruise said or did in the negotiations, or during or after the accident.
Seems like an edge case (Score:2)
You'd think the engineers would have considered the case where stage 1 (don't hit anyone in the first place) failed and had a selection of options available for stage 2 (wait for help) that didn't include continuing to drive with a human on or under the outside of the vehicle.
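The two-stage idea above can be sketched as a toy decision policy. To be clear, the names and structure here are entirely hypothetical, not anything from Cruise's actual codebase; the sketch just makes the commenter's point concrete:

```python
# Toy post-collision decision policy; purely hypothetical, not Cruise's
# actual software. The point of the comment above: once a pedestrian
# impact is detected, "pull over" should not be a selectable option,
# because the person may be on or under the vehicle.
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    PULL_OVER = auto()      # deliberately never chosen after an impact
    STOP_AND_HOLD = auto()  # stop in place, hazards on, summon help

def post_collision_action(impact_detected: bool) -> Action:
    if impact_detected:
        # Do not move the vehicle until a human confirms it is clear.
        return Action.STOP_AND_HOLD
    return Action.CONTINUE

print(post_collision_action(True).name)  # STOP_AND_HOLD
```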
Re: Seems like an edge case (Score:2)
Stage 1, as you put it, was literally unavoidable in this case. Stage 2 was a case a human driver likely wouldn't even have been aware of; there have literally been cases where people were dragged for miles without the human driver noticing until somebody else flagged them down.
I really don't get why people place such a heavy blame on driverless technology over this, particularly considering the whole incident was caused by a human driver with a guilty mind.
Re: Seems like an edge case (Score:5, Insightful)
You wouldn't necessarily know you had them pinned under your car. But a human might notice bits of human still poking up over the hood, hear a scream, notice the car bump only once if a front tire went over someone but not the rear, etc. and consider the possibility.
I'm betting the autodrive system doesn't have sufficiently broad capabilities to take that input and infer the presence of a human in danger of further harm. It certainly can't stop, get out, and determine the best way to proceed after examining the environment from a different perspective, never mind provide even rudimentary first aid.
On the other hand, it's probably less likely to hit someone in the first place, and almost certainly not going to take off to escape the consequences of the accident.
Re: (Score:3)
The car noticed and braked hard and hit anyway. A human who did that would not then drive the car to the side of the road to a safe(ish) area because the human was still there, somewhere. You wait and see and then move the car only if the person isn't under it.
Re: (Score:2)
And my usual question: is this car still safer than human drivers? So does sidelining it for improvements cause delays and thus increase deaths?
All I know is money has to wormhole into lawyers' pockets somehow.
Re: (Score:2)
And my usual question: is this car still safer than human drivers?
We don't have the data required to answer that question. Waymo and Cruise don't release it.
Re: (Score:2)
>The car noticed and braked hard and hit anyway
Given the circumstances, I don't think I blame the autodrive... yet. It likely did as well or better than a normal human driver. It would be nice to extend its environmental awareness to predict potential issues caused by other vehicles. If it had predicted the other car was going to hit a pedestrian, it might have braked earlier until the accident outcome was determined and it was safe to continue.
Hell, in that case it should be calling 911 and if it has
Re: (Score:2)
How do you see the robot doing better than a human in this case? It did as bad as it possibly could have.
Everyone seems to agree the hit was unavoidable by either robot or human. Ok, equal on that part.
Then the robot did the worst thing possible. It dragged her 20 feet.
A human may or may not have done that. Most people would stop and get out to see wtf was going on, not drive 20 more feet while a woman is potentially under their car.
This one goes to humans. Hands down.
Re: (Score:3)
Re: Seems like an edge case (Score:4, Insightful)
Of course. Human beings and many other animals will probably always deal with novel situations far better than any algorithm.
Re: (Score:2)
Human beings and many other animals will probably always deal with novel situations far better than any algorithm.
And the algorithm will deal better with the normal stuff that is a hundred times more common.
The biggest reason SDVs are safer than humans is response time.
When something goes wrong in traffic, an SDV reacts in a millisecond. A human takes 1.5 seconds from the incident happening until their foot starts to depress the brake.
At 60 mph, 1.5 seconds is 132 feet.
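As a quick sanity check on that arithmetic (plain unit conversion, nothing vendor-specific):

```python
# Reaction distance: feet a car travels before the brakes are even touched.
# Plain unit conversion; the 1.5 s human figure comes from the comment above.
def reaction_distance_ft(speed_mph, reaction_s):
    """Distance covered (in feet) during the reaction delay."""
    return speed_mph * reaction_s * 5280 / 3600  # mph -> ft/s is *5280/3600

print(reaction_distance_ft(60, 1.5))  # 132.0 ft for a typical human
# A 1 ms response at 60 mph would cover under a tenth of a foot.
```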
Re: (Score:2)
True, but humans usually start that 1.5 seconds BEFORE the incident. We slow down when things get different, rather than when we realize a problem has happened.
In this case, the SDV probably ignored the fact that the pedestrian was hit by another car and only started to slow after the SDV hit the pedestrian.
A human most likely would have hit the brakes when the other car hit the pedestrian, rather than waiting till she bounced in front of the SDV.
Re: (Score:2)
This is programming, and it will inevitably improve. I still don't think we're anywhere near the level of capability required for allowing these vehicles on the road in places with frequently bad weather or construction.
Even things like the Tesla autopilot where you're supposed to sit behind the wheel ready to take over if the AI fails... that shouldn't be allowed either. If the system isn't safe, then it should be advisory only. Though things like adaptive cruise seem safe enough.
Re: (Score:2)
It will improve over time, sure. Using live humans as unwilling guinea pigs on real streets.
Not ok.
Re: (Score:2)
Your logic is faulty. You are attempting to create the perfect self driving car software. We do not need that, nor would we want it.
The question is not whether the current software is capable enough for bad weather etc.
Instead, the question is whether the current software is better than the current average driver. Truthfully, it should be better than the average NEW driver, as we let teenagers drive. But given the assumption that new drivers will improve quickly, it is reasonable to set the bar at curre
Re: (Score:2)
Re: (Score:2)
Instead, the question is whether the current software is better than the current average driver.
We don't have the data necessary to answer that question.
Re: Seems like an edge case (Score:2)
We are not talking about self-driving cars driving without causing damage anymore, but self-driving cars cleaning up the mess of human drivers.
And whoever says "a human wouldn't": A human shouldn't. But whoever says they wouldn't have done this is lying to themselves.
Re: Seems like an edge case (Score:2)
Re: (Score:2)
Re: (Score:2)
In this case, the SDV probably ignored the fact that the pedestrian was hit by another car and only started to slow after the SDV hit the pedestrian.
If Cruise had released the video or data from the incident, then we could answer that question.
Re: Seems like an edge case (Score:2)
Re: (Score:2)
If you include "pikachu face" in the favourable outcomes, perhaps. Otherwise your statement only holds if we are talking about situations where humans have enough time to process.
Re: (Score:2)
You wouldn't necessarily know you had them pinned under your car. But a human might notice bits of human still poking up over the hood, hear a scream, notice the car bump only once if a front tire went over someone but not the rear, etc. and consider the possibility.
Particularly given the time and distance involved in this case. The car's computer had recognized that it hit a pedestrian (and again, in a way that was totally unavoidable, even by the absolute best human driver imaginable). Unlike the other driver who caused the incident, it pulled over. It was while pulling over that this person was dragged, about 20 feet at 7 mph. Given the speed and distance, we're talking two seconds. Let's go ahead and add another second for (very slow) acceleration and braking.
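The two-second figure is just distance divided by speed. A quick check, assuming a constant 7 mph (a simplification, since the car was also accelerating and braking):

```python
# How long does it take to cover 20 feet at 7 mph?
def drag_duration_s(distance_ft, speed_mph):
    """Seconds to cover distance_ft at a constant speed_mph."""
    speed_fps = speed_mph * 5280 / 3600  # 7 mph is about 10.27 ft/s
    return distance_ft / speed_fps

print(round(drag_duration_s(20, 7), 2))  # 1.95
```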
Re: (Score:2)
Humans tend to hit the brakes when something out of the ordinary happens nearby. Such as a woman flying in the air after getting hit by another car nearby.
The robot doesn't recognize that situation and doesn't take action until she's already directly in front of it.
A human may not have hit her at all, much less dragged her 20 feet.
Btw, 20 feet is roughly 1.5 car lengths. Cars are typically about 14 feet long depending on make, model, etc., and obviously will vary greatly between a Mini Cooper and a big-ass SUV, but 14 ft is a fair average number.
Re: (Score:2)
Re: (Score:2)
Humans tend to hit the brakes when something out of the ordinary happens nearby. Such as a woman flying in the air after getting hit by another car nearby.
There are many reasons why this is highly unlikely.
1) https://exploringyourmind.com/... [exploringyourmind.com] (In fact, I'll bet this is why the pedestrian was hit by the first car)
2) Experienced drivers ultimately look into the distance, less so the periphery unless they're looking for something in particular. Another poster mentioned deer and a bouncing ball for instance -- sure, in the distance, right ahead of you. But off to the side, especially if obstructed by the passenger side A pillar? Unlikely.
3) https://www.youtube.co [youtube.com]
Re: (Score:2)
I agree the first driver is the one at fault here, for the initial hit anyway. If the Cruise hadn't dragged her it would be off the hook entirely, but the Cruise is responsible for that dragging.
Do you think human drivers in the general case wouldn't notice a body flying around?
It's anecdotal, but I've been driving for 40+ years; it's been my experience that people generally hit their brakes when -anything- happens, especially if a nearby car does something (like launching a pedestrian into the air).
My very personal ex
Re: Seems like an edge case (Score:2)
It's quite situational, but remember the gorilla experiment. Also how high in the air did the pedestrian go? Sounds like she just rolled over the hood or the fender. If she did anything more than that, getting dragged 20 feet would likely be the least of her problems.
Re: (Score:2)
I'm familiar with the gorilla experiment but a classroom is a very different situation. Entirely different part of the brain is engaged watching a professor on stage vs driving 4000 lbs of steel on the streets where death and injury are an option.
Hah, if it was me in class not only would I have not seen the gorilla, I wouldn't have noticed the professor either in most of my classes. For the first 2 classes everyday I sat in the back and read the newspaper. I don't do that when driving :-)
Re: (Score:2)
The robot doesn't recognize that situation and doesn't take action until she's already directly in front of it.
You're making the assumption that the car can't predict when an object moving towards its path may cross its path, and only reacts once the object crosses its path. Do you know this for certain?
Re: (Score:2)
She rolled over the hood of the car one lane over.
I'll bet my life vs. yours that the Cruise did not detect her on the hood of that first car and start braking when she was first hit.
I'm absolutely certain no one at Cruise programmed or trained it in any way to recognize a woman on the hood of the next car over as a potential problem it should avoid.
Do you know otherwise? That's not what the official timeline of events says happened. Maybe everyone lied.
Re: (Score:1)
Humans tend to hit the brakes when something out of the ordinary happens nearby. Such as a woman flying in the air after getting hit by another car nearby.
The robot doesn't recognize that situation and doesn't take action until she's already directly in front of it.
A human may not have hit her at all, much less dragged her 20 feet.
Btw, 20 feet is roughly 1.5 car lengths. Cars are typically about 14 feet long depending on make, model, etc., and obviously will vary greatly between a Mini Cooper and a big-ass SUV, but 14 ft is a fair average number.
Congratulations, you managed to construct the once-in-a-hundred-million-miles situation where indeed the robot AI will perform worse than an average driver. Which doesn't change one iota the fact that it's still a much better driver than humans in 99 out of 100 dangerous situations.
Re: (Score:2)
1 in 100,000 million? Really?
Okey dokey.
And 99.847% of all statistics are made up on the spot.
Humans have driven a zillion miles every day for the last 100 years, avoiding all sorts of shit in all sorts of situations we don't let robots even get into, and none of those avoided events is reported. You have zero basis to compare robots in well-mapped and semi-controlled environments vs. humans doing -everything- in -every- possible environment on a daily basis, or any reason whatsoever to believe robots are better dri
Re: Seems like an edge case (Score:2)
Re: (Score:2)
So the car is "almost certainly not going to take off to escape the consequences of the accident"...
Nobody was going to hold the car responsible anyway. The owners, however, did try to escape, by the sound of it.
Re: (Score:2)
You wouldn't necessarily know you had them pinned under your car.
Spot the person who's never driven a sports car... or probably anything other than an SUV... You'd absolutely notice a person or other animal stuck under your car as it will increase drag and friction... Quite significantly if it's not a massively oversized penis replacement.
In a normal hatchback (Toyota Corolla or similar), getting over a person would require significant acceleration to be applied. Even a high-powered AWD Subaru WRX STI wouldn't just be able to roll over a body.
This is
Re: (Score:2)
And presumably those drivers were sued too. In this case, an "entity" was sued for dragging a person and that entity just so happened to be the manufacturer of the autonomous software. So this case isn't really any different from those human driver cases.
Re: (Score:2)
Stage 1, as you put it, was literally unavoidable in this case. Stage 2 was a case a human driver likely wouldn't even have been aware of; there have literally been cases where people were dragged for miles without the human driver noticing until somebody else flagged them down.
I really don't get why people place such a heavy blame on driverless technology over this, particularly considering the whole incident was caused by a human driver with a guilty mind.
I suspect a lot of those drivers were either impaired (by substance or age) or trying to pull off a hit and run.
The real issue for the self-driving car is memory. If you hit a person and they fall down, you know you just hit someone and that they're probably lying in front of the vehicle, so you're unlikely to keep on driving, even to pull over, until you figure out where they are.
From everything I've seen these self-driving systems don't really have memories. They remember a bit in the form of object tracking, b
Re: (Score:2)
You'd think the engineers would have considered the case where stage 1 (don't hit anyone in the first place) failed and had a selection of options available for stage 2 (wait for help) that didn't include continuing to drive with a human on or under the outside of the vehicle.
I don't think that technology is even close to the point where the function activateFoolProofPedestrianDetection() is implementable yet. The self driving car evangelists have been promising us for years that 99.999% accident free fully self driving cars are just around the corner. However, every time these AI Bros think they are within sight of solving self driving car AI some new edge case that's somewhere between hard and impossible for an AI to detect and solve pops up. The problem with this is that the
Re: Seems like an edge case (Score:2)
Ridiculous (Score:1)
Re: (Score:2)
Re: (Score:2)
Glad I live in a country where there's no such thing as punitive damages, and only actual damages are awarded (with serious injury and loss of income, that can still get into the millions). You do get some for mental anguish, but those amounts are in the 4-6 figure range, not millions.
Re: (Score:3)
Guess what?
The future of self-driving cars of all kinds is that the corporation behind it takes liability, and the "driver" (who will not be legally driving, eventually) takes none of the responsibility.
That's what they're aiming for, that's what they want.
Does that mean that a) they will just have insurance they'll pass on to customers in a monthly subscription, which will buy their way out of their software's mistakes? Or b) the software's licence to operate will be rescinded nationwide until the bug is pro
As usual (Score:2)
Re: (Score:2)
You're assuming that the human driver had insurance. Or a license.