Transportation AI

Cruise Suspends All Driverless Operations Nationwide (apnews.com) 139

GM's autonomous vehicle unit Cruise is now suspending driverless operations all across America.

The move comes just days after California regulators revoked Cruise's license for driverless vehicles, declaring that Cruise's AVs posed "an unreasonable risk to public safety" and "are not safe for the public's operation," while also arguing that Cruise had misrepresented information related to its safety. And the Associated Press reports that Cruise "is also being investigated by U.S. regulators after receiving reports of potential risks to pedestrians and passengers." Human-supervised operations of Cruise's autonomous vehicles, or AVs, will continue — including under California's indefinite suspension...

Earlier this month, a Cruise robotaxi notably ran over a pedestrian who had been hit by another vehicle driven by a human. The pedestrian became pinned under a tire of the Cruise vehicle after it came to a stop — and then was pulled for about 20 feet (six meters) as the car attempted to move off the road. The DMV and others have accused Cruise of not initially sharing all video footage of the accident, but the robotaxi operator pushed back — saying it disclosed the full video to state and federal officials. In a Tuesday statement, Cruise said it is cooperating with regulators investigating the October 2 accident — and that its engineers are working on a way for its robotaxis to improve their response "to this kind of extremely rare event." Still, some are skeptical of Cruise's response to the accident and point to lingering questions. Bryant Walker Smith, a University of South Carolina law professor who studies automated vehicles, wants to know "who knew what when?" at Cruise, and maybe GM, following the accident.

Also earlier this month, the National Highway Traffic Safety Administration [or NHTSA] announced that it was investigating Cruise's autonomous vehicle division after receiving reports of incidents where vehicles may not have used proper caution around pedestrians in roadways, including crosswalks. The NHTSA's Office of Defects Investigation said it received two reports involving pedestrian injuries from Cruise vehicles. It also identified two additional incidents from videos posted to public websites, noting that the total number is unknown.

In December of last year, the NHTSA opened a separate probe into reports of Cruise robotaxis that stopped too quickly or unexpectedly quit moving, potentially stranding passengers. The investigation was kicked off by three rear-end collisions that reportedly took place after Cruise AVs braked hard. According to an October 20 letter that was made public Thursday, since beginning this probe the NHTSA has received five other reports of Cruise AVs unexpectedly braking with no obstacles ahead. Each case involved AVs operating without human supervision and resulted in rear-end collisions.

Cruise emphasized on Twitter/X that its nationwide suspension of driverless testing "isn't related to any new on-road incidents." Instead, "We have decided to proactively pause driverless operations across all of our fleets while we take time to examine our processes, systems, and tools and reflect on how we can better operate in a way that will earn public trust."

Their announcement began by stressing that "The most important thing for us right now is to take steps to rebuild public trust."
Comments Filter:
  • by fluffernutter ( 1411889 ) on Saturday October 28, 2023 @05:50PM (#63962358)
    I'm glad they are recognizing situations where an accident was caused because the AV just doesn't drive human enough. For this to be safe, AVs will need to blend seamlessly with people.
    • Either you're having a stroke or this is something ChatGPT generated.

    • Re:people (Score:5, Insightful)

      by 93 Escort Wagon ( 326346 ) on Saturday October 28, 2023 @05:57PM (#63962374)

      I think people are finally realizing the whole "move fast and break things" ethos isn't an acceptable model for a large range of endeavors (autonomous vehicles being an obvious one; medical treatments being another). A startup company's profits shouldn't be built on an idea that some number of human injuries or deaths is acceptable.

      • To be fair, driving itself is based on the idea that some risk is acceptable for mobility.

  • by at10u8 ( 179705 ) on Saturday October 28, 2023 @05:59PM (#63962378)
    Translation: Our lawyers told us that if another incident like this happens our liability could be unlimited.
    • If anyone remembers the McDonald's hot coffee lawsuit, the reason the payout was so large was that McDonald's was already aware there was a risk of severe burns prior to the incident that was sued over. Penalties increase drastically if the business in question is aware of the risks.
      • If anyone remembers the McDonald's hot coffee lawsuit, the reason the payout was so large was that McDonald's was already aware there was a risk of severe burns prior to the incident that was sued over. Penalties increase drastically if the business in question is aware of the risks.

        Some grifts are timeless.

        https://www.cnn.com/2023/10/25... [cnn.com]

    • Yeah, it was expected.

      It seems every time self driving cars have been in the news for problems, it was a Cruise car. Read about traffic jams, cars piling up, intersections blocked, and Cruise was the cause.

      The other companies seem to have solved the issues, and based on videos that occasionally make the rounds, are far better than human drivers, not just at reacting to an unexpected swerve but at detecting and avoiding hazards before a swift response is needed.

      • It seems every time self driving cars have been in the news for problems, it was a Cruise car...The other companies seem to have solved the issues,

        100% false. You just haven't been paying attention.

  • by angryargus ( 559948 ) on Saturday October 28, 2023 @06:07PM (#63962394)

    Cruise was definitely less skilled than Waymo and dragged down the image of both companies, but neither company provides a straightforward way for the public to provide feedback or report active problems (e.g., akin to "how's my driving?").

    They do provide a phone # to first responders, which apparently isn't good enough, since they still resort to breaking the windows of these cars in emergency situations. Cruise has a unique name on each car, but Waymo doesn't even bother with that. In the case of Waymo, the waymonauts I've spoken with think it's quite sufficient to have a passive web form on their website. Neither company's safety drivers seem to report when their vehicles do something that violates the law (I doubt the drivers have any training or familiarity with the state's vehicle code).

    • by iAmWaySmarterThanYou ( 10095012 ) on Saturday October 28, 2023 @10:38PM (#63962708)

      Of course waymo has nothing more than a web form.

      Have you ever tried to contact a real human with authority at google to get customer service or report a real problem?

  • by joe_frisch ( 1366229 ) on Saturday October 28, 2023 @07:09PM (#63962464)
    Cruise has been operating taxis in SF now for a while. There must be enough data on total damage / injury per passenger mile to make a fair comparison with human drivers. It's not surprising that automated vehicles will make different mistakes than humans; the safety issue is based on the overall rates.
    • by quantaman ( 517394 ) on Saturday October 28, 2023 @09:09PM (#63962610)

      Cruise has been operating taxis in SF now for a while. There must be enough data on total damage / injury per passenger mile to make a fair comparison with human drivers. It's not surprising that automated vehicles will make different mistakes than humans; the safety issue is based on the overall rates.

      They put out this study [getcruise.com] a month ago that claims drastically better safety (94% fewer crashes than human where the AV was the primary contributor). I'm sure they influenced the metrics/design of the study, but I don't doubt the AVs cause fewer accidents. I think the halt is more to do with their over-caution causing them to stop and block streets (and emergency vehicles).
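
      To make the per-mile comparison concrete, here is a minimal sketch of what such a study actually computes. All numbers below are made up for illustration; neither Cruise's real mileage nor its crash counts appear in this thread:

      def crashes_per_million_miles(crashes, miles):
          # Normalize raw counts by exposure so fleets of very
          # different sizes can be compared at all.
          return crashes / (miles / 1_000_000)

      human_rate = crashes_per_million_miles(4_200, 1_000_000_000)  # hypothetical human baseline
      av_rate = crashes_per_million_miles(12, 5_000_000)            # hypothetical AV fleet

      reduction = 1 - av_rate / human_rate
      print(f"human {human_rate:.1f}/M mi, AV {av_rate:.1f}/M mi, reduction {reduction:.0%}")

      The catch is the denominator: with only a few million driverless miles and a handful of events, the AV estimate carries a very wide error bar, which is one reason people argue over these studies.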

      • Wish I could mod up. Data is a great addition to the discussion
        • Cruise just got kicked off the road because they lied to the DMV. Why do you believe they aren't lying to you?

          Their cars need a safety driver.
      • I just got a few of the local youngsters who drive stolen cars round town for kicks to do a study on their own driving record and they, too, reckoned they were 94% better at driving than the ordinary driver.

      • I think the halt is more to do with their over-caution causing them to stop and block streets (and emergency vehicles).

        You're wrong. The halt is because they lied to the DMV. They got kicked off the streets in San Francisco, and removed them in other cities before getting kicked off there, too.

        It's fine to test self-driving cars on the street, but they need a safety driver until they are ready.

      • by tlhIngan ( 30335 )

        I think the halt is more to do with their over-caution causing them to stop and block streets (and emergency vehicles).

        I think it's more because of the accident. That's why California pulled their license. I think it's definitely out of caution in that they want to know what exactly happened to cause it - and in the meantime, halting operations just in case it's a fleetwide fault is generally a good idea.

        This happens in a lot of fields - in aviation, any aircraft crash has the potential to ground the entire fleet.

    • by martin-boundary ( 547041 ) on Saturday October 28, 2023 @09:30PM (#63962640)
      There is no fair comparison with human drivers that *can* be made. A human is responsible for his or her own actions. If a taxi driver breaks the law, or kills someone, then they are personally going to catch the consequences, including possible jail time. An AI is just an appliance; what are we going to do when it drives over a person, drags them around for a bit, and sits on them making screaming noises? We can turn it off. It's always satisfying to turn off an appliance, I guess. Will there be punishment? How about jail time for someone? No? Then there is no human comparison possible. Our society depends on deterrence and individual learning to correct errors; machines cannot be deterred and do not learn from their own mistakes, only from specially curated training sets and trivial simulated environments.
      • by joe_frisch ( 1366229 ) on Saturday October 28, 2023 @09:39PM (#63962650)
        Presumably the reason we "punish" human drivers for accidents is in order to reduce the number of accidents they cause; it's the way we "program" humans. An AI can be programmed in more efficient ways. It's easy to set up the right motivation for the AI developers by requiring the companies to pay to insure their vehicles for liability in the case of accidents. AI most certainly can learn from its errors, at least in the collective sense that the data collected is used for training, and possibly in the individual sense as well, depending on the technology.
        • AIs are not programmed in more efficient ways, because there are no datasets that would make this possible. The datasets used are curated and censored to limit whole classes of failures which show up in the wild. That's what human supervision effectively does, and that's what driving in limited conditions on quiet backroads does, for example.

          I agree with you that there is a need to properly motivate the designers and owners of the AI appliances.

        • by iAmWaySmarterThanYou ( 10095012 ) on Saturday October 28, 2023 @10:52PM (#63962726)

          If I run someone over I can do real jail time for vehicular manslaughter.

          If a robo car runs over the same person in the same circumstances, who goes to jail? No one.

          Once someone at the company has the same risk and responsibility as I do, I'd have no problem having robo cars on public roads. There is zero chance any of these cars would be out there if a company exec were at risk of jail and a felony conviction.

          • by Jeremi ( 14640 )

            With companies, the risk isn't going to jail, it's losing a lawsuit and having to pay out huge amounts in damages.

            Not that either outcome is particularly relevant as a motivation; these companies aren't trying to be reckless, it's just that autonomous driving is a difficult problem to solve with 100% reliability. Holding onerous consequences over their heads isn't likely to make them perform better than they would have otherwise.

            • It is not likely to make them better, but it is likely to make them stop overstating the capabilities. Today's roads, self-driving technology, and laws are not ready for this experiment with public lives. That is what will stop if execs are threatened with jail time in the case of human injury, death, etc.

              Fines, company shutdown etc. are no deterrent: fines come from investor money, the execs can always find another job.

    • The real fuckup here was that Cruise hid information from the DMV. Because of that, it would be foolish to trust any numbers that they produce.

      Testing self-driving cars on the street is fine, but they need to have a safety driver.
  • So what happens when a driverless car gets into a collision and you are injured?

    Do you get to sue the manufacturer, the owner of the vehicle, or the other victim that had the temerity to collide with your robot-driven tank?

    In the near future, when you are run over by a self-driving Uber, they can argue that their 2-person legal defense team is too busy being sued by 10 other victims and that your case would have to be scheduled sometime in 2050. Do you think a judge can force Uber to hire more lawyers? Or w

    • Fortunately court scheduling doesn't work that way.

      Anyway, the real problem isn't who to sue. The problem is when they kill someone no one goes to jail but when a human does the exact same thing they get hit with a felony vehicular manslaughter charge and do time.

      • by Jeremi ( 14640 )

        The problem is when they kill someone no one goes to jail but when a human does the exact same thing they get hit with a felony vehicular manslaughter charge and do time.

        Do they, though? Maybe if it can be shown that they did it on purpose, or if they were driving drunk, or if they fled the scene of the accident.

        If (outside of hitting someone) they "do the right thing" (i.e. pull over immediately, call 911, aid the victim as best they can, are honest with the police about what happened), they likely won't be prosecuted; at worst their insurance rates will go up or they'll lose their license.

  • by AlanObject ( 3603453 ) on Saturday October 28, 2023 @07:36PM (#63962504)

    I happen to believe that Tesla is on the right track with regard to their driver-less technology road map. Instead of writing procedural code for all the cases that will arise (impossible) they are attempting to train their neural net with as much real-world data as they can grab. And they have more than anyone -- a half billion miles of beta-FSD logged.

    Yet look at a video like this one [youtube.com] of where it fails. The maker of the video is a) not a Tesla hater, and b) designs and executes a really good test case for a very basic driver-less software requirement: not hitting a kid in the road.

    It fails. The latest software version does better than most, but it still hits (oh so gently) the simulated kid and dog and once they are on the ground then proceeds to run over them and continue the trip! Obviously, once the figures are knocked down and under the front bumper the camera-only AI can no longer see them. It isn't aware an accident happened. It sees a clear road so tries to proceed.

    Watch the video and make your own judgement.

    I just don't see how any more neural net training would solve this if it has not already. As human drivers, we have all sorts of cues that an accident has happened or we hit something. A bump. People screaming. Honking horns. A scraping sound from under the car. Both Tesla and Cruise seem to have none of this, and their driver-less car will proceed to try to complete the trip if it can, because that is what it is programmed to do, because that is what pays. It will attempt to do so with blood on the fender because it can't see it.

    The fact that Tesla and the others are failing very basic tests like this at this late stage -- years and years after robotaxis were promised -- does not give me a lot of confidence that true self-driving cars are shipping soon.
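
    For what it's worth, the fix being gestured at here -- fusing the non-visual cues a human gets for free -- is easy to sketch, even if productizing it is hard. A hypothetical interlock in Python, with invented sensor names and thresholds, not any vendor's actual architecture:

    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        accel_spike_g: float     # peak IMU jolt since the last frame
        audio_anomaly: float     # 0..1 score from a scrape/impact sound classifier
        camera_path_clear: bool  # what vision alone believes

    IMPACT_G = 0.8      # assumed jolt threshold for a low-speed strike
    AUDIO_THRESH = 0.6  # assumed classifier threshold

    def may_proceed(frame, impact_latched):
        # Once any non-visual impact cue fires, latch a stop until a human
        # clears it, even if the cameras now see an empty road ahead.
        if frame.accel_spike_g > IMPACT_G or frame.audio_anomaly > AUDIO_THRESH:
            impact_latched = True
        return frame.camera_path_clear and not impact_latched, impact_latched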

    • by Ichijo ( 607641 )

      The latest software version does better than most, but it still hits (oh so gently) the simulated kid and dog and once they are on the ground then proceeds to run over them and continue the trip!

      I suspect that the driver monitoring system noticed that the driver was both (1) fully attentive and (2) not shocked when the car drove over the simulated kid, and correctly deduced from these clues that the kid wasn't real.

      A wiser car would have played along with the humans trying to fool it.

      • Then what's a better test? Asking a real kid to run across the road?

      • So you think it's ok for the car to splatter a kid and dog because it "detects" the mood of the driver?

        Wut?

      • I suspect that the driver monitoring system noticed that the driver was both (1) fully attentive and (2) not shocked when the car drove over the simulated kid

        1 I believe, 2 I don't.

      • The latest software version does better than most, but it still hits (oh so gently) the simulated kid and dog and once they are on the ground then proceeds to run over them and continue the trip!

        I suspect

        Wrongly, oh so wrongly.

        that the driver monitoring system noticed that the driver was both (1) fully attentive and (2) not shocked when the car drove over the simulated kid, and correctly deduced from these clues that the kid wasn't real.

        OK, let's ignore the insanity of #1, where you think it's fine to hit an unexpected object because the driver was seemingly paying attention.

        You think the car is deciding "oh, the driver doesn't look too concerned about the kid-looking object I just hit, so I'll drive right over it".

        I'll tell you exactly what happened in that test.

        The Tesla weirdly underreacted to the unexpected obstacle in the road, which a human driver would also likely do, but it's a big disappointment since instant

    • Seems like the limiting factor, the final difficult push to making these viable, is going to be emulating not just the vast amount of visual and auditory information but the mass of context around that information that the human brain is able to deal with in short order, which can lead us to act on information that even our eyes and ears don't necessarily perceive.

      Since we all inherently operate that way, sometimes it's easy to forget just what a massively complex instrument we are working with, irreplaceable

    • Great post, thank you.

      their driver-less car will proceed to try to complete the trip if it can because that is what is programmed to do because that is what pays. It will attempt to do so with blood on the fender because it can't see it.

      It occurs to me there's a similarity between the above and what chatbots do that's labeled "hallucinating", which is that the chatbots spew lies/bullshit just as confidently as they "recite facts"... similar to the car just deciding to go forward. And the reasons are basically the same: neither system has a true understanding of what's transpiring.

    • The fact that Tesla and the others are failing very basic tests like this at this late stage -- years and years after robotaxis were promised -- does not give me a lot of confidence that true self-driving cars are shipping soon.

      Who cares? As long as the numbers add up in the AI's driving favor, that is all that matters. Kids get run over by humans all the time. What does it matter if we let an AI do it too?

      (this is not MY argument, it was forced on me)

  • by thesandbender ( 911391 ) on Saturday October 28, 2023 @07:43PM (#63962506)
    I used to race bicycles in college and one of the first things we taught new members was that children and animals (dogs) are the most dangerous things on/near a race course because you have NO idea what they are going to do. It's not possible to make a flowchart or do a predictive analysis on them. They're just going to do what they do. Watch them like hawks and be prepared to react.

    The fundamental problem is that the current crop of AI/machine intelligence doesn't really think or reason like people. They have to be trained on every possible variation of every situation to have any hope of reacting like a human. And even then, what's "human"? "(Human) Driver drags pedestrian 20m while pulling over." is a news story you unfortunately do see.

    I don't think we'll have fully driverless cars driving everywhere in the near future. I think it's going to evolve into "assisted" driving and then have designated roadways that are cleared for automated driving (with limitations on what humans can do... e.g. sidewalks with barriers). It will continue to evolve after that. Unless we put restrictions on the human side of the equation, we're not going to have automated driving everywhere anytime soon.
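
    As a sketch of how "watch them like hawks" translates into planning terms: treat child and animal detections as high-uncertainty agents and plan against their worst-case reachable area rather than a single predicted path. Everything below (agent classes, speeds, deceleration) is an assumption for illustration, not anyone's production values:

    WORST_CASE_SPEED = {"adult": 2.5, "child": 4.0, "dog": 6.0}  # m/s, assumed

    def reachable_radius(kind, horizon_s):
        # Distance the agent could cover in horizon_s seconds, in any
        # direction, since you cannot flowchart a child or a dog.
        return WORST_CASE_SPEED.get(kind, 3.0) * horizon_s

    def safe_speed(distance_m, kind, horizon_s=2.0, decel=6.0):
        # Fastest speed from which the car can still stop short of the
        # agent's worst-case reachable set, from v^2 = 2*a*d (reaction
        # time omitted for brevity).
        margin = distance_m - reachable_radius(kind, horizon_s)
        if margin <= 0:
            return 0.0  # the agent could already reach our path: stop or crawl
        return (2 * decel * margin) ** 0.5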
  • by hunter44102 ( 890157 ) on Saturday October 28, 2023 @09:02PM (#63962606)
    I was on the highway this summer and saw a landscape trailer with its tailgate down and lots of equipment that was loose and moving around as he hit small bumps. He was speeding and got in front of me. I knew what was going to happen, so I slowed down and moved a couple lanes away. Sure enough, a small mower flew out onto the highway and people behind me had to brake hard. These are the rare events where a human can figure out what's going to happen and slow down/move, but an autonomous vehicle won't detect it until it's too late. They have a long way to go with AI.
  • I genuinely want to know how many accidents per passenger mile Cruise was encountering compared to the average human being. From what I've been hearing, it's orders of magnitude better.

    So a human got dragged underneath a car in some terrible accident. You wanna talk about accidents, when I was six years old, a friend's mom got pinned between two vehicles and suffocated to death. Accidents happen. What matters the most is objectively observing the rate at which accidents happen. And if Cruise has a lower rate of accident, then let's encourage the growth of this business, not throw the baby out with the bathwater. Let's just make sure Cruise has a good insurance policy that takes care of these situations.

    If we really want to save lives, we'd require everybody to drive smaller, lighter, and slower cars. If we're not willing to make that sacrifice, then teaching computers how to respond faster than humans to threats to human safety is the best compromise available.
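
    Both halves of that claim are easy to put rough numbers on. A back-of-envelope sketch; the reaction times, friction coefficient, and masses are textbook-style assumptions, not measured data:

    def stopping_distance(v, t_react, mu=0.7, g=9.81):
        # Reaction distance plus braking distance, v in m/s.
        return v * t_react + v**2 / (2 * mu * g)

    v = 13.4  # ~30 mph
    print(stopping_distance(v, 1.5))  # human-like reaction: ~33 m
    print(stopping_distance(v, 0.2))  # machine-like reaction: ~16 m

    # Kinetic energy scales with mass and the square of speed, so
    # "smaller, lighter, slower" compounds quickly:
    def kinetic_energy(m, v):
        return 0.5 * m * v**2

    print(kinetic_energy(2500, 13.4) / kinetic_energy(1200, 6.7))  # heavy at 30 mph vs light at 15 mph: ~8x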

    • If I ran over your mom I'd likely go to jail on a vehicular manslaughter charge.

      If a robot runs over your mom, nothing happens to anyone. I guess the family could sue. Whatever.

      A world where people can put 3-ton automated machines in public and not be held responsible when they kill people is an ugly, horrible world.

      The number of accidents/mile is irrelevant and a very nerdy way of looking at things. Your mother is still dead from a robot and you should just suck it up.

    • I genuinely want to know how many accidents per passenger mile Cruise was encountering compared to the average human being. From what I've been hearing, it's orders of magnitude better.

      You won't be able to find out because Cruise just got kicked off the road for lying to the DMV. You can't trust their numbers.

      Self-driving car research is great, but it should be done with a safety driver.

  • to lose faith in an overall technology. But yeah, "Jeem" does suck. Jeem should have to give back all of its 2008 bailout money with 15 years of interest.
  • Note the new info that the victim was dragged because the robot thought it would be a good idea to pull over after it hit her.
