Transportation

Feds Add Nine More Incidents To Waymo Robotaxi Investigation (techcrunch.com)

Federal safety regulators have uncovered nine more accidents during their investigation of Waymo's self-driving vehicles in Phoenix and San Francisco. TechCrunch reports: The National Highway Traffic Safety Administration Office of Defects Investigation (ODI) opened an investigation earlier this month into Waymo's autonomous vehicle software after receiving 22 reports of robotaxis making unexpected moves that led to crashes and potentially violated traffic safety laws. The investigation, which has been designated a "preliminary evaluation," is examining the software's ability to avoid collisions with stationary objects and how well it detects and responds to "traffic safety control devices" like cones. The agency said Friday it has added (PDF) another nine incidents since the investigation was opened.

Waymo reported some of these incidents; the others were discovered by regulators via public postings on social media and forums like Reddit, YouTube and X. The additional nine incidents include reports of Waymo robotaxis colliding with gates, utility poles, and parked vehicles, driving in the wrong lane against nearby oncoming traffic, and driving into construction zones. The ODI said it's concerned that robotaxis "exhibiting such unexpected driving behaviors may increase the risk of crash, property damage, and injury." The agency said that while it's not aware of any injuries from these incidents, several involved collisions with visible objects that "a competent driver would be expected to avoid." The agency also expressed concern that some of these incidents occurred near pedestrians. NHTSA has given Waymo until June 11 to respond to a series of questions regarding the investigation.

Comments:
  • by ndsurvivor ( 891239 ) on Friday May 24, 2024 @09:41PM (#64497529)
    As long as self-driving has a similar accident rate to human drivers, I think it should be allowed, and encouraged. It should only get better. Human drivers, on the other hand, will always be the same. It is not perfect at first, but gradually, it will get better.
    • As long as self-driving has a similar accident rate to human drivers, I think it should be allowed, and encouraged.

      It should only get better. Human drivers, on the other hand, will always be the same.

      It is not perfect at first, but gradually, it will get better.

      Well, the promise was that self driving cars would be as good as accident free.

      • Has any company actually promised when it would hit that goal? Didn't Musk put a date on it? I could be misremembering, but I think he alluded to it being ready a couple of years ago, and that's ludicrous.

        I doubt Waymo or any sane company has ever made that bold a claim to be accident-free by 20XX unless it's like 2045; only that it's their mission goal. If they did, that's just plain silly and should be looked at the same.

        • by Savage-Rabbit ( 308260 ) on Friday May 24, 2024 @10:57PM (#64497599)

          Has any company actually promised when it would hit that goal? Didn't Musk put a date on it? I could be misremembering, but I think he alluded to it being ready a couple of years ago, and that's ludicrous.

          I doubt Waymo or any sane company has ever made that bold a claim to be accident-free by 20XX unless it's like 2045; only that it's their mission goal. If they did, that's just plain silly and should be looked at the same.

          "I'm highly confident the car will drive itself for the reliability in excess of a human this year"

            -- Elon Musk in an earnings call, January 2021.

          Having established that one can always rely on Elon Musk to say something stupid, when did the public consent to being lab-rats in their experiments until they finally perfect their near 100% accident free self driving cars? ... whenever the hell that will be.

          • Just want to point out that "reliability in excess of a human", while strange phrasing, does not mean "as good as accident free" to me. I think I'd want to know what counts as "reliability" in this case.

            I'd consider an accident rate at 99.9999999% that of humans to be "in excess" if "reliability" includes "accident rate (lower is better)".

            Also, even Musk felt the need to disclaim the statement: "Highly confident" doesn't mean "sure".

            Personally, I think that around 10% of the accident rate of humans is about the best we can expect and is decently achievable.

            • by Anonymous Coward

              Just want to point out that "reliability in excess of a human", while strange phrasing, does not mean "as good as accident free" to me. I think I'd want to know what counts as "reliability" in this case.

              I'd consider an accident rate at 99.9999999% that of humans to be "in excess" if "reliability" includes "accident rate (lower is better)".

              Also, even Musk felt the need to disclaim the statement: "Highly confident" doesn't mean "sure".

              Personally, I think that around 10% of the accident rate of humans is about the best we can expect and is decently achievable.

              Having established that one can always rely on Elon Musk to say something stupid, when did the public consent to being lab-rats in their experiments until they finally perfect their near 100% accident free self driving cars? ... whenever the hell that will be.

              The standard for maximizing the saving of human life shouldn't be 100% accident-free; it should simply be "lower and less severe than human drivers." That's a much easier standard to meet. I acknowledge that self-driving cars will get into accidents that no human would ever get into*, while believing that self-driving cars will avoid a lot of the most common car accidents humans get into via inattentive driving, reckless driving, drunk driving, etc. I expect self-driving cars to get into situations where an accident is unavoidable, but to generally be able to avoid most accidents via very good "twitch" responses.

              As for the public consenting: that effectively happened when elected officials looked at the companies' presentation packages showing that, while not perfect, the cars shouldn't be any worse than actual drivers, and decided to approve the testing.

              *Though the universe keeps generating worse idiots, so I'm prepared to be proven wrong.

              They haven't even achieved "reliability in excess of a human" like Musk told us in 2021 he was 'confident' that we'd be getting by 2022. So how long are we supposed to shut up like good little peasants and be lab rats in Elon and the tech bro brigade's self driving car experiments before they finally get the self driving car accident rate down to one tenth that of human drivers? There is a limit to how much shit I'm willing to take from anybody, especially arrogant charlatans like Elon Musk and the rest of these tech bros.

              • They haven't even achieved "reliability in excess of a human" like Musk told us in 2021 he was 'confident' that we'd be getting by 2022. So how long are we supposed to shut up like good little peasants and be lab rats in Elon and the tech bro brigade's self driving car experiments before they finally get the self driving car accident rate down to one tenth that of human drivers? There is a limit to how much shit I'm willing to take from anybody, especially arrogant charlatans like Elon Musk and the rest of these tech bros.

                I think you're mixing up two measurements I proposed. The 10% rate is what I think is achievable. My standard for actually deploying it is only "less than humans", which is pretty much an order of magnitude easier to achieve. I have seen nothing that shows that the latter standard hasn't actually been met. There are reliability and cost issues otherwise, but from what I've read, their accident rate is "acceptable".

                Also, Musk being wrong just means Musk is wrong. Google/Waymo is reportedly much further along with their efforts.

                • They haven't even achieved "reliability in excess of a human" like Musk told us in 2021 he was 'confident' that we'd be getting by 2022. So how long are we supposed to shut up like good little peasants and be lab rats in Elon and the tech bro brigade's self driving car experiments before they finally get the self driving car accident rate down to one tenth that of human drivers? There is a limit to how much shit I'm willing to take from anybody, especially arrogant charlatans like Elon Musk and the rest of these tech bros.

                  I think you're mixing up two measurements I proposed. The 10% rate is what I think is achievable. My standard for actually deploying it is only "less than humans", which is pretty much an order of magnitude easier to achieve. I have seen nothing that shows that the latter standard hasn't actually been met. There are reliability and cost issues otherwise, but from what I've read, their accident rate is "acceptable".

                  You still haven't told us why it's entirely acceptable that these bozos are allowed to test pre-Alpha versions of their self driving cars on unsuspecting members of the public?

                  Also, Musk being wrong just means Musk is wrong. Google/Waymo is reportedly much further along with their efforts.

                  Reportedly? Waymo won't even disclose how many of their self driving cars they have on the roads or any details of their accident rates, and now they are withholding accident data from the Federal Government. This brings me right back to the question: why is it acceptable for Waymo to be allowed to test pre-Alpha versions of their self driving cars on unsuspecting members of the public?

                  • You still haven't told us why it's entirely acceptable that these bozos are allowed to test pre-Alpha versions of their self driving cars on unsuspecting members of the public?

                    Simple: It isn't true, nor is it an assertion that I've made. Why should I attempt to prove a negative?
                    "these bozos" aren't testing pre-Alpha versions. They're testing somewhere around Beta. The members of the public shouldn't be "unsuspecting", as we know they're out there. It's only relatively recently and in limited areas that they started shedding their safety drivers.

                    Reportedly? Waymo won't even disclose how many of their self driving cars they have on the roads or any details of their accident rates and now they are withholding accident data from the Federal Government.

                    It's around 400-600, though I imagine the exact number is constantly changing, which is why stating a number isn't seen as necessary.

                  • There will come a time in any testing cycle when synthetic data is no longer good enough and you need real data. That has to be balanced against safety. I see nothing here to make me think they are getting the balance wrong.
          • I can barely parse that statement of his.
    • I agree totally; if this is within human rates and not anything particularly egregious, it's just part of the learning process. We already have an entire system to deal with accidents, and this doesn't change any of that currently.

      I would say that human drivers don't have to always be the same, especially in America. I know every driver thinks they're not the problem on the roads, so I could very well be biased, but come on; I imagine most people here are responsible motorists and accidents happen based on my

  • by Required Snark ( 1702878 ) on Friday May 24, 2024 @10:41PM (#64497585)
    I was waiting to cross a major thoroughfare in an area where small autonomous delivery carts are roaming the sidewalks. Suddenly dozens of emergency vehicles roared down the street, lights flashing and sirens wailing.

    If one of those moronic little carts had been crossing the street and it got hit, a disaster could easily occur. The cart itself could go flying into someone and possibly kill them. The emergency vehicles could lose control and crash, hurting the occupants or someone else, and not make it to where they were going.

    Something of that ilk will never be smart enough or fast enough to avoid that kind of situation. It would be too expensive, because the economic model demands a really cheap system. It's not a matter of if this will happen, but when.

    If the CEO/CFO/CTO were personally liable for negligent homicide, they might consider the risk, but short of that, someone will be killed. Hell will freeze over before any executive faces jail time, no matter who gets hurt.

    • I don't think so. We are at a point in technology where we can simulate a portion of the human brain. I think it would just take a few Nvidia processors to "think" and make decisions based on sound and video inputs in order to be as adept as, or better than, most human decision makers, at least when it comes to carts.
      • by iAmWaySmarterThanYou ( 10095012 ) on Saturday May 25, 2024 @03:38AM (#64497793)

        You vastly underestimate the power of the human brain. They are not simulating anything like a part of the human brain. We don't even know how the brain really works. We have a vague idea, a lot of theories, and many experiments that have taught us high-level concepts and what certain areas are generally used for, sort of, but not always.

        The ability of any of these AI to think is zero. A fruit fly is smarter.

    • If liability is enforced, then liability insurance carriers would have every reason to require testing and standards.

    • There's actually a fairly easy technological response to this: Many emergency vehicles already have what's effectively a remote to change traffic lights so that their path should be clear when they reach it. It's normally infrared. So you could easily have a system where they're blasting a signal down the road as they travel, and the carts all have sensors that ensure they pick that up. If they pick it up, there's a simple directive: Don't enter the road and/or get the hell out of it.

      On the other hand,
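A minimal sketch of the "clear the road" directive described in the comment above, in Python. Every class and method name here is a hypothetical stand-in (no real cart SDK or sensor API is being quoted); it only illustrates the rule: if the preemption signal is detected, don't enter the road, and leave it if already crossing.

```python
# Hypothetical sketch only: no real cart SDK or sensor API is referenced.

class PreemptionReceiver:
    """Stand-in for an IR sensor tuned to emergency-vehicle preemption emitters."""

    def signal_detected(self) -> bool:
        # Real hardware would be polled here; stubbed out for illustration.
        return False


class DeliveryCart:
    def __init__(self, receiver: PreemptionReceiver):
        self.receiver = receiver
        self.in_roadway = False  # whether the cart is currently crossing a street

    def tick(self) -> str:
        # The whole directive: if the emergency signal is heard, never enter
        # the road, and get out of it immediately if already crossing.
        if self.receiver.signal_detected():
            return "clear roadway" if self.in_roadway else "hold at curb"
        return "proceed normally"


if __name__ == "__main__":
    cart = DeliveryCart(PreemptionReceiver())
    print(cart.tick())  # "proceed normally" with the stubbed receiver
```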

    • by cstacy ( 534252 )

      If one of those moronic little carts had been crossing the street and it got hit a disaster could easily occur. The cart itself could go flying into someone and possibly kill them.

      Well, everyone said they wanted flying cars...

  • Strange (Score:3, Insightful)

    by backslashdot ( 95548 ) on Saturday May 25, 2024 @01:33AM (#64497703)

    40,000 humans murdered by human-driven vehicles just in the USA (1 million worldwide), yet there is zero investigation or "concern". Why is your empathy or sense of "we ought to fix that" gone in those cases? How is it logical? Somebody fucking died and you motherfuckers don't give a shit.

    • Re: (Score:3, Insightful)

      There is great concern about automobile death and every single one is investigated. Why do you think they're not? A great number of those investigated result in criminal charges.

      When the CEO of an autonomous car company carries the same legal liability as a human driver and can go to jail, let us know. Until then, keep your fucking beta level shit robot cars off the streets. I did not volunteer to be a guinea pig so some sociopathic assholes could get richer at my risk and expense.

      • "When the CEO of an autonomous car company carries the same legal liability as a human driver and can go to jail" .. uh .. they do already .. like today bro. You can't deliberately do something you know will cause an accident. Like all humans including human drivers, they're only liable if they do something a reasonable person could have foreseen would cause an accident. In the case of FSD simulations, tests, and experts have shown it is safer than human. If you made the standard higher than that, you'd hav

      • By the way, do you live in a metro area? If so, can you tell me the circumstances of any fatal accident that happened within the last year? Because so far FSD has led to exactly one fatality (there were some autopilot deaths, but those were proven to be driver negligence). In fact FSD has led to at least one life being saved: https://www.youtube.com/watch?... [youtube.com]

        • https://www.nbcnews.com/tech/t... [nbcnews.com]

          Is it ok because she wasn't killed?

          There are numerous autopilot crashes with fatalities and bad injuries.

          There are numerous other fuckups like the cars piling up unable to figure out how to get out of a residential side street in SF like pigeons stuck in a magnetic anomaly.

          There was the guy on autopilot on the 101 that was driven into a cement barrier at speed.

          There are any number of robot cars that have slammed into emergency vehicles or ignored police tape or driven over downed el

          • Ok, letting you know now: if someone at a car company does something, negligently or deliberately, that they know (and any reasonable person could predict) would cause an accident, they would go to jail. If an unforeseeable situation caused an accident, then they won't go to jail. You realize that cars have mechanical failures all the time too, right? Failures for which nobody goes to jail. Here's one reference of many: https://www.lieffcabraser.com/... [lieffcabraser.com]

            • It is foreseeable that a beta Level 2 autonomous vehicle is going to hit things and kill people.

              I want to see stats on how many times the robot gave up and handed control back to a human to deal with it because it couldn't.
              Every single one of those situations where the robot required a human to take over was an accident that would have happened if a person hadn't been there.

    • Do you live in a metro area? If so, can you tell me the circumstances of any fatal accident that happened within the last year? Because so far FSD has led to exactly one fatality (there were some autopilot deaths, but those were proven to be driver negligence). In fact FSD has led to at least one life being saved https://www.youtube.com/watch?... [youtube.com]

    • by mjwx ( 966435 )

      40,000 humans murdered by human-driven vehicles just in the USA (1 million worldwide), yet there is zero investigation or "concern". Why is your empathy or sense of "we ought to fix that" gone in those cases? How is it logical? Somebody fucking died and you motherfuckers don't give a shit.

      I think that just shows that the US has terrible drivers.

      The UK has, for the last 10 years, had fewer than 1,800 road fatalities per year, and that number has been falling.

      Even scaling that up to the population of the United States, it's still less than 10,000. You'll find it's a similar story in other developed nations where the road fatality rate is between 2 and 6 per 100,000 pop. The US is more akin to a developing nation like Mexico.

      Right now, driverless cars are stopping mid-intersection in the
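A quick back-of-the-envelope check of the scaling claim in the comment above. The population figures are rough assumptions (circa-2023 estimates), not numbers from the post:

```python
# Sanity check of "scaling UK fatalities up to the US population stays under 10,000".
# Population figures are approximate assumptions, not sourced data.

uk_fatalities_per_year = 1_800    # "fewer than 1,800" per the comment above
uk_population = 67_000_000        # ~67 million
us_population = 333_000_000       # ~333 million

scaled = uk_fatalities_per_year * us_population / uk_population
print(f"UK rate scaled to US population: {scaled:,.0f}")   # ~8,900, under 10,000

# Per-capita rates, for the "2 to 6 per 100,000" comparison:
print(f"UK: {uk_fatalities_per_year / uk_population * 100_000:.1f} per 100k")
print(f"US: {40_000 / us_population * 100_000:.1f} per 100k")  # ~12 per 100k
```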

  • Why is this even being reported? Self-driving cars will be perfected within hours whilst every human driver is responsible for hundreds of deaths EVERY SINGLE JOURNEY.

  • I'd like to see a comparison rather than random figures. 22 reports out of how much activity? How many miles? What is the comparison to human safety statistics?
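For what it's worth, the comparison being asked for would look something like this. The robotaxi mileage and the human baseline below are placeholder assumptions for illustration only, not reported figures:

```python
# Illustrative only: the mileage and human baseline are placeholder assumptions,
# not figures reported by Waymo or NHTSA.

incidents = 22 + 9                   # reports cited in the NHTSA investigation
assumed_robotaxi_miles = 20_000_000  # placeholder; substitute the real fleet mileage

rate = incidents / (assumed_robotaxi_miles / 1_000_000)
print(f"Robotaxi incidents per million miles: {rate:.2f}")

# Human drivers are often benchmarked in police-reported crashes per million
# vehicle miles traveled; ~2 is a commonly cited ballpark (assumption here).
human_baseline = 2.0
print(f"Assumed human baseline: {human_baseline:.2f} crashes per million miles")
```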
