
Cruise Disputes Report Its Robotaxi Blocked an Ambulance Carrying Patient Who Later Died (sfchronicle.com) 75

"Two stalled driverless taxis blocked an ambulance carrying a critically injured patient," writes the San Francisco Chronicle, citing a paywalled report from Forbes. The delay "contributed to 'poor patient outcome' — the person died 20 to 30 minutes after reaching the hospital, according to a report by San Francisco firefighters that the taxi company disputes."

The report was obtained by Forbes, which recently published a story detailing accounts by San Francisco firefighters who say driverless taxis have repeatedly interfered with their emergency response. However, Forbes also reported that Cruise provided a video that disputed SFFD's account of the August 14 incident. The video, Forbes reported, shows that one Cruise car quickly left the scene while the other remained stalled at the intersection with an open lane to its right, which traffic was passing through. Forbes said it was not clear from the video if the ambulance could have navigated into the open lane.

Hannah Lindow, a Cruise spokesperson, told the Chronicle that the Cruise vehicle that stopped did so to yield to first responders directing traffic. "Throughout the entire duration the (autonomous vehicle) is stopped, traffic remains unblocked and flowing to the right of the AV. The ambulance behind the AV had a clear path to pass the AV as other vehicles, including another ambulance, proceeded to do," Lindow said in an email. "As soon as the victim was loaded into the ambulance, the ambulance left the scene immediately and was never impeded from doing so by the AV."

  • Media generates controversy - but of course, that's what they do.
    Company denies everything - but of course, that's what they do.

    • There is a video?

      • by phantomfive ( 622387 ) on Sunday September 03, 2023 @02:00PM (#63819834) Journal
We don't have the video, but according to Forbes, "it was not clear from the video if the ambulance could have navigated into the open lane," which suggests that the ambulance didn't pass. Presumably it would have passed if it could have.

        Regardless, these cars are not ready to be on the streets without a safety driver. Wait until they are more fully tested. There's no reason to remove the safety driver except for cost/propaganda purposes.
        • by 93 Escort Wagon ( 326346 ) on Sunday September 03, 2023 @02:57PM (#63820000)

          There's no reason to remove the safety driver except for cost/propaganda purposes.

          You are glossing over the very, very important reason these companies want to get rid of the safety driver ASAP.

          Like many people, you are erroneously assuming the main interest of these companies' CEOs is solving a problem - autonomous vehicle control, in this case. But, with the vast majority of these new tech companies, the CEO's primary reason for creating the company was simply to create a saleable company. They don't particularly care if the tech pans out in the long run... they just need some richer company to think it can so they'll throw millions of dollars at the CEO. In this instance, keeping safety drivers in the vehicles doesn't just cost money... more importantly, it affects the perception of the tech's maturity, which makes the company less attractive to any suitors who might throw money the CEO's way.

          You have to realize the CEO isn't getting any younger - he wants those millions NOW, not one or two decades from now! He wants that aspirational lifestyle he sees other young tech CEOs achieving!

          • But, with the vast majority of these new tech companies, the CEO's primary reason for creating the company was simply to create a saleable company. They don't particularly care if the tech pans out in the long run... they just need some richer company to think it can so they'll throw millions of dollars at the CEO.

Even if it isn't perfect, they obviously have a proven concept here and fundamentally working technology. The company is already ripe for a buyout if that's all the CEO wants, so I tend to think he would have sold by now. If anything, it seems more like he wants to mature it to the point of, at the very least, an IPO, if not keep it private in the long term.

          • There's no reason to remove the safety driver except for cost/propaganda purposes.

            ...the CEO's primary reason for creating the company was simply to create a saleable company. They don't particularly care if the tech pans out in the long run... they just need some richer company to think it can so they'll throw millions of dollars at the CEO.

            Your argument is somewhat undercut by the fact that Cruise is a wholly owned subsidiary of General Motors, so that buyout's already happened. It's still possible the CEO's bonuses are tied to performance targets relating to driverless success, so it could still be true to a lesser degree.

        • Re: (Score:3, Insightful)

          by Tony Isaac ( 1301187 )

          The ambulance behind the AV had a clear path to pass the AV as other vehicles, including another ambulance, proceeded to do,

          This seems pretty clear to me. If another ambulance could go around, then there's presumably no reason the second one couldn't go around.

          And...stalled vehicles are not unique to self-driving cars. Regular vehicles also get stuck and sometimes block traffic.

          • Re: (Score:1, Troll)

            by phantomfive ( 622387 )

            If another ambulance could go around, then there's presumably no reason the second one couldn't go around.

            Really? You can't think of any reason?

            • by ls671 ( 1122017 )

Any emergency vehicle should be powerful enough to simply push those robotaxis out of the way! I have seen it done a few times, where even police cars would push stalled vehicles out of the way, so an ambulance or a fire truck should have no problem doing it.

            • Re: Who to believe? (Score:4, Informative)

              by Tony Isaac ( 1301187 ) on Sunday September 03, 2023 @04:34PM (#63820228) Homepage

Nope! Ambulances tend to be of similar size, and presumably each one has a working motor and steering wheel. If one ambulance can go around, the next one should be able to follow, unless the driver is incompetent. And that wouldn't be the robotaxi's fault.

            • Re: (Score:1, Funny)

              by Anonymous Coward

              If another ambulance could go around, then there's presumably no reason the second one couldn't go around.

              Really? You can't think of any reason?

              I mean, the driver could have been a fucking idiot, sure.

It surprises me that there isn't yet some organisation working on making sure that emergency services can hijack the cars. Give the emergency services some way of authenticating themselves with the companies, and generating a QR code that they show to the car to tell it "stop what you're doing and let this first responder move you out of the way".
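A minimal sketch of the authentication half of that idea, using an HMAC-signed, expiring payload as the QR content (Python; the key handling, field names, and "pull_over_and_neutral" action are entirely hypothetical, not any real fleet API):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret provisioned to both the AV fleet and the
# city's dispatch system. A real design would use public-key signatures
# so the cars never hold a secret that could mint valid codes.
DISPATCH_KEY = b"example-secret-issued-by-the-city"

def make_override_qr(unit_id: str, valid_seconds: int = 120) -> str:
    """Dispatch side: build the text a responder would display as a QR code."""
    payload = json.dumps({
        "unit": unit_id,                    # which emergency unit is asking
        "action": "pull_over_and_neutral",  # requested behaviour
        "expires": int(time.time()) + valid_seconds,
    }).encode()
    tag = hmac.new(DISPATCH_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + tag

def verify_override_qr(qr_text: str):
    """Vehicle side: obey only if the tag checks out and the code is fresh."""
    encoded, tag = qr_text.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(DISPATCH_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None                         # forged or corrupted code
    fields = json.loads(payload)
    if fields["expires"] < time.time():
        return None                         # expired code, refuses replays
    return fields

print(verify_override_qr(make_override_qr("SFFD-Engine-14")))
```

The expiry field matters: a static code could be photographed once and replayed by anyone who wants to stall a robotaxi for fun.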

        • Emergency vehicles need cowcatchers, to push away cars that do not let them pass without incurring a lot of damage to the emergency vehicle.

          • Emergency vehicles need cowcatchers, to push away cars that do not let them pass without incurring a lot of damage to the emergency vehicle.

You could reasonably do that on a fire truck, because they are based on a big, sturdy, full-frame heavy truck. But an ambulance is now typically a Ford Transit or Mercedes Sprinter, and they are unibody with an engine subframe. Even if you have one of the diminishing number of ambulances still based on an F-350 or similar, which are out of favor because of the massively higher fuel consumption and the pathetic debacle of the Navistar 6.0 and 6.4 liter engines that scared most operators away, you might hesitate to

You wouldn't ram the stationary car at speed, but instead push it slowly. The torque and traction of an ambulance or a fire truck are probably greater than the braking power and traction of the stuck autonomous car, so it would slide out of the way.
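A back-of-the-envelope check of that claim, with guessed masses and a typical rubber-on-asphalt friction coefficient (none of these numbers are specs for any actual vehicle):

```python
MU = 0.7            # assumed tyre-on-asphalt friction coefficient
G = 9.81            # gravity, m/s^2
CAR_MASS = 2000     # kg, rough guess for a small electric robotaxi
TRUCK_MASS = 15000  # kg, rough guess for a loaded fire engine

# Worst case: the stuck car's brakes are locked, so moving it means
# dragging all four tyres. Friction caps the resisting force.
resisting_force = MU * CAR_MASS * G      # ~13.7 kN

# The pusher is limited by its own traction (simplified here as all of
# its weight on driven wheels), not by engine torque at walking speed.
available_push = MU * TRUCK_MASS * G     # ~103 kN

print(f"resisting force ~ {resisting_force / 1000:.1f} kN")
print(f"available push  ~ {available_push / 1000:.1f} kN")
print("can push it aside:", available_push > resisting_force)
```

With roughly a 7:1 mass advantage, the physics favors the fire truck; whether the unibody ambulance survives the contact is the real question, per the comment above.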

    • Re:Who to believe? (Score:5, Insightful)

      by hey! ( 33014 ) on Sunday September 03, 2023 @02:04PM (#63819850) Homepage Journal

Well, in this case it's the San Francisco Fire Department that claims the vehicles delayed the ambulance. Their version of the story is that a police car at the scene had to be moved because the autonomous robotaxi was blocking traffic.

Now technically you *could* say that the police car was blocking traffic too; essentially the company is implying the police car was just as responsible for the delay. But the police car was there *responding* to the incident. You often get a cop car, a fire truck, and an ambulance in response to 9-1-1 because it's better to send everything and sort out what you need at the site than to try to guess what you *might* need.

      We've all seen that, and we've all done what we're supposed to do: move carefully around the emergency response vehicles so traffic can move and the first responders can do their thing. The robotaxi did not, so the first responders had to shuffle their vehicles so the ambulance could leave.

Negotiating tactic: Put companies running autonomous vehicles on notice that if they block an emergency vehicle, they may find said autonomous vehicle pushed out of the way and, once a tow truck arrives, towed, with the owner being responsible for all costs, including any damage to the pushing vehicle or third parties.

    Just the threat of this should bring the autonomous car companies to the negotiating table.

    As for a civilized, negotiated solution, the companies running these cars should give first responder a

  • by Indy1 ( 99447 ) on Sunday September 03, 2023 @02:00PM (#63819836)

While not defending Cruise, the report states that the patient had a GCS score of 3 (the lowest you can get), agonal respirations, no peripheral pulse, and massive bleeding from the leg (police had to tourniquet it).

    The GCS score indicates this person likely had massive brain injuries, and the other information is indicative of massive hemorrhage and hypotensive shock.

This patient had virtually no chance of survival given the injuries. That said, I'm sure it's only a matter of time before one of these automated vehicles DOES directly get someone killed.

    • We're all going to die regardless. The question is, do you take that fact and elect to be laissez faire with major injuries or do you prefer to take reasonable steps to try to defer the inevitable if given the choice?

    • by quonset ( 4839537 ) on Sunday September 03, 2023 @02:40PM (#63819958)

While not defending Cruise, the report states that the patient had a GCS score of 3 (the lowest you can get), agonal respirations, no peripheral pulse, and massive bleeding from the leg (police had to tourniquet it).

      For those who are curious, here is an explanation of what GCS [clevelandclinic.org] (and now GCS-P) measures and the scoring involved. And, in reference to what the OP said about a score of 3:

      The highest possible GCS score is 15, and the lowest is 3. A score of 15 means you’re fully awake, responsive and have no problems with thinking ability or memory. Generally, having a score of 8 or fewer means you’re in a coma. The lower the score, the deeper the coma is.
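As a worked illustration of the scoring just quoted, here is a toy sketch that sums the three GCS components and buckets the total by those ranges (purely illustrative, not a clinical tool):

```python
def classify_gcs(eye: int, verbal: int, motor: int) -> str:
    """Sum the three GCS components (eye 1-4, verbal 1-5, motor 1-6)
    and bucket the total using the ranges quoted above."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    total = eye + verbal + motor        # 3 (worst) .. 15 (best)
    if total <= 8:
        return f"GCS {total}: severe -- generally considered a coma"
    if total <= 12:
        return f"GCS {total}: moderate impairment"
    return f"GCS {total}: mild, or fully awake and responsive"

print(classify_gcs(1, 1, 1))  # GCS 3, the score in the fire dept. report
print(classify_gcs(4, 5, 6))  # GCS 15, fully awake and responsive
```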

While not defending Cruise, the report states that the patient had a GCS score of 3 (the lowest you can get), agonal respirations, no peripheral pulse, and massive bleeding from the leg (police had to tourniquet it).

      The GCS score indicates this person likely had massive brain injuries, and the other information is indicative of massive hemorrhage and hypotensive shock.

This patient had virtually no chance of survival given the injuries. That said, I'm sure it's only a matter of time before one of these automated vehicles DOES directly get someone killed.

      Agreed. My impression is that these vehicles have been impeding emergency vehicles for a while and first responders are very fed up, so now that they have an actual fatality (even an inevitable one) tied to one they're not going to let it slide.

That said, I'm sure it's only a matter of time before one of these automated vehicles DOES directly get someone killed.

      Worth noting that the injured person was ultimately killed by ... a non-autonomous vehicle.

    • by AmiMoJo ( 196126 )

That said, I'm sure it's only a matter of time before one of these automated vehicles DOES directly get someone killed.

      Which would be tolerable if they accepted even partial responsibility and paid out. The worry is that they all adopt Tesla's attitude of it always being someone else's fault.

      Private individuals either have insurance or usually can't afford an elaborate defence in case of getting sued. These big companies are lawyered up before the crash even happens.

  • Just set the "intersection blocked" bit to false and it's all good.

One wonders what would have happened if it had been the aerospace companies that got into driverless cars instead of the software companies. Maybe there would have been more discipline from the crowd that learned early on how to write flight software before Google was a glimmer in Larry and Sergey's eye, or maybe it would have been the same result given that Boeing and Airbus seem to be getting sloppy and inheriting the worst behaviors out o

    • One wonders what would have happened if it had been the aerospace companies that got into driverless cars instead of the software companies.

      They'd build the car in a Lotus body, then at production port the software into a Miata body, then tell all the passengers to "RTFM" if anything goes wrong. Nah, Boeing would never do something like that.

    • Move Fast and Break Things is fine, it just has to be done correctly. SpaceX employs it well -- run several test flights, find what works and what doesn't, exploding rockets is merely cracking eggs to make omelets. Don't launch a rocket design that hasn't had at least X successful test flights with a payload that you care Y much about. Take Falcon Heavy's first launch -- what was the payload?

      • That's not move fast and break things, that's destructive testing. There's a difference.

Let me make an analogy: destructive testing is manning your submarine with instrumented test dummies and taking the submarine to crush depth to see what happens; move fast and break things is manning your submarine with paying customers and taking it to crush depth to see what happens.

Which of those is closer in spirit to the way self-driving cars are being foisted on San Francisco?

        • Why not compare apples to apples instead?

SpaceX is advancing at a much faster rate than Boeing, and blowing up more rockets in the process. Isn't this moving fast and breaking things?

          • The distinction I make is whether you learn something useful and necessary from a failure before or after such a failure would call your judgement and your competence into question.

            Boeing discovered a software fault (a pretty amateurish one if I understood correctly) in-flight. For a paying customer.

SpaceX blew up most (not all, but most) of their rockets after their paying customers' payloads were on their way to orbit. And learned something useful from each one.

            Waymo and Cruise are apparently discovering *just no

This is because the people appointed to run these companies (and possibly the people appointing them, too) are business people only, and not first or also fans, followers, practitioners, or lovers of the thing the company does for money.
So all things are grotesquely molded by the environmental factor that it must make money, more so year over year. This is a long way from, "I like communications. I'm going to make phones, because I like helping people communicate." It's more like, "How can this phone-like company make
  • by quantaman ( 517394 ) on Sunday September 03, 2023 @02:17PM (#63819894)

    The linked accident report includes the following line:

    This delay, no matter how minimal, contributed to a poor pt. outcome

    It's hard to tell exactly what happened but from the sounds of it the two Cruise vehicles were blocking the two lanes on the right while a police car was parked in the left lane blocking the ambulance from leaving.

    They couldn't move the Cruise taxis so the police car had to move instead. It also sounds like one of the Cruise vehicles wasn't stopped long, but it's unclear if it left before or after the ambulance.

Either way, the delay wasn't very long, but it was a delay and a distraction, and the emergency personnel, who were already pissed off at the Cruise taxis, aren't feeling very sympathetic.

    This does strike me as a big design flaw in the Cruise Taxis. Given their default behaviour for a lot of scenarios is "stop where you are" they really need better fail-safes when emergency vehicles are around.

Some combination of giving emergency personnel some kind of special override to force the vehicle into neutral (so they can just push it out of the way) and a remote Cruise operator who is pulled in the moment sirens / flashing lights are too close and has the ability to do a manual override.
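As a sketch of what such a fail-safe could look like, here is a toy state machine; every name in it (siren_nearby, the override states, the operator handoff) is hypothetical and has nothing to do with Cruise's actual software:

```python
from enum import Enum, auto

class AVState(Enum):
    DRIVING = auto()
    YIELDING = auto()          # pulled over, waiting out the emergency
    REMOTE_CONTROL = auto()    # human teleoperator has taken over
    NEUTRAL_OVERRIDE = auto()  # brakes released so responders can push it

def next_state(siren_nearby: bool, responder_override: bool,
               operator_connected: bool) -> AVState:
    """Hypothetical fail-safe policy: prefer a human in the loop, and
    never sit immovable in the road while an emergency vehicle needs it."""
    if responder_override:
        # An authenticated override from a responder always wins.
        return AVState.NEUTRAL_OVERRIDE
    if siren_nearby and operator_connected:
        # Hand the wheel to the remote operator the moment one is available.
        return AVState.REMOTE_CONTROL
    if siren_nearby:
        # No operator yet: get out of the lane and page one; don't freeze.
        return AVState.YIELDING
    return AVState.DRIVING

print(next_state(siren_nearby=True, responder_override=False,
                 operator_connected=False))   # AVState.YIELDING
```

The key design point is that "stop where you are" is never the terminal state once an emergency vehicle is detected.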

But videos dispute what happened, and the good thing is, these situations make these driverless cars better with every new update, handling situations like this better than human drivers. People seem to forget that human drivers hold up first responders way more, and create much more dangerous situations, than these driverless cars do.
But videos dispute what happened

        Cruise claimed the video disputed what happened, but I didn't see the video linked anywhere, did you?

        Cruise is hardly a neutral 3rd party.

and the good thing is, these situations make these driverless cars better with every new update, handling situations like this better than human drivers.

        In theory yes, but they're not there yet and we don't know how long it is until they do get there. 5 years? 10? 50?

People seem to forget that human drivers hold up first responders way more, and create much more dangerous situations, than these driverless cars do.

In total? Of course. But on a per-vehicle basis? A handful of Cruise taxis are causing regular incidents for San Francisco emergency responders. That is definitely not a technology ready to deploy at scale.

        • Sorry, but it's not clear in theory if increasing the training set will make the car AIs better drivers. Look up the concepts "overfitting" and "model drift".

          TLDR: when you give an AI too much data, it becomes overconfident on the data it has seen and makes more mistakes. There's a point where you need to stop training and live with the errors, otherwise the number of errors grows and grows and grows. The number of errors can never go to zero in a real problem like driving.
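For what it's worth, textbook overfitting is driven by model capacity relative to the available data rather than by data volume alone (as the reply below notes); a toy numpy sketch showing training error falling while held-out error climbs:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy ground truth

idx = rng.permutation(x.size)      # random train / held-out split
tr, te = idx[:30], idx[30:]

for degree in (1, 3, 9):
    coeffs = np.polyfit(x[tr], y[tr], degree)
    train_err = np.mean((np.polyval(coeffs, x[tr]) - y[tr]) ** 2)
    test_err = np.mean((np.polyval(coeffs, x[te]) - y[te]) ** 2)
    # As capacity grows, training error keeps shrinking while held-out
    # error eventually climbs again: the fit is memorising the noise.
    print(f"degree {degree}: train {train_err:.3f}  held-out {test_err:.3f}")
```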

          • Sorry, but it's not clear in theory if increasing the training set will make the car AIs better drivers. Look up the concepts "overfitting" and "model drift".

            TLDR: when you give an AI too much data, it becomes overconfident on the data it has seen and makes more mistakes. There's a point where you need to stop training and live with the errors, otherwise the number of errors grows and grows and grows. The number of errors can never go to zero in a real problem like driving.

Overfitting is more to do with the train/test balance; getting more data is actually a way to fix it.

            Model drift is harder to fight, though I don't see why driving behaviours would markedly change over the course of the driving set (except that humans would start reacting differently to the AVs).

I don't see any reason why the tech (it's not just models) wouldn't continuously improve; my concern is that the rate of improvement will decrease and start to plateau at a level well below that required for safe dr

In the real world ML doesn't work the way it works in lectures. Training data is not unbounded, certainly not for real systems involving feedback like a city road network. So continuous improvement is impossible already just for that reason, and if you stand still with a fixed dataset there will be continuous degradation.

Imagine you could get 100 years worth of driving data in SF, would you want it? No, because the cars that were on the road in 1920 and the behaviour of pedestrians and the horses behaved very differently than today. If you train an AI on that, you'll actually increase your driving error rate relative to 2023. So you see, the usable data is generally time limited and decays away.

              • In the real world ML doesn't work the way it works in lectures.
Training data is not unbounded, certainly not for real systems involving feedback like a city road network. So continuous improvement is impossible already just for that reason, and if you stand still with a fixed dataset there will be continuous degradation.

                Imagine you could get 100 years worth of driving data in SF, would you want it? No, because the cars that were on the road in 1920 and the behaviour of pedestrians and the horses behaved very differently than today. If you train an AI on that, you'll actually increase your driving error rate relative to 2023. So you see, the usable data is generally time limited and decays away.

                True, but unless they stop collecting data or reduce the size of the fleet the amount of current data will increase over time.

You never have more than about 10-15 years of useful data, and that's just basic automotive technology and legal evolution (e.g. when did SUVs start filling the roads?)

                I'm not sure I agree with that.

                Certainly you need to figure out how to kill data if road rules have changed. But do you really think a driver who time travelled from NYC 50 years ago would have that much trouble driving a big city today?

                Also consider that the other agents (not just humans) in the system react to your car, then that shrinks the usable data even more. Because once your AVs are deployed and become part of the dataset, they pollute the training data for themselves and for the other AVs too. Now you have to make assumptions, and posit counterfactuals, just to simulate real world situations that don't really exist. Naturally, this will tend to increase the driving error rate in the real world.

                I'm not sure about other drivers (they're likely to just treat them as other cars), but how pedestrians or even cyclists react to the AVs is a defini

Some combination of giving emergency personnel some kind of special override to force the vehicle into neutral (so they can just push it out of the way) and a remote Cruise operator who is pulled in the moment sirens / flashing lights are too close and has the ability to do a manual override.

There should be a pin pad on the vehicle, and a code that lets the cops open the door and drive the vehicle in an emergency. Put a camera by the pin pad and take some pictures when the pin pad is used, to provide a trail of evidence. And there should be a hefty fee charged to the owner of the vehicle for the privilege of having a cop operate the vehicle for them. This is a TRIAL; it is EXPECTED that there will be failures, and not having a way to mitigate them is wholly unacceptable. I am disappointed in SF

Emergency vehicles are built on one-ton and heavier truck chassis. Police vehicles usually have push bars up front, and many are SUV-based.

      *IF* actually getting blocked, the reasonable response would seem to be to push the offending little vehicle out of the way.

There were a number of news stories citing emergency workers saying the cars blocked the ambulance's passage and butted up against a cop car, further blocking things.
The infamously corrupt Willie Brown pressured the CPUC to approve the car companies running in SF. Typical SF corruption.
So now we have a case where a robotaxi led to someone dying. How about putting cop overrides on all robotaxis? No f'ing blue screen of death allowed.

  • ... robot lawsuit money, baybee!
  • by SuperDre ( 982372 ) on Sunday September 03, 2023 @07:07PM (#63820544) Homepage
And how many human drivers block first responders every day? Yeah, you can bitch about the driverless cars, but if you really start counting, human drivers impede first responders way more than the driverless cars do.
Human drivers don't have the luxury of analyzing any flaw they may have beforehand and correcting it, the way AV companies can.
But that's what makes these driverless cars so much better: at least they learn from mistakes, and not as a single person but as ALL the cars in the fleet.
We're still in the early days of driverless cars and they are already, in a lot of circumstances, much safer than any human driver.

Humans learn from mistakes too, and at least they don't keep making the same ones; in fact the vast majority are better drivers than the self-driving cars. Nor does that absolve the company, which has every opportunity to predict where problems might happen and solve them before they are a problem. If we could scan human brains for any mistake they might make, we should and would. These companies absolutely can do that, so why don't they?
Also let me add... humans learn from close calls and from instruction. Machines need to make the mistake before they seem to realize they are screwing up.
It seems like you haven't met many humans, as a lot of people don't learn from close calls or instruction; hell, machines actually learn much better from instructions than humans do. But I do agree these companies should research daily accidents and problems on the road (all sources, not only what they see with their vehicles) so they can create a better system. Hell, IMHO there should be a universal database of situations that all these systems learn from and add to.
            • They aren't even tapping into a fraction of "situations" humans deal with. They aren't even trying to deal with bad weather yet (blowing rain/snow and ice).
Irrelevant. Humans belong on the roads. These for-profit corporations have yet to demonstrate the value of using unpaid, involuntary, real humans as beta testers for their beta-level robots.

    • Re: (Score:2, Insightful)

      by thegarbz ( 1787294 )

Better still, the person who died was ... involved in a car accident with human drivers. Imagine if every car was a Cruise; the ambulance wouldn't be picking up an injured party in the first place.

    • by mjwx ( 966435 )

And how many human drivers block first responders every day? Yeah, you can bitch about the driverless cars, but if you really start counting, human drivers impede first responders way more than the driverless cars do.

One of the brilliant things about the UK is that when an emergency vehicle puts on its lights, everyone knows to pull off to the side of the road and let it through. A far cry from what I've seen in Australia and the US.

However, the difference between the two is: 1. If you, as a meat-based driver, block an emergency vehicle you can be charged, have your license revoked, and even be imprisoned, while the company in question is trying to argue they are not responsible; and 2. A police officer can commandeer your v

You really have no idea what you are talking about. Maybe you should read up on these systems and their detection of first-responder vehicles. They can detect them, and they can avoid them. And I often see 'meat' drivers having problems with emergency vehicles.
  • It exists to sell expensive vehicles. There is zero altruism involved.

We don't have autonomous vehicles in my country (for various reasons it would not always be a smart idea at the moment), so I don't know a lot about them. It seems that at one stage such vehicles had a "safety driver", but these have now been pulled from such vehicles. I also read that the involved vehicles were operating as taxis, i.e. belonging to a firm and not some private individual.

We also have a lot of warfare conducted these days via remotely piloted drones. Why can't these robotaxis in an emergency be switched over to manual remote control?

Why can't these robotaxis in an emergency be switched over to manual remote control?

      At least some of them can, and maybe all of them. But recently there was a failure of some of these "autonomous" vehicles in SF because of a cellular outage...
