Transportation

Cruise Recalls Robotaxis After Passenger Injured In Crash (cnn.com)

sdinfoserv writes: Yet another setback for the automated driverless vehicle grail: Cruise has recalled its robotaxis after a passenger was injured in a crash. The robotaxi was turning left at an intersection, assumed an oncoming vehicle would turn in front of it, and stopped, resulting in the oncoming vehicle striking the robotaxi. A Cruise spokeswoman declined to say what the robotaxi could have done differently, and declined to release video of the crash. Nevertheless, Cruise said in a statement that it made the recall "in the interest of transparency to the public."

The company also said it has issued a software update to improve its vehicles' ability to predict what other vehicles will do, including in conditions similar to the crash.
  • I'm pretty sure there are a lot of "I told ya so" comments coming.
  • Big deal (Score:5, Insightful)

    by backslashdot ( 95548 ) on Thursday September 01, 2022 @08:13PM (#62845153)

    It's worth it. We wouldn't have trains, airplanes, boats, or automobiles, and probably not even horses, if the wussies who get scared after one accident had held sway before modern times. Trains killed a lot of people before they got safe. So did cars and airplanes. Heck, even horses and buggies caused lots of deaths. Before you say BS like "it's not you volunteering to be the victim of a self-driving accident" .. well, I don't see YOU volunteering to be one of the 40,000 people in the USA (1 million worldwide) killed every year by human driving error. We need transportation to be fully automated; that is the only way to get to 39,999 deaths per year .. and yeah, even that would be worth it. So yeah, either we have computers doing full self-driving or we stay subject to human fool-self driving.

    • It just seems like it would be so very easy to build a single intersection in a field somewhere and have people drive cars around it in every unexpected way possible until it never gets in an accident. With capital investment these days, these companies have so much money and so many resources to do such a thing rather than making people their guinea pigs. Also, getting self-driving to be so good that it never gets in an accident even in stormy, icy conditions, where 2/3 of accidents actually occur, seems so very far off.
      • It is impossible to have a car never get into an accident if you surround it with cars driving erratically. Even if you simplify it to an intersection, say there are two lanes turning left and you are in the right one of those: are you saying the car should never enter the intersection if there is another car traveling in the left one, because that car might always decide to go straight and plow into you? Also, the car could never enter an intersection if there is stopped cross traffic, because that
        • It is impossible to have a car never get into an accident if you surround it with cars driving erratically.

          So then automated driving is impossible. Especially when you add wind, ice, snow, and the machines that clear it.

      • A single intersection is way too low; you need more than one type, and you also need to test on odd intersections.

      • A zero-accident policy is irrational. The standard should simply be that it is statistically beyond reproach that the computer is safer than a human driver.
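
        As a minimal sketch of what "statistically beyond reproach" could mean (all numbers are hypothetical placeholders, and this assumes scipy is available): treat crashes as a Poisson process and ask how likely the fleet's observed crash count would be if the cars were only as safe as a human driver.

          from scipy.stats import poisson

          # Hypothetical placeholder numbers, not real fleet data.
          human_rate_per_mile = 1.25e-8  # ~1.25 fatal crashes per 100M miles
          av_miles = 5e8                 # assumed AV fleet exposure, in miles
          av_crashes = 2                 # assumed observed fatal AV crashes

          # Events expected if the AV were merely as safe as the human baseline.
          expected_if_human = human_rate_per_mile * av_miles  # 6.25

          # One-sided p-value: chance of seeing this few crashes under that baseline.
          p_value = poisson.cdf(av_crashes, expected_if_human)
          print(f"p = {p_value:.4f}")  # a small p is evidence the AV really is safer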

      • They have test tracks. There's no way to simulate all the ways other road users behave. There are millions of people out there driving around, and none of them are the same.

      • They do a lot of that in simulation, in silico, because they can test millions of scenarios under various lighting conditions on a computer .. limited only by GPU time, I suppose.
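
        For illustration only, a toy version of that kind of scenario sweep; simulate() here is a stand-in stub, not any real company's simulator API, and the parameter lists are made up.

          import itertools

          lighting = ["day", "dusk", "night", "glare"]
          speeds_mps = [5, 10, 15, 20]
          behaviors = ["yields", "runs_light", "hesitates"]

          def simulate(light: str, speed: float, behavior: str) -> bool:
              """Stub standing in for a real physics/perception simulator:
              returns True if the planner avoids a collision."""
              return not (light == "glare" and behavior == "runs_light")

          failures = [c for c in itertools.product(lighting, speeds_mps, behaviors)
                      if not simulate(*c)]
          print(f"{len(failures)} failing scenarios out of "
                f"{len(lighting) * len(speeds_mps) * len(behaviors)}")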

      • I work in the field (not this company, but I have been at more than one in my career).

        companies in this field spend surprisingly LITTLE on, well, everything.

        you'd think there was a lot of money, and yet there really is not, mostly across the industry (bay area).

        sucks. really sucks. wish I could say more.

        • All the more reason why they should have a standard, like "predict anything a human might do on the road," before putting these on the road.
    • by mjwx ( 966435 )

      It's worth it. We wouldn't have trains, airplanes, boats, or automobiles, and probably not even horses, if the wussies who get scared after one accident had held sway before modern times. Trains killed a lot of people before they got safe. So did cars and airplanes. Heck, even horses and buggies caused lots of deaths. Before you say BS like "it's not you volunteering to be the victim of a self-driving accident" .. well, I don't see YOU volunteering to be one of the 40,000 people in the USA (1 million worldwide) killed every year by human driving error. We need transportation to be fully automated; that is the only way to get to 39,999 deaths per year .. and yeah, even that would be worth it. So yeah, either we have computers doing full self-driving or we stay subject to human fool-self driving.

      Exactly, we should get out there in those Chevrolet Corvairs, let the bodies pile up in the streets. Bad ideas always work out in the end.

      Build that asbestos housing, DDT is good for you, spray more of it whilst you enjoy the smooth taste of that cigarette. If we just keep doing it, it'll work right?

  • "Assumed" (Score:5, Insightful)

    by devslash0 ( 4203435 ) on Thursday September 01, 2022 @09:02PM (#62845231)

    If your algorithm assumes that all humans will obey every road rule, then you are doomed to fail from day 1. Roads in the real world are a wild west full of selfish drivers who routinely break any and every rule in the book, often in completely unexpected and unpredictable ways, and there is f-all driving algorithms can do about it other than letting everyone else pass first before making their own move last, at every junction.

    I'm also expecting to see a future where drivers start recognising robotaxis and other self-driving cars and begin to abuse weaknesses in their safety algorithms to gain an advantage on the road.
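
    To make the "let everyone pass first" idea concrete, here is a toy sketch of a worst-case yield rule for an unprotected left turn. The Oncoming type and all thresholds are made up for illustration; a real planner reasons over full predicted trajectories, not a single arrival time.

      from dataclasses import dataclass

      @dataclass
      class Oncoming:
          distance_m: float       # distance to the conflict point
          max_speed_mps: float    # assumed worst-case speed (e.g. limit + margin)

      TURN_CLEAR_TIME_S = 4.0     # time we need to clear the intersection
      SAFETY_MARGIN_S = 2.0       # extra buffer on top of that

      def may_commit_to_turn(oncoming: list[Oncoming]) -> bool:
          """Commit only if every oncoming vehicle, driven as aggressively as
          we can imagine, still arrives after we are clear. Never assume the
          other car will yield or turn."""
          for v in oncoming:
              worst_case_arrival_s = v.distance_m / max(v.max_speed_mps, 0.1)
              if worst_case_arrival_s < TURN_CLEAR_TIME_S + SAFETY_MARGIN_S:
                  return False    # someone could reach the conflict point too soon
          return True

      # A car 80 m away that could be doing 20 m/s arrives in ~4 s: don't go.
      print(may_commit_to_turn([Oncoming(distance_m=80, max_speed_mps=20)]))  # False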

    • Accidents are unavoidable as long as we permit human drivers on the road. At some point we will have to ban humans from driving vehicles on public roads (having fun on private ranches or tracks would be fine, though). The human brain was not designed to sustain predicting and controlling motion above 20 miles per hour. When you consider that chariots and animal transportation are relatively recent innovations, it's possible we haven't evolved to control an external conveyance system at any speed. You can blam

      • >The human brain was not designed to sustain predicting and controlling motion at above 20 miles per hour.

        Seriously? The human brain is awesome at that. It's how we hunt and also how we fight.
      • First off, the automotive industry (especially automated driving) should take more insight from other industries and change its terminology: there are no "accidents," there are only "crashes." Sounds harsh, but no collision occurs by accident; there is always a failure chain with at least one intentional decision.

        Second, "zero accidents" is an unrealistic goal. If you're aiming for that, and if society has that as the bar required to allow autonomous vehicles, we will never succeed. The bar has to be what all the safety standards

        • When someone designs a toaster, it is expected not to catch fire and injure someone. Why are we creating a different standard for automated vehicles? They can injure people because the problem is hard? That's ridiculous; if you can't solve the problem, then leave it for someone who can.
          • Humans are terrible at driving tho.

          • Some toasters do catch fire and harm people. It's not unheard of. By your ridiculous standard, we wouldn't be allowed toasters. We can keep making toasters safer, and iterating the design. We wouldn't have anything if the standard for allowing any product is that nobody can ever be harmed by it.

            • If they catch fire on their own, then they weren't made to North American safety specs. Someone shoved something into them they shouldn't have.
              • A toaster is a very simple device that faces a very limited subset of scenarios; vehicles have to face a variety of scenarios. The more complex something is, the more failure possibilities. I mean, look at all the vulnerabilities operating systems have. The goal of self-driving is to reduce the number of traffic fatalities, if you expect it to be guaranteed 100% safe you will be waiting an infinite amount of time .. meanwhile, while you are waiting for an idealistic guarantee of safety, 1 million traffic ac

                • Ok, let's use another example: the Boeing 737 MAX 8. Why is it a problem that two crashed, if it is OK for complex devices to kill and injure people for the sake of progress? It seems that self-driving cars are being held to a different standard, that's all.
                  • It's not a different standard; it's an application of the same standard: "what is the accepted level of risk for this product in this market." Aircraft are not held to zero crashes, by the way; they are held to a probability of severe or fatal injury being below some threshold. When the 737 MAX had two crashes in the short period it had been flying, that was evidence the threshold was not achieved.

                    • So why is there no threshold set for automated cars?
                    • How about if we hold self driving cars to the same death per passenger ratio as planes? That would be fine with me.
                    • 40,000 people are killed annually in the USA by human-controlled vehicles. If self-driving can reduce that number even slightly, that is human lives saved and worth it. Tesla is already showing that it is 10x safer than a human. If we waited for absolute perfection (which might never come, because nobody would invest in R&D for something that won't pay off for decades), thousands of people would have to die before such a standard is met .. if it can even be met without real-world deployment and iteration.
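
                      For scale, the back-of-the-envelope arithmetic behind those numbers; the mileage figure is only a rough approximation of recent US annual vehicle-miles traveled.

                        us_deaths_per_year = 40_000
                        us_vehicle_miles = 3.2e12  # roughly 3.2 trillion miles/year

                        # Fatalities per 100 million vehicle-miles.
                        human_rate = us_deaths_per_year / us_vehicle_miles * 1e8
                        print(f"human baseline: {human_rate:.2f} per 100M miles")  # ~1.25

                        # A fleet that were genuinely 10x safer would imply:
                        print(f"10x safer: {human_rate / 10:.3f} per 100M miles")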

                    • There is no guarantee that will ever be met. When they determined Tesla was 10x safer, they compared it to all vehicles, down to every Pinto still on the road. Well, I should hope a $100K new vehicle is safer than a Pinto. Also, you can only use Tesla Autopilot in the safest driving conditions. Cars are currently very safe, considering how many people drive every day. If we were to put 276 million of these on the road, we would quickly see that they are decades away anyway.
    • If your algorithm assumes that all humans will obey every road rule, then you are doomed to fail from day 1.

      I don't see how that's the relevant issue here. Apparently the taxi stopped halfway through a left turn in the middle of the oncoming lane because it guessed/assumed wrongly about what the other car was about to do. But I don't see any indication that the oncoming car broke any rule.

      As a motorcyclist it worries me because this is the classic accident (with human drivers). A driver in the onco

      • Re:"Assumed" (Score:4, Informative)

        by jbengt ( 874751 ) on Friday September 02, 2022 @07:23AM (#62846021)

        But I don't see any indication that the oncoming car broke any rule.

        from TFA:

        Cruise has said that the oncoming vehicle drove in the right-turn lane and was traveling at "approximately 40 mph" in a 25-mph lane before it exited the lane and proceeded forward.

    • by AmiMoJo ( 196126 )

      Was the robotaxi at fault, though? In the UK, if you hit a stationary car it's pretty much always your fault. The rule is that you should always drive at a speed that allows you to stop before hitting anything, and although that is somewhat impractical, it does mean that if two cars collide and one managed to stop before the collision, the other guy takes the blame in most cases.

      The only real exception is if the driver of the stationary vehicle did something that led directly to the crash. Common exa

    • by mjwx ( 966435 )

      If your algorithm assumes that all humans will obey every road rule, then you are doomed to fail from day 1. Roads in the real world are a wild west full of selfish drivers who routinely break any and every rule in the book, often in completely unexpected and unpredictable ways, and there is f-all driving algorithms can do about it other than letting everyone else pass first before making their own move last, at every junction.

      I'm also expecting to see a future where drivers start recognising robotaxis and other self-driving cars and begin to abuse weaknesses in their safety algorithms to gain an advantage on the road.

      This.

      Any system that relies on humans not acting like humans isn't just doomed to failure, it'll be abused at every step along the way.

      Autonomous cars are nowhere near ready to be released to the general public. The stats for the Google car look good because it was a car plus a professional driver whose job depended on being good at driving. Give your average driver an autonomous car and they'll switch off completely, and then the collisions will start happening. I suspect it'll be the flying car of our generation.

      • Re:"Assumed" (Score:4, Interesting)

        by TheGratefulNet ( 143330 ) on Friday September 02, 2022 @10:39AM (#62846605)

        Autonomous cars are nowhere near ready to be released to the general public

        cruise is one of the few that is quite far along. they are much further along than tesla is, I'd say. just a WAG from someone who works in the local area and in this field.

        tesla takes even more chances with life and limb. cruise is much more conservative and not into bragging the way elon is (he lives for that; he cannot be trusted, for many reasons).

        level 2 is pretty good in the industry, when you pay attention. it saves a lot of fatigue, but you have to manage the car in a new way that takes getting used to.

        levels beyond 2 are going to take more years. and v2c is needed, but there's no real push for it, sadly, and it's just so needed to help the middle period where it's a hybrid mix of people+machine. when it's all people or all machine, it can be pretty stable, but mix the two and it's chaos. v2c could help with that, if we went BIG on it. it will cost a lot. so we likely won't ;(

        I'd like to see some indicators on cars that have some autonomous element to them, to alert other drivers about the hybrid mix. it's ugly, but do you have better ideas on how to co-exist during the middle period, the transition period? people need to give these cars more room and not assume things about them. tailgate a tesla? you are a moron if you do. and if there's a light on the car or something to tell you, so much the better. people really need to be alerted so they change their behavior around these cars.

        it's just a fact. these are little babies growing up, and they need space around them until they get 'older'. think of it like that.

    • Forest fire, hurricane, tornado!
      Alexa, take me to safety!
      "I'm sorry, Safari is not support on Amazon Car interface."
      [You might be interested in Silk Soy milk, Click Here to begin navigation.]

      2 hours later:
      Steering wheel supplier stock up 3,000%
    • If your algorithm assumes that all humans will obey every road rule, then you are doomed to fail from day 1.

      It's a safe assumption to make and one the algorithms should be making. This is just step one obviously. Step two is ban human drivers. It's quite clear we're too stupid to operate dangerous machinery.

  • What are they hiding?
    but anyway, they had better save that video for any lawsuits or government requests.

  • just wait for one to stop on railroad tracks

  • Almost took a program manager job with them. Even during the interview process, it felt like they didn't have a good grasp on the elements of the problem. Feels like I dodged that bullet (or oncoming vehicle - pick your metaphor).

    • Almost took a program manager job with them. Even during the interview process, it felt like they didn't have a good grasp on the elements of the problem. Feels like I dodged that bullet (or oncoming vehicle - pick your metaphor).

      If that is so, you could have helped them by joining -- unless they came off as arrogant or haughty. Still, it sounds like you jumped to conclusions from this one accident, which makes me wonder if you're a prick too. Anyone can make a mistake -- that's a fact. Anyone who doesn't make a mistake once in a while is likely wasting their life by not challenging themselves to pursue difficult problems. If NASA had fired people every time a rocket exploded or there was an accident, we would have never gotten to the moon.

    • by gweihir ( 88907 )

      If that is your conclusion here, they were lucky you did not take the job.

  • So would all the crack engineers out here on the net like to step up and explain how this shows that the use of LIDAR is so superior to Tesla's vision-only architecture?

    • Of course not; they will still claim their approach is better than Tesla's. I think we would see things differently if they had 3 million of these cars on the road. Then they would create new excuses.
    • by Whibla ( 210729 )

      So would all the crack engineers out here on the net like to step up and explain how this shows that the use of LIDAR is so superior to Tesla's vision-only architecture?

      Well, I'm not going to do that, because that's not what this shows.

      I will point out the difference between this accident and those that Teslas have been involved in though: In this accident the robo-taxi was stationary and was hit by another car; in (most of?) the accidents involving Teslas the Tesla was the car that was moving.

      Without 'seeing' the accident location from the point of view of the cars, it's difficult to apportion responsibility for the crash and, from a legal perspective, I don't know where fault lies.

      • by jbengt ( 874751 )

        One does wonder if this isn't an argument in favour of roundabouts rather than cross-junctions . . .

        You're not going to be able to build roundabouts for every existing intersection in a big, crowded city. For one thing, there's just not enough room.

        • by Whibla ( 210729 )

          You're not going to be able to build roundabouts for every existing intersection in a big, crowded city. For one thing, there's just not enough room.

          True enough, though mini-roundabouts are a thing. Surely inner city intersections would be controlled by traffic lights though, so you'd never be turning across 'live' oncoming traffic?

    • by gweihir ( 88907 )

      Simple: You are asking the wrong question. This was not about the car having a perception problem.

  • "in the interest of transparency" of course!

  • I'd rather encounter a robotaxi with a minor programming issue on the road than a Top Gear-stoked boy racer.
  • You don't need to predict what other vehicles WILL do; you need to predict what other idiots MIGHT do.
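
    One hedged way to read that: plan against the set of maneuvers another driver MIGHT take, gated on the worst plausible case rather than the single most likely one. The maneuver names, probabilities, and risk numbers below are illustrative placeholders only.

      CANDIDATE_MANEUVERS = {      # P(other driver does this)
          "continue_straight": 0.70,
          "turn_right": 0.20,
          "run_the_light": 0.05,
          "swerve": 0.05,
      }

      RISK_IF_WE_PROCEED = {       # hypothetical collision risk per maneuver
          "continue_straight": 0.90,
          "turn_right": 0.01,
          "run_the_light": 0.95,
          "swerve": 0.30,
      }

      def should_proceed(threshold: float = 0.05) -> bool:
          """Proceed only if no plausible maneuver carries unacceptable risk:
          gate on the worst weighted case, not just the expected behavior."""
          worst = max(p * RISK_IF_WE_PROCEED[m]
                      for m, p in CANDIDATE_MANEUVERS.items())
          return worst < threshold

      print(should_proceed())  # False: going straight alone is far too risky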

  • Yes, automated vehicles will also have crashes. Yes, some will be their fault. But overall they will injure and kill far fewer people than human drivers do. If this tech were not relatively new, people would not care. But as it is new, the usual panicky crowd goes through the roof.

    Requiring self-driving vehicles to be perfect is about as stupid as requiring vaccines to be 100% effective. Both merely say the person posing that requirement has no clue how reality works.
