Transportation

Cruise Confirms Robotaxis Rely On Human Assistance Every Four To Five Miles (cnbc.com) 52

Lora Kolodny reports via CNBC: Cruise CEO and founder Kyle Vogt posted comments on Hacker News on Sunday responding to allegations that his company's robotaxis aren't really self-driving, but instead require frequent help from humans working in a remote operations center. First, Vogt confirmed that the General Motors-owned company does have a remote assistance team, in response to a discussion under the header, "GM's Cruise alleged to rely on human operators to achieve 'autonomous' driving." The CEO wrote, "Cruise AVs are being remotely assisted (RA) 2-4% of the time on average, in complex urban environments. This is low enough already that there isn't a huge cost benefit to optimizing much further, especially given how useful it is to have humans review things in certain situations."

Cruise recently took the drastic step of grounding all of its driverless operations following a collision that injured a pedestrian in San Francisco on October 2. The collision and Cruise's disclosures around it led state regulators to strip the company of its permits to operate driverless vehicles in California unless there is a driver aboard. [...] A New York Times story followed last week, diving into issues within Cruise that may have led to the safety problems and to the setback for Cruise's reputation and business. The story included a stat that, at Cruise, workers intervened to help the company's cars every 2.5 to five miles. Vogt explained on Hacker News that the stat was a reference to how frequently Cruise robotaxis initiate a remote assistance session.

He wrote, "Of those, many are resolved by the AV itself before the human even looks at things, since we often have the AV initiate proactively and before it is certain it will need help. Many sessions are quick confirmation requests (it is ok to proceed?) that are resolved in seconds. There are some that take longer and involve guiding the AV through tricky situations. Again, in aggregate this is 2-4% of time in driverless mode." CNBC asked Cruise to confirm and provide further details on Monday. The Cruise spokesperson wrote in an e-mail that a "remote assistance" session is triggered roughly every four to five miles, not every 2.5 miles, in Cruise's driverless fleet. [...] CNBC also asked Cruise about typical response times for remote operations and how remote assistance workers at Cruise are trained. "More than 98% of sessions are answered within 3 seconds," the spokesperson said. As for the ratio of remote assistance advisors to driverless vehicles on the road, the Cruise spokesperson said, "During driverless operations there was roughly 1 remote assistant agent for every 15-20 driverless AVs."
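As a rough cross-check of those figures (the 12 mph average urban speed below is purely an illustrative assumption; Cruise has not published an average speed), "one session every four to five miles" and "2-4% of driverless time" are consistent with sessions that average well under a minute of human attention:

```python
# Back-of-the-envelope check of the figures quoted above.
# ASSUMPTION: ~12 mph average speed in dense urban traffic (illustrative, not a Cruise number).
avg_speed_mph = 12.0

for miles_per_session in (4.0, 5.0):            # one RA session every 4-5 miles (per Cruise)
    minutes_between = miles_per_session / avg_speed_mph * 60.0
    for ra_fraction in (0.02, 0.04):            # 2-4% of driverless time is assisted (per Vogt)
        seconds_per_session = minutes_between * 60.0 * ra_fraction
        print(f"1 session per {miles_per_session:.0f} mi ≈ every {minutes_between:.0f} min of driving; "
              f"at {ra_fraction:.0%} of time, that's ≈ {seconds_per_session:.0f} s of assistance per session")
```

Under that assumed speed, a session fires every 20-25 minutes of driving and averages roughly 25-60 seconds of human involvement, which lines up with Vogt's description of most sessions as quick confirmations resolved in seconds.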


Cruise Confirms Robotaxis Rely On Human Assistance Every Four To Five Miles

  • by Tablizer ( 95088 ) on Tuesday November 07, 2023 @07:24PM (#63988629) Journal

    Your bot-car is probably being driven by the same person answering your banking question... at the same time!

    Multitask or no grain for you!

  • by marcle ( 1575627 ) on Tuesday November 07, 2023 @07:28PM (#63988651)

    Pay no attention to the man behind the curtain.

    • by timeOday ( 582209 ) on Tuesday November 07, 2023 @08:13PM (#63988743)
      Anybody who feels hoodwinked by this simply hasn't been paying attention. Here's an article from Sep 2022:

      https://www.thedrive.com/tech/... [thedrive.com]

      Cruise's CEO Kyle Vogt was asked if he could see a point where remote human oversight could be removed from the company's autonomous vehicle fleet. His surprising response: "Why?" Edge cases in autonomous driving often require human intervention to get around, and currently, Cruise uses a staff of remote human operators to help with those situations. Cruise has never previously mentioned that this will likely be a long-term solution; however, according to Reuters, Vogt's statements make it clear that people will still be in the loop for a long time to come.

      • by ToasterMonkey ( 467067 ) on Wednesday November 08, 2023 @12:19AM (#63989105) Homepage

        Anybody who feels hoodwinked by this simply hasn't been paying attention. Here's an article from Sep 2022:

        https://www.thedrive.com/tech/... [thedrive.com]

        Cruise's CEO Kyle Vogt was asked if he could see a point where remote human oversight could be removed from the company's autonomous vehicle fleet. His surprising response: "Why?" Edge cases in autonomous driving often require human intervention to get around, and currently, Cruise uses a staff of remote human operators to help with those situations. Cruise has never previously mentioned that this will likely be a long-term solution; however, according to Reuters, Vogt's statements make it clear that people will still be in the loop for a long time to come.

        Hoodwinked? They're being honest about the situation; this actually makes me feel a little better about them.

        The gap between today's reality and fully autonomous, unsupervised driving is stupefyingly complex. Anyone selling that to you is full of shit.

    • by dgatwood ( 11270 )

      I just wonder how long an operator can do that before it becomes like Windows messages, where the operator gets used to clicking "Allow" and stops paying attention. I'm imagining "Run over pedestrian now? [Allow] [Deny]" messages here. :-D

  • by RightwingNutjob ( 1302813 ) on Tuesday November 07, 2023 @07:51PM (#63988699)

    My mind turns immediately to the 2005 DARPA Grand Challenge, where Red Whittaker's Carnegie Mellon team and Sebastian Thrun's winning Stanford team famously had an army of undergrads inputting GPS waypoints for the cars to follow. And in Stanford's case, the one camera staring out the front window was there just to avoid conspicuous obstacles.

    • by gweihir ( 88907 )

      The time-honored "fake it till you make it" approach. Nice for getting grant money, not so nice for delivering actually working and good solutions for hard problems.

      • The time-honored "fake it till you make it" approach. Nice for getting grant money, not so nice for delivering actually working and good solutions for hard problems.

        Well, except that this is the current solution to military "autonomous" stuff like drones, and it actually makes sense in many situations. DARPA and co. can always run another competition with an additional requirement for long breaks in communication links.

        • by gweihir ( 88907 )

          Sure it is the current "solution" for "autonomous" even if it is in no way autonomous by the actual definition of the term. And hence it is not what the whole implicit promise is. Essentially, "fill their heads with fantasies, deliver something that somehow partially solves the problem and take their money". SOP for military "innovation".

          • Well, in Ukraine, both the Ukrainians and the Russians seem to be using hybrid systems that have manual guidance most of the time and then terminal AI-based guidance for when they lose signal close to the target, so that electronic countermeasures like drone guns are less effective. So I see this as more of a stepping stone.

            Whilst, right now, Cruise sees 4% of time under human control as not a problem, I'm guessing that when competition comes into the market and pushes down prices, s

  • by 93 Escort Wagon ( 326346 ) on Tuesday November 07, 2023 @07:54PM (#63988709)

    I don't have a problem with this approach - as long as they don't misrepresent things. I'd rather they do this versus deciding they have to be 100% autonomous even if it's not fully baked.

  • smugmode

    https://m.slashdot.org/thread/... [slashdot.org] /smugmode

  • Ah (Score:5, Funny)

    by cascadingstylesheet ( 140919 ) on Tuesday November 07, 2023 @08:10PM (#63988733) Journal
    "My name is Mechanical Turk; I'll be your driver today ..."
  • What happens if the remote link is slow or down? Is that why Cruise keeps blocking streets for many minutes? Is that why Cruise dragged a pedestrian 20 feet after hitting them?

    • by Anonymous Coward
      Nah. The link was fast; that's why it only dragged the pedestrian 20 feet.

      If it was slow it would have dragged the pedestrian 200 feet or more. ;)
    • by gweihir ( 88907 )

      The vehicle will obviously trigger an explosive charge to hide the shame of the operator! What else would they do?

  • by topham ( 32406 ) on Tuesday November 07, 2023 @08:20PM (#63988753) Homepage

    And this is why there's the requirement for "5G" for "self-driving".

    Fraud, all the way down.

  • I keep reading articles in investment publications about how Tesla is falling behind its competitors in autonomous vehicle software, because Waymo and Cruise are "deployed" and Tesla hasn't released their product yet.

    Seems like the "leaders" aren't quite leading as much as they thought.

  • Cruise CEO here. Some relevant context follows.

    Cruise AVs are being remotely assisted (RA) 2-4% of the time on average, in complex urban environments. This is low enough already that there isn't a huge cost benefit to optimizing much further, especially given how useful it is to have humans review things in certain situations.

    The stat quoted by nyt is how frequently the AVs initiate an RA session. Of those, many are resolved by the AV itself before the human even looks at things, since we often have the AV initiate proactively and before it is certain it will need help. Many sessions are quick confirmation requests (it is ok to proceed?) that are resolved in seconds. There are some that take longer and involve guiding the AV through tricky situations. Again, in aggregate this is 2-4% of time in driverless mode.

    In terms of staffing, we are intentionally over staffed given our small fleet size in order to handle localized bursts of RA demand. With a larger fleet we expect to handle bursts with a smaller ratio of RA operators to AVs. Lastly, I believe the staffing numbers quoted by nyt include several other functions involved in operating fleets of AVs beyond remote assistance (people who clean, charge, maintain, etc.) which are also something that improve significantly with scale and over time.

    https://news.ycombinator.com/i... [ycombinator.com]
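    The staffing paragraph above is essentially a pooling argument. Below is a minimal sketch of that effect using a standard Erlang-B blocking calculation with purely illustrative numbers; the "one session per AV every ~20 minutes" and "~30 seconds of agent time per session" inputs are assumptions, not Cruise data.

```python
# Pooling effect behind "1 agent per 15-20 AVs": at the same load per agent, the chance
# that a burst finds every agent busy shrinks as the pool grows (Erlang-B blocking).
# ASSUMPTIONS (illustrative only, not Cruise data): each AV triggers a session about
# once every 20 minutes, and a session ties up an agent for about 30 seconds.

def erlang_b(offered_load: float, agents: int) -> float:
    """Probability that all agents are busy, via the standard Erlang-B recursion."""
    b = 1.0
    for k in range(1, agents + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

sessions_per_second_per_av = 1.0 / (20 * 60)   # assumed: one session per AV per 20 min
session_duration_s = 30.0                      # assumed: 30 s of agent time per session

for avs, agents in [(15, 1), (150, 10), (1500, 100)]:   # same 15:1 ratio at three scales
    load_erlangs = avs * sessions_per_second_per_av * session_duration_s
    print(f"{avs:5d} AVs / {agents:3d} agents -> P(all agents busy) ≈ {erlang_b(load_erlangs, agents):.1%}")
```

    Under those assumed numbers, the chance that a burst finds every agent busy falls from roughly a quarter with one agent covering 15 AVs to effectively zero with 100 agents covering 1,500 AVs, which is the sense in which a larger fleet can handle bursts with a leaner agent-to-AV ratio.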

    • by ChoGGi ( 522069 )

      Oof, I didn't have much sleep last night, and I figured this was /. being slow as usual.

      RTFS? eh

  • I don't see why we keep focusing on GM failures in cars, electronics, computers and such.

    It's GM, they have never made anything that functions particularly well.

    GM's best possible marketing slogan is "Buy GM, we work like a Lada, but we're still less shitty than Ford"
  • so now a car needs a low-lag, high-bandwidth data link at all times?
    so maybe cell + sat link, and that data plan better have uncapped/unlimited data with full NA roaming

  • The only actually autonomous self-driving tech that works is from Daimler-Benz, and it only works in quite limited conditions. The rest is faking it, exposing people to unacceptable risks, or not actually working (see roads clogged by "autonomous" vehicles).

    Do not get me wrong, autonomous self-driving vehicles are the future. But it will still take quite a while until they are ready. Technology like this is _slow_ to get to maturity. These questions have been worked on intensively for mo

    • by ledow ( 319597 )

      The problem is that while autonomous self-driving vehicles are the future, the assumption that they will be AI-controlled is a dumb one for anyone to make.

      If you want autonomous vehicles, we could have done it 40 years ago. Paint a line down the road, put encoding into the line, put signal emitters of some kind on junctions and barriers, etc.

      Change the ROADS (ever so slightly and a drop in the ocean compared to their maintenance costs), and then the job becomes infinitely easier for a computer to handle.

      People trying to re

    • AFAICS Mercedes's "level 3" is really just follow-the-duckling in heavy traffic. When there is a lead car with a human driver, an expert system can mostly be trusted when staying in lane, because it leaves the decision making to the human-driven car (the control loops and detection systems won't be expert systems of course, just the system deciding to stay in level 3 and follow the preceding duckling).

      If this type of level 3 becomes common, a single human driver being an idiot can turn follow the duckling into

      • by gweihir ( 88907 )

        Sure. It still is the only level 3 system in existence that can master some normal traffic situations on the level of a human driver. _All_ others need constant monitoring by a human or have very low performance (slow speed) or both.

  • Driverless cars solve nothing. They live up to DARPA's challenge, to kill people.
  • That's the biggest issue with AI today - yes, it does things, but it has very little autonomy. Not just in driving cars, but also in all the other applications.
  • Lots of places put stop signs or one-way signs on the wrong side of the road (where they don't count) just to save installing a pole, because there was already a pole on the left side that they could use.

    If a driverless car drove through such a stop sign and killed people, it would not be its fault.

  • Problem solved. Science.
  • by sudonim2 ( 2073156 ) on Wednesday November 08, 2023 @07:18AM (#63989501)

    The fact that interventions last less than 3 seconds on average isn't a testament to the software being so good. It's a testament to how easy the problems were for a human to solve and how bad the software is at doing something humans do with ease. At every turn, these supposed "artificial intelligences" turn out to be backed up by actual, direct human labor that an elaborate technological mechanism has been built around to obscure. Every intelligent agent ever created has turned out to be a mechanical Turk. Like Charlie Brown, technophiles keep swinging at the ball every time an anti-social little brunette tees one up for them.

    • I'm not sure I even believe them that interventions last less than 3 seconds, unless a human is monitoring the road view continuously. Just think about how long it would take you to direct your attention to something that pops up on your screen, figure out what exactly the thing wants from you, then make a decision, and finally communicate it to the robot. This cannot possibly take 3 seconds.

      • That's the thing; it absolutely can! There are various reflex and twitch games that show that humans can react to situations quite quickly if they're simple and predictable enough. The reason it takes 3 seconds for most interventions is because the situations are relatively simple. Bad lane markers, construction, unexpected object in the road; these all would take a human fractions of a second to analyze and respond to. If you drove today, you did it dozens of times a minute.

  • by jbmartin6 ( 1232050 ) on Wednesday November 08, 2023 @08:41AM (#63989627)
    I'm not clear why anyone thinks this could be otherwise. This is what IT systems are like, because it is extremely hard and expensive to produce truly reliable systems. It is much cheaper to stop trying once you hit the mid-90s (percent) and outsource the rest to a set of human backstops. I look around my company's offices and see much the same: an ever-proliferating team of specialists babysitting the automation, which breaks 2-4% of the time. For instance, today I am troubleshooting agents that are sending malformed event logs to our collectors, rather than doing my supposedly real job of security intelligence analysis and detection engineering.
  • Mechanical Turk, GM version.
