AI Businesses Transportation

Cruise's CEO Resigns (techcrunch.com) 48

An anonymous reader shared this report from TechCrunch: Kyle Vogt, the serial entrepreneur who co-founded and led Cruise from a startup in a garage through its acquisition and ownership by General Motors, has resigned, according to an email sent to employees Sunday evening...

The executive shakeup comes less than a month after the California Department of Motor Vehicles suspended Cruise's permits to operate self-driving vehicles on public roads after an October 2 incident that saw a pedestrian — who had been initially hit by a human-driven car and landed in the path of a Cruise robotaxi — run over and dragged 20 feet by the AV. A video, which TechCrunch also viewed, showed the robotaxi braking aggressively and coming to a stop over the woman. The DMV's order of suspension stated that Cruise withheld about seven seconds of video footage, which showed the robotaxi then attempting to pull over and subsequently dragging the woman 20 feet... [M]ore layoffs are expected at the company, which employs about 4,000 full-time employees.

TechCrunch notes that Vogt previously co-founded Justin.tv, Socialcam, and Twitch, and shares this quote from an email that Vogt sent to all employees Sunday evening: "The startup I launched in my garage has given over 250,000 driverless rides across several cities, with each ride inspiring people with a small taste of the future...

"The status quo on our roads sucks, but together we've proven there is something far better around the corner."
  • could have prevented this mishap.
    Who tried to save a few pennies by omitting such an essential camera?

    • by quantaman ( 517394 ) on Sunday November 19, 2023 @11:53PM (#64017467)

      could have prevented this mishap.
      Who tried to save a few pennies by omitting such an essential camera?

      The problem isn't the cameras, it's the AI.

      I saw a video showing a similar issue with a Tesla.

      They were doing tests with a cutout of a small child and having it jump out into the road to see if the car would stop, and it didn't do great. But in the final test with the latest "FSD" it came to an almost complete stop, only lightly bumping the "child." This was just enough to knock over the cutout, and when the cutout fell to the ground the Tesla lost sight of it, forgot it was there, and proceeded to drive over it.

      The trouble is these systems are getting fairly good at seeing objects and tracking them, but when something vanishes they don't really have the judgment to understand where it went, or the nuanced memory required to say: oh, that person who just appeared in front of me didn't go anywhere and is probably lying in front of my car.

      Maybe you can nab that situation with enough special cases, but what about the small child who just vanished behind a parked car? Or the deer running towards the road that dropped out of sight because it ran down into the ditch?

      Self-driving is still a very difficult problem.
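For anyone curious, the "where did it go" bookkeeping doesn't need exotic machinery. Here's a toy tracker sketch (all names hypothetical, nothing like a production AV stack) showing the idea: coast a lost detection for a while instead of deleting it the moment it leaves view.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    x: float            # last estimated position along the road (m)
    vx: float           # last estimated velocity (m/s)
    missed: int = 0     # consecutive frames with no detection

COAST_FRAMES = 30       # keep "remembering" a vanished object for ~3 s at 10 Hz

def update(track: Track, detection: Optional[float], dt: float) -> Optional[Track]:
    """Advance one frame; return None only after the coast window expires."""
    if detection is not None:
        track.vx = (detection - track.x) / dt
        track.x = detection
        track.missed = 0
    else:
        # No detection this frame: dead-reckon instead of forgetting.
        track.x += track.vx * dt
        track.missed += 1
        if track.missed > COAST_FRAMES:
            return None   # only now may the object stop "existing"
    return track

# Pedestrian seen at 10 m, then 10.5 m, then occluded for five frames:
t = Track(x=10.0, vx=0.0)
t = update(t, 10.5, 0.1)           # velocity estimated at 5 m/s
for _ in range(5):
    t = update(t, None, 0.1)       # occluded: position keeps advancing
assert t is not None and t.missed == 5
```

The fallen-child and fallen-pedestrian cases are exactly the ones where a "delete on lost detection" policy fails and a coasting policy at least keeps a hazard hypothesis alive.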

      • The 'AI' can only do what it was programmed to do. Contrary to the common AI abbreviation there is no intelligence involved here, it is all algorithms. This is on the developers. If a living object was detected and any sort of defensive maneuver was initiated, the fail safe should be to stop and not move until the vehicle operator/nanny clicks ok on a bazillion proceed prompts.
        • by AuMatar ( 183847 )

          It's not all algorithms. It's all data. AI is really just studying lots of data looking for patterns, and hoping that if it finds a pattern in the "good" data, following that pattern will get a "good" result. What this means is that they don't have enough data about collisions with objects for their algorithm to detect it should stay stopped, or that the data they do have that says "nothing detected, hit the gas" is strong enough to override it.

            • Getting into semantics here, but the data is manipulated with an algorithm. If they are indeed using that data to make life-endangering decisions then they deserve to be shut down from public roads. You have to bake in fail-safes, because "reading the data" is unpredictable; even if you test it and it works once, as that data changes the outcome may change. For safety you can't depend on data-driven behavior that may change; it has to be hard-coded.
            • by AuMatar ( 183847 )

              It's not semantics. There are no algorithms involved, really. Nobody is adding logic like "if(camera_sees_object()) then brake_hard()". The entire point of AI and machine learning is NOT to write those kinds of algorithms, because the complexity is such that it can't be done. Instead, it's all pattern matching. That's what machine learning is.

        • The 'AI' can only do what it was programmed to do. Contrary to the common AI abbreviation there is no intelligence involved here, it is all algorithms. This is on the developers. If a living object was detected and any sort of defensive maneuver was initiated, the fail safe should be to stop and not move until the vehicle operator/nanny clicks ok on a bazillion proceed prompts.

          And then you block the street which holds up emergency vehicles.

          Or you're in an intersection and you're now getting t-boned.

          Even a fail-safe is tough.

      • Re: (Score:3, Informative)

        The majority of people who hit something they did not see would attempt to pull over, just like the AI did.
        • by martin-boundary ( 547041 ) on Monday November 20, 2023 @02:35AM (#64017605)
          Wrong. They would hear the blood-curdling screams as they were trying to slow down and assess, maybe lean or step out of the car to see, then decide what the most appropriate thing to do is.

          It's not ok to drag a person around, unless it is. It's not ok to stop, unless it is. It's not ok to pull over, unless it is. It's not ok to accelerate unless it is.

          Human judgment. AI's don't have it.

        • The majority of people who hit something they did not see would attempt to pull over, just like the AI did.

          Except the AI did see it.

          It just "forgot", because the moment it goes out of view for a couple of seconds it effectively doesn't exist for the AI.

          The person (assuming the person they hit was unconscious and couldn't scream) would also notice their car's driving felt off, and would quickly surmise that the thing they hit was still underneath, and stop. The AI wouldn't put those things together. It would just think "oh, I have a mechanical error, better pull over!".

      • by AmiMoJo ( 196126 )

        Waymo seems to be far ahead of everyone else. If you look at their videos from 6-7 years ago, they show how their system tracks objects even when they are occluded. When it sees a pedestrian it predicts the path they will take, and if it then loses sight of them it still has a good idea of where they are likely to be.

        I find it interesting that Waymo identified the need for this long ago, but other companies either decided they didn't need it, or it just didn't occur to them and they thought that vision alone would be enough. Tesla seem to be in the latter category, with the assumption being that with enough cameras they can always see everything they need to know about.

        • by olau ( 314197 )

          I don't think this is a fair characterization.

          Although it appears the techniques in use differ, it's safe to say that Google/Waymo had a headstart, and have gotten around to solving more problems.

          For instance, for Tesla, if you listen to the first talk by Andrej Karpathy years ago, he was talking about modeling more things with neural networks, but they still haven't released a version where planning is under network control. So until now, their system hasn't actually learned to improve the driving itself,

          • by AmiMoJo ( 196126 )

            I think relying on neural networks to do things like path prediction is a bad idea. Waymo uses algorithms and they have proven to be reliable. They doubtless consume a lot less energy too, and don't require a supercomputer and gigawatt hours of electricity to train either.

            Recognition with just cameras is a real issue too. They can train their NNs to recognize certain objects and certain situations, but not give it the general intelligence needed to understand things that are well outside the training data.
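The kind of non-NN path prediction being described can be surprisingly simple. Here's a crude constant-velocity sketch (an illustrative assumption only, not Waymo's actual method; all names and numbers are made up):

```python
def predict(positions, horizon_s, step_s=1.0):
    """Constant-velocity extrapolation from the last two observed positions
    (assumes observations arrive one per second, purely for illustration)."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0                    # metres per observation
    steps = int(horizon_s / step_s)
    return [(x1 + vx * k * step_s, y1 + vy * k * step_s)
            for k in range(1, steps + 1)]

def crosses_lane(path, lane_y_min=-1.5, lane_y_max=1.5):
    """True if any predicted point falls inside the ego lane's lateral band."""
    return any(lane_y_min <= y <= lane_y_max for _, y in path)

# A pedestrian on the sidewalk (y = 5 m, then 4 m) walking toward the lane:
seen = [(20.0, 5.0), (20.0, 4.0)]
future = predict(seen, horizon_s=3.0)
assert crosses_lane(future)        # they'll be in the lane within 3 s: slow down
```

A deterministic rule like this is trivially auditable and cheap, which is presumably the appeal over an end-to-end network; the hard part is everything it ignores (turning, stopping, occlusion, intent).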

        • by Viol8 ( 599362 )

          "they can always see everything they need to know about."

          Unless it's a semi trailer moving across the road, then not so much.

        • by Pimpy ( 143938 )

          Far ahead in the number of NHTSA investigations, crashes, and running people over, you mean. All of these American companies are playing fast and loose with safety, and the consequences are gradually catching up with them. Allowing these companies to self-regulate is never going to fly.

        • Waymo seems to be far ahead of everyone else. If you look at their videos from 6-7 years ago, they show how their system tracks objects even when they are occluded. When it sees a pedestrian it predicts the path they will take, and if it then loses sight of them it still has a good idea of where they are likely to be.

          I find it interesting that Waymo identified the need for this long ago, but other companies either decided they didn't need it, or it just didn't occur to them and they thought that vision alone would be enough. Tesla seem to be in the latter category, with the assumption being that with enough cameras they can always see everything they need to know about.

          I find it notable how Waymo stays under the radar, so it does seem like they're being quite a bit safer.

          I still think occlusion is a fundamentally tough problem. Consider someone carrying a large umbrella: the car could easily see that as a separate object to track (even a potential person). Then the person folds up the umbrella and, poof, the object is gone.

          Now, instead of an umbrella assume it was a toddler who got picked up (and has vanished according to the car), and they might get put back down at any

      • by mjwx ( 966435 )
        Forget the "identifying a child" part. "AI" can't even make simple decisions like determining who should go first when two cars enter an intersection. Given the default option is to just stop, this leads to gridlock.
  • Who gives a shit about a CEO? Who should give a shit about a CEO? No one.

  • He moved fast and broke his career

  • by backslashdot ( 95548 ) on Sunday November 19, 2023 @11:54PM (#64017469)

    What is this Robotaxi BS? GM can't even do ADAS in their cars. Make ADAS a thing and then get autonomous done.

    • by eepok ( 545733 )

      Exactly.

      The industry fallaciously believed they could leapfrog 10-20 years of technological development, social acclimation, and legal precedent and go from "automobiles with no driver assistance" to "fully autonomous vehicles". It was a stupid assertion 10 years ago and it's a stupid assertion today in the age of "AI" (for very loose definitions of "AI"). Here are the problems:

      1. Driving is HARD.

      Choosing to take actions while driving is easy, but taking in the MASSIVE amount of data that humans do while d

  • Musk also keeps promising it's just around the corner too.

    I'm sure more controlled environments, like planes and trains, and maybe long-haul road, will make headway but the average car is not that.

    • by Khyber ( 864651 )

      Planes and trains are pretty much fully-automated as-is. Long-haul road is coming, already seen plenty of driverless trucks on the 15 between Barstow and Vegas.

      • Planes are fully automated in the portions of the route they fly far away from anything, or where an ATC has cleared a path for them to use. Trains are automated because they are tracked by a centralized system at all times. Neither setup works very well for cars on congested public roads that are governed by the traffic rules that humans use.

        • That's why we should use elevated PRT hanging from a monorail. Then the cars don't have to be self-driving, the system can drive them; you eliminate the rolling losses of pneumatic tires, and instead of these long ribbons of asphalt that cost so much to build and maintain you have pylons and a narrow ribbon of steel. Cars and roads brought us a long way, but they ceased to make sense decades ago. We have the technology for self-driving, and it is called rail.

      • The issue with planes and trains is the same as this robotic thought. AIs today are great at mimicking your sympathetic nervous system. If you could do it automatically without thinking, an AI can probably do it too. As soon as things go outside the norm though, as soon as you actually need to think, because you've not encountered this scenario before, the AI is screwed and it won't even know it.

        What do you think an AI would have done in Sully's place? What about for Air France

        • Sully: Birds!

          AI: Branta canadensis, a.k.a. Canada geese.

          OS: Both engines appear to be offline: Abort, Retry, Fail?

          • Sully might be a modern-day John Henry. If you recall, he needed 30 seconds to analyze the situation, determine what happened, and pick the best course of action. It's not unreasonable to assume at some point an AI-driven plane could determine that in milliseconds and actually have enough speed and altitude to make it to the airport instead of a risky water landing. I also think Sully was the incredible exception and not the norm.
      • I think trains certainly could be automated. Some subways and airport trains are. There's still an engineer for the train I ride when I decide I need to get out of the house for the day and venture into the city of Philadelphia for work. Of course we're talking SEPTA here.
  • A video, which TechCrunch also viewed, showed the robotaxi braking aggressively and coming to a stop over the woman. The DMV's order of suspension stated that Cruise withheld about seven seconds of video footage, which showed the robotaxi then attempting to pull over and subsequently dragging the woman 20 feet...

    A woman died in that hit-and-run, meaning someone is facing serious criminal charges.

    Cruise withheld the 7 seconds from the DMV, did they also withhold it from the detectives? Because presumably that

    • Can you give a source for this? The original article stated that the condition of the woman in the hospital is critical. The latest I could find was from October 24, from the company's statement: "First and foremost, our thoughts are with the individual, and we are hoping for their complete recovery."

  • How's the lawsuit from that woman coming along? With regulators and new liability requirements coming in soon, it sounds like he's decided he's made enough money and it's a good idea to skedaddle.

    Would probably sell any stock he has too, except he would have to file announcements with the SEC that would draw attention to it.

    • I wonder how much of this is because of GM? I have no idea if this guy started out virtuous or not, but maybe GM came in and told him to take some shortcuts which he didn't like - and now there are suits against the company which he doesn't feel like he ought to defend.

  • by Anonymous Coward

    According to the DMV, Cruise representatives showed video footage of the crash captured by the robotaxi’s onboard cameras only leading up to the point where the driverless vehicle made its first complete stop after braking hard.

    “Cruise did not disclose that any additional movement of the vehicle had occurred after the initial stop of the vehicle,” the DMV said in its Order of Suspension. The DMV alleges it received from Cruise the full footage of the video on Oct. 13 — 11 days after

  • To me it doesn't even sound like Cruise bothered to gather all the designers into a single room and have a brainstorming session about the main things that could go wrong while driving, such as a person being somewhere dangerous to them that is off camera, or something unexpected happening to a pedestrian caused not by the AI car but by some other car on the road with it. If they had done that, they would have realized that they need at least five times more cameras looking everywhere and much better AI.
  • Self-driving cars. You'll never be able to quantify how many lives they've actually saved until they're in mass operation. Which will be massive amounts of people because, let's face it, people drive like idiots (not me; I'm perfect). You'll never get to mass operation because you can only count the people they've killed by not knowing any better and just doing what they've been trained to do. This was ultimately caused by a human running into a person in the first place and unexpectedly throwing a pede
  • Autonomous vehicles are a luxury promoted as necessity.

    There is enough money in play that casualties are not a moral concern but merely a financial risk when they generate negative publicity.

  • How much did he finally walk away with from this scam?
