Waymo Explores Using Google's Gemini To Train Its Robotaxis (theverge.com) 42

Waymo is advancing autonomous driving with a new training model for its robotaxis built on Google's multimodal large language model (MLLM) Gemini. The Verge reports: Waymo released a new research paper today that introduces an "End-to-End Multimodal Model for Autonomous Driving," also known as EMMA. This new end-to-end training model processes sensor data to generate "future trajectories for autonomous vehicles," helping Waymo's driverless vehicles make decisions about where to go and how to avoid obstacles. But more importantly, this is one of the first indications that the leader in autonomous driving has designs to use MLLMs in its operations. And it's a sign that these LLMs could break free of their current use as chatbots, email organizers, and image generators and find application in an entirely new environment on the road. In its research paper, Waymo is proposing "to develop an autonomous driving system in which the MLLM is a first class citizen."
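The "future trajectories" idea can be sketched in a few lines. This is an illustrative stand-in, not Waymo's actual interface: the `plan_trajectory` function, the `Waypoint` type, and the straight-line output are all assumptions made up for the example.

```python
# Hypothetical sketch of the interface an EMMA-style end-to-end model
# exposes: raw camera frames plus a routing instruction go in, and a
# short list of future waypoints comes out. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float  # meters ahead of the vehicle
    y: float  # meters left (+) / right (-) of the vehicle

def plan_trajectory(camera_frames: list, instruction: str,
                    horizon: int = 5) -> list[Waypoint]:
    """Stand-in for the multimodal model. In the real system the frames
    and the text instruction would be tokenized together and waypoints
    decoded by the model; here we just emit a straight line."""
    return [Waypoint(x=2.0 * (i + 1), y=0.0) for i in range(horizon)]

trajectory = plan_trajectory(camera_frames=[], instruction="continue straight")
print([(wp.x, wp.y) for wp in trajectory])
```

The point of the sketch is the shape of the problem: one model maps sensors plus intent directly to a trajectory, rather than passing data through separate perception, prediction, and planning modules.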

The paper outlines how, historically, autonomous driving systems have developed specific "modules" for the various functions, including perception, mapping, prediction, and planning. This approach has proven useful for many years but has problems scaling "due to the accumulated errors among modules and limited inter-module communication." Moreover, these modules could struggle to respond to "novel environments" because, by nature, they are "pre-defined," which can make it hard to adapt. Waymo says that MLLMs like Gemini present an interesting solution to some of these challenges for two reasons: these models are "generalists" trained on vast sets of data scraped from the internet "that provide rich 'world knowledge' beyond what is contained in common driving logs"; and they demonstrate "superior" reasoning capabilities through techniques like "chain-of-thought reasoning," which mimics human reasoning by breaking down complex tasks into a series of logical steps.
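To make "chain-of-thought" concrete: the idea is to prompt the model to surface intermediate inferences before answering. The prompt wording below is an invented illustration, not taken from Waymo's paper.

```python
# Illustrative only: a direct question versus a chain-of-thought prompt
# for the same driving decision. The scenario and wording are assumptions.
direct_prompt = "A ball rolls into the road ahead. Brake or continue?"

cot_prompt = (
    "A ball rolls into the road ahead.\n"
    "Reason step by step:\n"
    "1. What object is in the road?\n"
    "2. What often follows that object (e.g. a child chasing it)?\n"
    "3. What is the safest maneuver given step 2?\n"
    "Then answer: brake or continue?"
)

# The chain-of-thought version forces the model to state its intermediate
# inferences, the "series of logical steps" the summary describes.
print(cot_prompt)
```

The hope described in the paper is that this stepwise decomposition, combined with broad world knowledge, helps on scenarios that never appear in driving logs.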

Waymo developed EMMA as a tool to help its robotaxis navigate complex environments. The company identified several situations in which the model helped its driverless cars find the right route, including encountering various animals or construction in the road. [...] But EMMA also has its limitations, and Waymo acknowledges that there will need to be future research before the model is put into practice. For example, EMMA couldn't incorporate 3D sensor inputs from lidar or radar, which Waymo said was "computationally expensive." And it could only process a small number of image frames at a time. There are also risks to using MLLMs to train robotaxis that go unmentioned in the research paper. Chatbots like Gemini often hallucinate or fail at simple tasks like reading clocks or counting objects.
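The frame-count limitation is essentially a context-budget problem: if the model can only attend to the N most recent frames, older context is dropped. A minimal sketch, with a made-up frame budget and placeholder frame objects:

```python
# Rolling buffer of camera frames, as a model with a small context budget
# would see them. MAX_FRAMES is a hypothetical number, not from the paper.
from collections import deque

MAX_FRAMES = 4  # hypothetical per-inference frame budget

frame_buffer = deque(maxlen=MAX_FRAMES)
for t in range(10):            # ten incoming camera frames
    frame_buffer.append(f"frame_{t}")

print(list(frame_buffer))      # only the most recent frames survive
```

Anything that happened more than MAX_FRAMES steps ago, such as a pedestrian who briefly stepped out of view, is simply gone from the model's input.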

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Hey maaan, ... (Score:4, Interesting)

    by VeryFluffyBunny ( 5037285 ) on Friday November 01, 2024 @07:13PM (#64913703)
    ...hallucinating taxis... that's so way out, it's far. Groovy maaan!
    • At least they acknowledge that's an issue. The question is: will they put hallucinating robots on the streets?

      • Why wouldn't they? The difference between a hallucinating human taxi driver and a hallucinating AI taxi driver is that you can put the human driver in jail after an accident, ipso facto making the street a little bit safer. An AI can't be put in jail, and because a single AI model drives thousands of taxis, "retiring" it would imply removing thousands of taxis. AKA too big to fail. AKA slap on the wrist fine instead.

        The cost/benefits for corporations are out of whack in the US. That's why you guys are bei

    • by dvice ( 6309704 )

      "to train"

      Taxis won't be hallucinating; they're trained to navigate in an environment that was created by a hallucinating AI.

  • Isn't this what Tesla has been pursuing for years now?

    In other words, train a neural net on enough examples of what a "good human" driver would do, and eventually you will end up with a model that will do that for almost all scenarios. You will always be able to fool it, but less often than with a human driver.

    No doubt the media still thinks that Waymo is still "ahead" of Tesla in some way.

    • Of course Waymo is ahead. They love Google and "AI".

    • Well, they are ahead in one way: their technology works. Waymo has thousands of hours of successful driving and is actively running full self-driving taxi services in Phoenix and San Francisco.

      Meanwhile Tesla has what's basically a fancy version of cruise control, except at very, very low parking-lot speeds, and even then the Tesla Summon feature has a nasty habit of causing random accidents in parking lots...

      So as long as you ignore the fact that Tesla is at best 15 years behind waymo yeah they're de
      • Also as far as I can tell when Steve Jobs died his reality distortion field didn't go with him and transferred to Leon.

        I assume you meant "Elon?"

      • Meanwhile Tesla has what's basically a fancy version of Cruise control except that very very low parking lot speeds and even then the Tesla summon feature has a nasty habit of causing random accidents in parking lots...

        I don't own Tesla FSD but got the free trial twice now. It has driven me places just fine, using freeways and city streets with complex intersections and traffic situations. Waymo isn't even allowed on freeways and can only operate in their supervised areas. I don't see how you can even compare the products. Tesla has such vastly superior capability.

        Haven't used the summon feature yet and I'm not really that interested.

    • Oh yeah, that's why Tesla was showing off human operated remote controlled robots at its fake demo the other day. BTW, would you like to own a bridge? I can get you one, it's cheap!
    • by Cyberax ( 705495 )

      No doubt the media still thinks that Waymo is still "ahead" of Tesla in some way.

      LOL. Waymo _is_ ahead. They are operating actual real-life taxis in SF, LA, and Phoenix right now. You just download the app, and request a ride. And it just works.

      • Yeah, Really. [fox7austin.com]

        In the probe, which began last Monday, NHTSA is looking at a total of 22 incidents, including 17 crashes or fires. No one was hurt or killed.

        Notably, none of the Waymo cars collided with other moving vehicles, but they did crash into all sorts of other things like chains, gates and parked cars. Some cars disobeyed traffic lights, entered construction zones, or even drove on the wrong side of the road with cars coming.

        • by Cyberax ( 705495 )
          Yes, really. Try it if you're there. It works amazingly well.

          Of course, there are problems. But it _works_, while Tesla only promises it within "two years".
    • I agree. I think the hope is that building on a language model would let it handle situations it had only read about, leveraging general knowledge of the world to cope with the "long tail" (outlier) events that tend to stymie a purely data-driven approach. Empirically they show benchmark results and several examples to imply that it does handle unusual situations well, but I'm not seeing a very clear link to knowledge and reasoning from the LLM being applied to the camera data.

      Anyways

      • by ceoyoyo ( 59147 )

        People who read a lot are certainly better drivers. Right?

        I assume Google is actually hoping that a more integrated model, which the OP correctly points out Tesla has been using, will work better than many separated subsystems, and the Gemini stuff is mostly hoping the visual training of the MLLM, rather than the language part, will give them a head start.

        Or they're pulling an OpenAI: "have big expensive hammer, must hit things."

        • This is just research, so I do think they're just experimenting with building a driving model on a language model to see if useful knowledge bleeds through.

          On a lark, I just asked chatGPT:

          if I were driving down the road on the freeway behind a loaded-up pickup truck and some large rectangular object about the size of the bed and a foot thick fell off the back, what is it most likely? Answer in 5 words or less

          ChatGPT said:

          Plywood or large mattress

          not bad.

          • You are making the rather outrageous assumption the sensors would pick up "about the size of a bed". Also, how long did it take chatgpt to return the result? Was it less than the tenth of a second needed to react in that situation?
            • "The" bed, as in the truck bed it's falling out of.

              But yes, I would be surprised if they could run full-sized Gemini even onboard a tractor-trailer with half of it devoted to a generator anytime soon.

    • by dvice ( 6309704 )

      No. Tesla trains with real data. Waymo is using real data to train an AI that will generate new scenarios, which are then used to train the car. The latter method allows generating scenarios that have never actually occurred.

      Waymo is way ahead of Tesla. Tesla started with the easy part (long roads with high speeds); Waymo knew the hardest part is driving in a city at low speeds, so they have spent years perfecting that area.

  • For example, EMMA couldn't incorporate 3D sensor inputs from lidar or radar, which Waymo said was "computationally expensive." And it could only process a small amount of image frames at a time.

    I've always thought that, when truly autonomous driving was developed, it would probably be too computationally expensive to be useful, either because of the expensive hardware needed or the amount of energy required, or both.

  • and Google was desperately looking for a use case for their multi-billion dollar investment in AI...

    • by dvice ( 6309704 )

      For use cases, look into AlphaFold2 and AlphaFold3. They were worth the Nobel prize and they are the focus of Google AI research.

  • Carry me on its back everywhere I want to go.
  • End-to-end autonomous driving systems have been proposed and even successfully demonstrated for many years. The key reason why no one seriously considers them for practical systems is that they are entirely opaque and cannot be audited, diagnosed, or debugged. If there is a problem, the system cannot be fixed and has to be scrapped. For a smartphone, this is no big deal. For safety-critical systems, it's a showstopper. This is good research, but it will never be greenlit for a commercial system. Well,

  • From the getgo, self-driving cars were made to kill people, just ask DARPA
