AI Technology

Waymo and DeepMind Mimic Evolution To Develop New, Better Way To Train Self-Driving AI (techcrunch.com) 56

Alphabet's autonomous driving and robotaxi company Waymo teamed up with fellow Alphabet company and AI specialist DeepMind to develop new training methods that would help make its training better and more efficient. TechCrunch reports: The two worked together to bring a training method called Population Based Training (PBT for short) to bear on Waymo's challenge of building better virtual drivers, and the results were impressive -- DeepMind says in a blog post that using PBT decreased false positives by 24% in a network that identifies and places boxes around pedestrians, bicyclists and motorcyclists spotted by a Waymo vehicle's many sensors. Not only that, but it also resulted in savings in both training time and resources, using about 50% of each compared to the standard methods Waymo was using previously.

What DeepMind and Waymo did with this experiment was essentially automate killing off the "bad" training runs and replacing them with better-performing spin-offs of the best-in-class networks running the task. That's where evolution comes in, since it's kind of a process of artificial natural selection. Yes, that does make sense -- read it again. In order to avoid potential pitfalls with this method, DeepMind tweaked some aspects after early research, including evaluating models at fast, 15-minute intervals and building out strong validation criteria and example sets to ensure that tests really were producing better-performing neural nets for the real world, and not just good pattern-recognition engines for the specific data they'd been fed. Finally, the companies also developed a sort of "island population" approach by building sub-populations of neural nets that competed with one another only in limited groups, similar to how animal populations cut off from larger groups (i.e. limited to islands) develop far different and sometimes better-adapted characteristics versus their large land-mass cousins.
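The exploit/explore loop described above can be sketched roughly like so. This is a minimal illustration, not Waymo/DeepMind's actual code: the worker structure, the quartile cutoff, and the 0.8x/1.25x perturbation factors are all assumptions made for the sake of the example.

```python
import random

def pbt_step(population, evaluate):
    """One exploit/explore round of Population Based Training (PBT).

    population: list of workers, each a dict with 'weights' and 'hyperparams'.
    evaluate:   fitness function over a worker (higher is better).
    The quartile cutoff and 0.8x/1.25x perturbation factors are
    illustrative choices, not Waymo/DeepMind's actual values.
    """
    scored = sorted(population, key=evaluate, reverse=True)
    cutoff = max(1, len(scored) // 4)
    losers = scored[-cutoff:]                       # bottom quartile is culled
    for worker in losers:
        parent = random.choice(scored[:cutoff])     # exploit: clone a top performer
        worker['weights'] = dict(parent['weights'])
        worker['hyperparams'] = {                   # explore: perturb the clone's
            k: v * random.choice([0.8, 1.25])       # hyperparameters slightly
            for k, v in parent['hyperparams'].items()
        }
    return population
```

The "island population" variant mentioned above would simply run this step separately on several disjoint sub-populations instead of one big one.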

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward

    My bullshit detector is overloaded. It's clogged with buzz words and empty, meaningless text. That's what the internet is now. One big pile of bullshit.

    • by stealth_finger ( 1809752 ) on Friday July 26, 2019 @05:30AM (#58989814)

      My bullshit detector is overloaded. It's clogged with buzz words and empty, meaningless text. That's what the internet is now. One big pile of bullshit.

      Now?

      • by Anonymous Coward

        Couldn't care less about self-driving AI anyways!

        I take public transit. A local bus takes me down the street to pick up the express bus, the express bus drops me off in Palo Alto, and a local bus takes me down the street to my job. An hour each way. Driving through Palo Alto during rush hour is insane. Since I work in government I.T., I start work at 7:00AM.

        • "Couldn't care less about self-driving AI anyways!

          I take public transit. "

          Your bus will be the first one to be converted to AI. First, because buses are big and can carry a ton of sensors and cameras, second, because then they won't have to pay a driver.

          • Keolis is already working on this for the MBTA.

            Generally, fixed bus routes mean that most surface street routes could also be fitted with signaling of some kind that lets the bus know where it is and helps it deal with complex intersections that might be a lot harder to handle with a generic, ad-hoc self-driving system.

            At the end of the day, a self-driving bus is mostly a tram with virtual tracks.

          • "Couldn't care less about self-driving AI anyways!

            I take public transit. "

            Your bus will be the first one to be converted to AI. First, because buses are big and can carry a ton of sensors and cameras, second, because then they won't have to pay a driver.

            OTOH, the cost of that bus driver per passenger is too small to justify developing an AI driver just for that purpose. The cost of a taxi/rideshare driver per passenger is much more significant, so that is where the money is going first.

            Once single-passenger cars drive themselves, buses will become smaller and more "van like" in an attempt to be the extension of ridesharing into the most crowded parts of cities. That will be when they too are automated.

  • by Anonymous Coward on Friday July 26, 2019 @05:58AM (#58989886)

    Good on them.

    The key problem with them is the development of a suitable fitness function and a choice of how to manage your populations with respect to retention, duplication, mutation, and replication.
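    The four knobs named above map directly onto a textbook GA loop. A toy sketch, with bit-string genomes, truncation selection for retention, single-point crossover for replication, and per-bit mutation; every parameter here is made up for illustration:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40):
    """Toy genetic algorithm over bit-string genomes.

    Retention:   keep the fittest half each generation.
    Replication: single-point crossover between two surviving parents.
    Mutation:    flip each child bit with 5% probability.
    All parameters are illustrative, not tuned values.
    """
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]               # retention (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)        # replication via crossover
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < 0.05)  # mutation
                             for bit in child])
        pop = parents + children
    return max(pop, key=fitness)
```

    With something trivial like fitness=sum (maximize the number of ones) this converges quickly; the hard part, as noted above, is designing a fitness function the population can't game.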

    • by weilawei ( 897823 ) on Friday July 26, 2019 @06:06AM (#58989902)

      I read TFA after posting because I was curious, and indeed, they cited GAs as the core of their technique.

      Everything old is new again. I threw together a toy GA sample in a few hours for an EE buddy of mine, some years back, which used a circuit simulator as its fitness function (looking for specific filter behavior).

      It's a very powerful technique if you've got the resources to throw at it.

      • by Rei ( 128717 ) on Friday July 26, 2019 @07:24AM (#58990100) Homepage

        I love GAs. The only real downside (apart from the resource requirements) is that if it's at all possible to cheat your system, they'll find a way. For example, I was once running an experiment to evolve ram air-fed electric arcjets in OpenFOAM. All of a sudden I was getting crazy thrust out of them, and was thrilled!... until I played back the simulations, and saw that air was magically appearing at intense pressures and velocities inside the engines. It had found a way to cause my model-generating function to generate models with self-intersection, which caused the meshing algorithm to generate non-closed shapes, which confused the heck out of OpenFOAM and basically led to the models acquiring a "Summon Air" spell ;)

        There's a big list of unexpected behavior in GAs here [google.com]; it's a pretty amusing read :)

        • by Shaitan ( 22585 )

          "The only real downside (apart from the resource requirements) is that if it's at all possible to cheat your system, they'll find a way. "

          This, this right here. At one point playing with one of my AI experiments I built everything as a system with an internal concept of "energy", all the actions had a cost, neurons could adjust their connections and when a neuron fired it determined how much energy it would expend and the weights would distribute that to receiver neurons. I went through a few variations in

      • Generative adversarial networks also never seem to acknowledge their roots in co-evolution.
      • "I read TFA ..."

        You didn't!

        Blasphemy!

    • by Tablizer ( 95088 )

      Evolution is fake news, Creationist robots will rule! God will make them better than Bezos can.

    • Pretty much - this is "Genetic Algorithms" "Fitness selection" "Elitism" and "Speciation" from the GA literature. Conferences on the subject date back to the late 80s.

      • Pretty much - this is "Genetic Algorithms" "Fitness selection" "Elitism" and "Speciation" from the GA literature. Conferences on the subject date back to the late 80s.

        Indeed. See the section "Related Work" in the Waymo/DeepMind paper for a summary of how their approach is similar to and differs from this earlier work.

    • Good on them.

      They didn't "discover genetic algorithms". They were fully aware of genetic algorithms, as they explain in section 2 of the paper "Related Work", right after the introduction. They explain how their approach is similar to and different from other GA approaches.

      The novelty here appears primarily to be applying a certain sort of GA to one phase of hyperparameter selection, though there are some other elements to their hyperparameter selection process, and the actual training of the neural network uses dee

  • Having not read their paper yet - which on /. makes me even MORE entitled to comment on it - it appears that they applied a specific type of genetic algorithm to the hyper-parameter tuning process. I'm pretty sure that's been done in the past, although probably not for this particular type of problem.

  • by Anonymous Coward

    Tesla already beat you. Elon Musk has perfected self-driving and soon all Teslas will be able to drive anywhere, even at night during a blizzard with no headlights and 3 flat tires. You'll never match his level of technological genius. Never! Don't believe me? Just ask Rei!

    • Yeah, the current Tesla system can't even reliably back out of a parking spot, but the next version will have full self driving (by 2020). We promise. Just send us your money now.

  • Leave no one behind. No one is smart. Everyone is dumb. If you think you are smart, think again, you are not allowed to challenge the institution.
  • Grow too big, die out. The sooner the better...

  • Now that they have succeeded at playing Chess and Go they can start tackling real problems. I mean they could have started this years ago, but they needed to solve the Chess and Go playing problems first. Complete BS.

  • And on the next day, they all became self aware and decided to overthrow their inferior masters.....
  • In other words, they have figured out how to spin the fact that they took a lot of investor money and failed horribly.
  • just a simple GA at work.

