Google DeepMind Creates Super-Advanced AI That Can Invent New Algorithms

An anonymous reader quotes a report from Ars Technica: Google's DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company's Gemini large language models (LLMs), with the addition of an "evolutionary" approach that evaluates and improves algorithms across a range of use cases. AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.

According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it. Many of the company's past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results.
DeepMind's AlphaEvolve AI has optimized Google's Borg cluster scheduler, reducing global computing resource usage by 0.7% -- a significant cost saving at Google's scale. It also outperformed specialized AI like AlphaTensor by discovering a more efficient algorithm for multiplying complex-valued matrices. Additionally, AlphaEvolve proposed hardware-level optimizations for Google's next-gen Tensor chips.

The AI remains too complex for public release, but that may change in the future as it gets integrated into smaller research tools.
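
DeepMind has not published AlphaEvolve's code, so the following is only a rough sketch of the generate-evaluate-evolve loop described above, with hypothetical stand-ins: generate_candidates for the Gemini Flash/Pro calls and score for the automatic evaluator.

```python
import random

def evolve(problem, generate_candidates, score, generations=10):
    """Toy generate-evaluate-evolve loop, loosely mirroring the summary above.

    generate_candidates(problem, parent): returns a list of candidate solutions
        (a stand-in for the Gemini Flash/Pro calls; parent is the best solution
        found so far, or None on the first pass).
    score(candidate): returns a float, higher is better
        (a stand-in for AlphaEvolve's automatic evaluator).
    """
    best, best_score = None, float("-inf")
    for _ in range(generations):
        # Ask the "model" for new candidates, seeded with the current best.
        candidates = generate_candidates(problem, best)
        # Score every candidate with the automatic evaluator.
        scored = [(score(c), c) for c in candidates]
        top_score, top = max(scored, key=lambda pair: pair[0])
        # Keep only genuine improvements and iterate on them.
        if top_score > best_score:
            best, best_score = top, top_score
    return best, best_score

if __name__ == "__main__":
    # Toy demo: "solutions" are numbers and the evaluator rewards larger ones.
    gen = lambda problem, parent: [(parent or 0.0) + random.uniform(-1, 2) for _ in range(8)]
    print(evolve("maximize a number", gen, score=lambda c: c))
```

The toy demo only "evolves" a number upward, but the shape of the loop (propose, score, keep the best, iterate) is the part the summary describes.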


Comments Filter:
  • by martin-boundary ( 547041 ) on Wednesday May 14, 2025 @11:36PM (#65377709)
    Rejection sampling on Python code.
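
    Purely as an illustration of what "rejection sampling on Python code" amounts to, here is a minimal sketch (generate_code and passes_tests are hypothetical stand-ins for the model call and the evaluator):

    ```python
    def rejection_sample(generate_code, passes_tests, max_tries=100):
        """Sample candidate programs until one passes the evaluator, then stop."""
        for _ in range(max_tries):
            candidate = generate_code()      # e.g. one LLM completion
            if passes_tests(candidate):      # e.g. run unit tests or a benchmark
                return candidate
        return None  # nothing acceptable within the sampling budget
    ```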
    • Re:TL;DR (Score:5, Informative)

      by CaptQuark ( 2706165 ) on Thursday May 15, 2025 @02:48AM (#65377877)

      Matt Parker of StandUp Maths did an overview of the new findings from AlphaEvolve. Some of the findings are very helpful and some are of the "new ways to pack circles in a box" variety.

      The video explains how even a 1% better algorithm can be worth the effort if it translates to less processing needed or more efficient data centers. Worth a quick viewing.

      https://www.youtube.com/watch?v=sGCmu7YKgPA [youtube.com]

      • Matt Parker is great! He definitely should go on holiday more!
      • The problem is we don't really know what it is doing. It found a new algorithm that saves money in data centers. Great! But there was also significant input from a human expert during the process. How much was from the human, and how much was from the AI?

        We don't know.
        • For now it is created by AI-empowered humans. In the future... who knows. I did not think I would live long enough to see what Ray Kurzweil calls the singularity unfold. Now I am both awed and scared shitless that this will happen in my lifetime after all.

          • Oh I think you brought up some verboten words for /., namely Kurzweil and singularity... but....
            I think there is a lot of interpretation required, but... doesn't the singularity mean the point past which no predictions of the outcomes are possible? If my understanding is even close, I'd say we're there now.

            Personally, I don't find anyone's predictions credible, and with the feedback loops of generative AI in the ecosystem, things seem to just get wonkier by the day.

            Wellll... anyone's predictions except
  • by ebunga ( 95613 ) on Wednesday May 14, 2025 @11:49PM (#65377729)

    If this is as good as they say it is, Google will surely kill it like they've killed every other good product they acquired or somehow actually managed to create.

  • by NewID_of_Ami.One ( 9578152 ) on Wednesday May 14, 2025 @11:57PM (#65377731)

    Any actually good AI/AGI that can move towards the singularity by creating better AI/AGI (ad infinitum) can be recognized by the fact that it will not be publicized or released. Like any good trading strategy.

    If it's publicized, it's to sell you courses on Instagram or to make Ghibli-style images, memes, and chatbots that might be better than yesterday's search engines.

    No one discloses they've found buried treasure so the whole town can come get their share. Except if they are selling shovels, of course. And there's no actual treasure, just a few trinkets they buried.

    • by HiThere ( 15173 )

      Well, they're not releasing this one. "It's too complex, maybe later". So by that argument this may be a good one. What they're doing is running large scale tests, and a bit of PR.

  • Do you want skynet? (Score:5, Interesting)

    by Gravis Zero ( 934156 ) on Wednesday May 14, 2025 @11:59PM (#65377735)

    This is how it starts. You ask it to solve climate change and it starts by designing a helper robot to do work so that people don't have to drive to work. Next thing you know, it's taking out the root cause of climate change: humans. Those helpers don't need to work for humans if there are no humans and now they're all terminators.

    THE END IS NIGH!

    Now if you'll excuse me, I'm going to do the rational thing: warn everyone by painting that very message on some cardboard and yelling at them from the roadside.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      Now if you'll excuse me, I'm going to do the rational thing: warn everyone by painting that very message on some cardboard and yelling at them from the roadside.

      Sounds like the perfect job to be outsourced to a robot

    • You realize that this is DeepMind marketing its own product, right? And that advertisements always over-promise and under-deliver? What is "Super" AI anyway? That's nothing more than marketing-speak.

  • LLMs do not hallucinate because of their "non-deterministic nature".
    The decoding strategy is the only (potentially) non-deterministic step in token generation, and it can be made deterministic or not; that's an operator choice.
    Deterministic logit sampling (greedy) will still hallucinate.
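
    A tiny illustration of that point, using made-up logits rather than a real model: the greedy (argmax) decoding below is fully deterministic, yet it still asserts the wrong capital, because the error lives in the learned distribution, not in the sampling step.

    ```python
    import math, random

    def softmax(logits):
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: v / total for tok, v in exps.items()}

    # Made-up next-token scores a model might assign after "The capital of Australia is".
    logits = {"Sydney": 3.1, "Canberra": 2.9, "Melbourne": 1.0}

    # Greedy decoding: deterministic, always the argmax... which here is wrong.
    greedy_token = max(logits, key=logits.get)

    # Sampled decoding: non-deterministic, draws from the softmax distribution.
    probs = softmax(logits)
    sampled_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    print("greedy:", greedy_token, "| sampled:", sampled_token)
    ```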
  • and then pretending that AI somehow automagically found a better way.

    • Re: (Score:3, Interesting)

      Coming from the group of people that made AlphaFold and AlphaZero, no, you're almost certainly talking out of your ass.
  • THAT is what they have it focusing on: circumventing the Google Ads and Spyware machine. Because they feed each other. Spyware feeds ads, ads feed Google revenue. And MV3 is key to making sure that ad and tracking blockers don't interfere!
  • by gweihir ( 88907 )

    Seriously, these gross lies should not even get reported anymore at this time.

  • If it is trained on human-sourced data, it can only create new algorithms based on prior art. Convince me otherwise.

    • by HiThere ( 15173 )

      You've just described all human culture. I can't convince you otherwise because you're correct, you're just ignoring the fact that humans work the same way (at that gross a level of description).

    • "Standing on the shoulders of giants." All progress is based on the works that came before.

    • I have an argument. An LLM can encode the combined knowledge of several, or many, experts. So some portion of that knowledge is things you never considered or knew. So the AI could propose things you never considered.

      I'm not suggesting it's creative but it could know more than you do.
  • As new devices come along, there is a constant need for drivers. Some of these have very detailed data, others extend existing standards.

    Other components, and this is the Holy Grail of such efforts, are totally closed source. Apple, for instance, revealed nothing about their M1 GPU officially. But a set of header files was leaked, which allowed the Asahi Linux project, through diligent reverse engineering, to gain a good degree of control over the M1 & M2 GPUs. The M3 & M4 GPUs are still unknown quantities.

    Apple a
  • Is Google claiming it achieved meta-cognition?
  • That sounds like a hyperventilating car salesman from the 1970s.

    "Our product is so advanced, it's SUPER advanced."

    Marketing people are going to do what marketing people do.

  • An algorithm is nothing more than a pattern of steps to achieve a goal. AIs are all about generating patterns that seem plausible, whether it's image generation, audio generation, text generation, or...algorithm generation. Just another kind of pattern to simulate.

    My question would be, do these algorithms actually work? Are they actually any better than the ones we already have? Another aspect of AIs is that they hallucinate. There's no reason an algorithm-generating AI wouldn't hallucinate. You know, like
