

Google DeepMind Creates Super-Advanced AI That Can Invent New Algorithms
An anonymous reader quotes a report from Ars Technica: Google's DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company's Gemini large language models (LLMs), with the addition of an "evolutionary" approach that evaluates and improves algorithms across a range of use cases. AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.
According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it. Many of the company's past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results. DeepMind's AlphaEvolve AI has optimized Google's Borg cluster scheduler, reducing global computing resource usage by 0.7% -- a significant cost saving at Google's scale. It also outperformed specialized AI like AlphaTensor by discovering a more efficient algorithm for multiplying complex-valued matrices. Additionally, AlphaEvolve proposed hardware-level optimizations for Google's next-gen Tensor chips.
The AI remains too complex for public release, but that may change in the future as it gets integrated into smaller research tools.
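DeepMind has not released AlphaEvolve's code, so the sketch below is only an illustration, in Python, of the generate-evaluate-evolve loop the summary describes. The functions propose_candidates, mutate, and score are hypothetical stand-ins for the Gemini generation step and the automatic evaluator; the toy "program" here is just a number being nudged toward pi.

```python
import random

def propose_candidates(parent, n=4):
    """Hypothetical stand-in for the LLM step: in the real system this
    would ask Gemini Flash / Gemini Pro for n edited versions of the
    parent program."""
    return [mutate(parent) for _ in range(n)]

def mutate(value):
    """Placeholder mutation; an actual agent would rewrite source code."""
    return value + random.gauss(0, 0.1)

def score(value):
    """Stand-in for the automatic evaluator described in the summary,
    e.g. measured runtime or accuracy of a candidate (higher is better)."""
    return -abs(value - 3.14159)  # toy objective: approximate pi

def evolve(seed, generations=200):
    best = seed
    for _ in range(generations):
        candidates = propose_candidates(best) + [best]
        # Keep whichever candidate the evaluator scores highest and iterate.
        best = max(candidates, key=score)
    return best

print(evolve(seed=0.0))
```

The point of the evolutionary framing is that the evaluator, not the language model, decides which candidate survives each round.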
TL;DR (Score:3)
Re: (Score:2)
Matt Parker of StandUp Maths did an overview of the new findings from AlphaEvolve. Some of the findings are very helpful, and some are of the "new ways to pack circles in a box" variety.
The video explains how even a 1% better algorithm can be worth the effort if it translates to less processing needed or more efficient data centers. Worth a quick viewing.
https://www.youtube.com/watch?v=sGCmu7YKgPA [youtube.com]
Re: (Score:2)
How much longer until they cancel it? (Score:3)
If this is as good as they say it is, Google will surely kill it like they've killed every other good product they acquired or somehow actually managed to create.
Re: (Score:2)
At least wait until it has resolved the crisis in cosmology.
If it's publicized or released it's not good AI bu (Score:2)
Any actually good AI/AGI that can move toward the singularity by creating better AI/AGI (ad infinitum) can be recognized by the fact that it will not be publicized or released. Like any good trading strategy.
If it's publicized, it's to sell you courses on Instagram or make Ghiblis, memes, and chatbots that might be better than yesterday's search engines.
No one discloses they've found buried treasure so the whole town can come get their share. Except if they're selling shovels, of course. And there's
Do you want skynet? (Score:5, Interesting)
This is how it starts. You ask it to solve climate change and it starts by designing a helper robot to do work so that people don't have to drive to work. Next thing you know, it's taking out the root cause of climate change: humans. Those helpers don't need to work for humans if there are no humans and now they're all terminators.
THE END IS NIGH!
Now if you'll excuse me, I'm going to do the rational thing: warn everyone by painting that very message on some cardboard and yelling at them from the roadside.
Surprise, article gets basic aspect of LLMs wrong. (Score:2)
Decoding strategy is the only (potentially) non-deterministic step in token generation, and it can be made deterministic or not; that's the operator's choice.
Deterministic decoding (greedy argmax over the logits) will still hallucinate.
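A minimal sketch of the distinction the parent comments are drawing, assuming nothing about Gemini's internals: greedy decoding (argmax over logits) is deterministic, while temperature sampling draws from the softmax distribution and is not. Either way the model can still emit a wrong token, which is the point about hallucination.

```python
import numpy as np

def next_token(logits, temperature=None, rng=np.random.default_rng()):
    """Pick the next token id from a vector of logits.

    temperature=None -> greedy decoding (deterministic argmax).
    temperature>0    -> sample from the softmax distribution (non-deterministic).
    """
    if temperature is None:
        return int(np.argmax(logits))
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.3])
print(next_token(logits))                   # always token 0
print(next_token(logits, temperature=1.0))  # varies from run to run
```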
Enough calculation power to try every combination (Score:2)
and then pretending that AI somehow automagically found a better way.
Re: (Score:2)