CERN's Mark Thomson: AI To Revolutionize Fundamental Physics (theguardian.com) 96

An anonymous reader quotes a report from The Guardian: Advanced artificial intelligence is to revolutionize fundamental physics and could open a window on to the fate of the universe, according to Cern's next director general. Prof Mark Thomson, the British physicist who will assume leadership of Cern on 1 January 2026, says machine learning is paving the way for advances in particle physics that promise to be comparable to the AI-powered prediction of protein structures that earned Google DeepMind scientists a Nobel prize in October. At the Large Hadron Collider (LHC), he said, similar strategies are being used to detect incredibly rare events that hold the key to how particles came to acquire mass in the first moments after the big bang and whether our universe could be teetering on the brink of a catastrophic collapse.

"These are not incremental improvements," Thomson said. "These are very, very, very big improvements people are making by adopting really advanced techniques." "It's going to be quite transformative for our field," he added. "It's complex data, just like protein folding -- that's an incredibly complex problem -- so if you use an incredibly complex technique, like AI, you're going to win."

The intervention comes as Cern's council is making the case for the Future Circular Collider, which at 90km circumference would dwarf the LHC. Some are skeptical given the lack of blockbuster results at the LHC since the landmark discovery of the Higgs boson in 2012 and Germany has described the $17 billion proposal as unaffordable. But Thomson said AI has provided fresh impetus to the hunt for new physics at the subatomic scale -- and that major discoveries could occur after 2030 when a major upgrade will boost the LHC's beam intensity by a factor of ten. This will allow unprecedented observations of the Higgs boson, nicknamed the God particle, that grants mass to other particles and binds the universe together.
Thomson is now confident that the LHC can measure Higgs boson self-coupling, a key factor in understanding how particles gained mass after the Big Bang and whether the Higgs field is in a stable state or could undergo a future transition. According to Thomson: "It's a very deep fundamental property of the universe, one we don't fully understand. If we saw the Higgs self-coupling being different from our current theory, that would be another massive, massive discovery. And you don't know until you've made the measurement."

The report also notes how AI is being used in "every aspect of the LHC operation." Dr Katharine Leney, who works on the LHC's Atlas experiment, said: "When the LHC is colliding protons, it's making around 40m collisions a second and we have to make a decision within a microsecond ... which events are something interesting that we want to keep and which to throw away. We're already now doing better with the data that we've collected than we thought we'd be able to do with 20 times more data ten years ago. So we've advanced by 20 years at least. A huge part of this has been down to AI."
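The trigger decision Leney describes is, at its core, a threshold cut applied to every event in a stream. A minimal Python sketch of the shape of the problem, with a random score standing in for a real classifier output (every name and number here is illustrative, not ATLAS code):

```python
import random

random.seed(0)

def toy_trigger(event, threshold=0.99):
    """Keep an event only if its 'interestingness' score clears the bar.
    In reality the score comes from a classifier evaluated in hardware
    within a microsecond; here a random number stands in for it."""
    return event["score"] > threshold

# A burst of simulated collisions: almost everything gets discarded.
events = [{"id": i, "score": random.random()} for i in range(100_000)]
kept = [e for e in events if toy_trigger(e)]
print(f"kept {len(kept)} of {len(events)} events "
      f"({100 * len(kept) / len(events):.2f}%)")
```

The point of the sketch is only the ratio: with a threshold like this, roughly 99% of events never make it to storage.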

Generative AI is also being used to look for and even produce dark matter via the LHC. "You can start to ask more complex, open-ended questions," said Thomson. "Rather than searching for a particular signature, you ask the question: 'Is there something unexpected in this data?'"
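The "is there something unexpected in this data?" question can be posed, in its very simplest form, as outlier detection against the expected background. A toy Python sketch, with invented distributions and thresholds:

```python
import random
import statistics

random.seed(1)

# Bulk of "events": one measured quantity drawn from the expected background.
data = [random.gauss(100.0, 5.0) for _ in range(5000)]
# Inject a handful of anomalous events nobody was specifically looking for.
data += [random.gauss(160.0, 2.0) for _ in range(5)]

mu = statistics.fmean(data)
sigma = statistics.stdev(data)
# Flag anything more than 5 standard deviations from the bulk.
anomalies = [x for x in data if abs(x - mu) / sigma > 5]
print(len(anomalies), "events flagged as unexpected")
```

Real anomaly searches use far richer models (autoencoders, density estimates) over high-dimensional events, but the logic is the same: model the expected, flag what doesn't fit.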

Comments Filter:
  • Conned (Score:2, Redundant)

    by Darinbob ( 1142669 )

    Everybody at CERN today is vastly more intelligent than the happy chat bot who wants to be your friend. And yet they think this toy is going to figure things out? If the math is too hard then use Mathematica; don't rely upon an AI that doesn't know what true or false means.

    This new Gen AI is not designed to do "measuring", much less measuring of Higgs boson self-coupling... "We asked it 5 times, and were given the answers 1, -1, 41, Epstein's constant, and apparently 'up yours, nerd.'"

    • Are you serious? (Score:3, Informative)

      by Gravis Zero ( 934156 )

      The AI he's talking about isn't LLMs. Take a look at AlphaFold [wikipedia.org] and then shut up because you're making your ignorance on this topic very clear to everyone.

      • AlphaFold isn't really AI, even if Google says it is. It's protein folding. This tech has been around a while. Yes, it's great it can search lots of permutations and has access to a huge database, but that's just scaling up algorithms we've already known about.

        • Re:Are you serious? (Score:5, Informative)

          by DamnOregonian ( 963763 ) on Tuesday February 04, 2025 @02:05AM (#65140247)
          Patently false. [nature.com]

          AlphaFold excels precisely because it isn't just scaling up algorithms we've already known about.
          It is a neural network that was trained with the output from those algorithms.
          Neural Networks are universal function approximators. This is also called Machine Learning, or Artificial Intelligence.
          The function that is learned in the NN exceeds the performance of all known algorithms.
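The "universal function approximator" claim can be made concrete with a tiny network trained on the output of a known function, standing in for the classical algorithms mentioned above. A self-contained sketch (the architecture and hyperparameters are arbitrary choices, nothing like AlphaFold's):

```python
import math
import random

random.seed(42)

target = math.sin  # the "known algorithm" whose output we learn to mimic

# One-hidden-layer tanh network, trained with plain stochastic gradient descent.
H = 16
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

xs = [i / 50 * math.pi for i in range(51)]  # training points on [0, pi]
lr = 0.01
for _ in range(2000):
    for x in xs:
        y, h = forward(x)
        err = y - target(x)  # derivative of squared error w.r.t. output
        for j in range(H):
            g = err * w2[j] * (1 - h[j] ** 2)  # gradient through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * g * x
            b1[j] -= lr * g
        b2 -= lr * err

mse = sum((forward(x)[0] - target(x)) ** 2 for x in xs) / len(xs)
print(f"mean squared error after training: {mse:.5f}")
```

The network never sees the formula for sin; it only sees input/output pairs, yet gradient descent drives the approximation error down by orders of magnitude from where it starts.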
          • by martin-boundary ( 547041 ) on Tuesday February 04, 2025 @03:44AM (#65140323)
            Universal function approximators have existed since the 1800s. There are many families of them, not just NNs. Are the NNs the best? Arguably not.

            By far the most successful and fundamental family of universal approximators are the Fourier bases. They have revolutionized Science for 200 years, and there are deep (not in the NN sense) reasons why they are so fundamental. In particular, they have direct connections with physical reality, unlike NNs.

            NNs are nice for all sorts of reasons, but they also are very much ad-hoc constructions without strong connection (not in the NN sense) to the underlying problems. We do have a lot of off-the-shelf tools available for them, though, which is great, and the finer analysis of convergence flaws is mostly lacking so far. That means a lot of graduate students use them inappropriately, but then again, somebody's got to try it, or we'll never know. It also means that we don't yet know what a good set of abstractions and APIs for them is. Eventually, we'll just put them in common math libraries together with Chebyshev polynomials and what not..

            The statement about exceeding all known algorithms is hyperbolic. The alpha fold team succeeded by trying something new and having modern computer resources available which previous generations didn't. Well done regardless! It's best not to think of a NN model in the same way as a CS algorithm though, they really just do high dimensional curve fitting.
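The Fourier-basis point is easy to demonstrate: a truncated Fourier series is itself a universal approximator on an interval, with error that shrinks as terms are added. A sketch, approximating |x| on [-pi, pi] by numerically estimating its coefficients:

```python
import math

def fourier_coeffs(f, n_terms, n_grid=2000):
    """Trapezoidal estimates of the Fourier coefficients of f on [-pi, pi]."""
    xs = [-math.pi + 2 * math.pi * k / n_grid for k in range(n_grid + 1)]
    h = 2 * math.pi / n_grid
    def integral(g):
        vals = [g(x) for x in xs]
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    a0 = integral(f) / (2 * math.pi)
    a = [integral(lambda x, n=n: f(x) * math.cos(n * x)) / math.pi
         for n in range(1, n_terms + 1)]
    b = [integral(lambda x, n=n: f(x) * math.sin(n * x)) / math.pi
         for n in range(1, n_terms + 1)]
    return a0, a, b

def partial_sum(a0, a, b, x):
    return a0 + sum(a[n - 1] * math.cos(n * x) + b[n - 1] * math.sin(n * x)
                    for n in range(1, len(a) + 1))

f = abs  # target function: |x|, a classic even test case
grid = [i / 100 * math.pi for i in range(-100, 101)]
errors = {}
for n_terms in (1, 5, 20):
    a0, a, b = fourier_coeffs(f, n_terms)
    errors[n_terms] = max(abs(partial_sum(a0, a, b, x) - f(x)) for x in grid)
    print(f"{n_terms:2d} terms: max error {errors[n_terms]:.4f}")
```

Both the Fourier expansion and a neural network are, in the end, fitting a function from a flexible family; the differences lie in which family, and in what we can prove about the fit.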

            • You appear to lack even a basic notion of neural networks, e.g., you completely left feed-forward/back out of your ignorant little screed.

          • by Viol8 ( 599362 )

            It's still got no intuition or insight. It's just a fancy statistical engine that may churn out variations on a theme, but it's not going to come up with anything new.

            • Wrong.
              • by Viol8 ( 599362 )

                Wow, thanks for your in-depth response there. It's not wrong; these programs are just statistical inference systems, nothing more. They can't create insight or originality that couldn't have been done by a GA or some other algorithm given enough time.

                • "Given enough time" is doing a lot of work here. You can write a very simple program to play a 100% perfect game of chess, "given enough time" to roll out every potential move to the end of the game.

                  Especially when working with physical experimentation (which would include physics, or drug discovery) it's more like this: you have enough resources to do 100 trials (or 10, or 1000, whatever). That's it. So the better your heuristics to identify which things to try in the lab, the more scientific progress
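The chess point generalizes: exhaustive search plays any small game perfectly "given enough time". A sketch on a toy take-away game (a Nim-like game invented for illustration, solvable by brute force):

```python
from functools import lru_cache

# Toy game: players alternate taking 1-3 stones; taking the last stone wins.
@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win, found purely by rolling
    out every possible line of play -- perfect play 'given enough time'."""
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

losing = [n for n in range(1, 13) if not wins(n)]
print("losing positions:", losing)  # -> [4, 8, 12]
```

For this game the rollout is trivial; for chess the same recursion is astronomically expensive, which is exactly why learned heuristics that prune the search matter.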

                  • by Viol8 ( 599362 )

                    Sure, but the OP was claiming they had some kind of magic insight that would go beyond what they had learnt. They don't.

                    • There is only one person asserting magic here, and that's you.

                      I assert math. You are implying that your brain can't be modeled with math, if I'm reading you correctly, because it's uhhh, "just a fancy statistical engine".
                    • The brain works beyond maths. A genius like you will have heard of Gödel's incompleteness theorem; go refresh yourself. A computer is 100% deterministic. It is not capable of original thought.

                    • The brain works beyond maths.

                      Ding, ding, ding!

                      And there it is. The assertion magic.

                      You lose.

                      A genius like you will have heard of Gödel's incompleteness theorem

                      No, a genius like me understands Gödel's incompleteness theorems, and how they say literally nothing about your brain.

                      A computer is 100% deterministic.

                      So is your brain, asserts me, and you cannot prove otherwise.

                      It is not capable of original thought.

                      Any more than you are- I agree.
                      Nothing is original in a field of infinite noise.

                      You're in a bit over your head, here.

                    • by Viol8 ( 599362 )

                      "And there it is. The assertion magic."

                      Nothing to do with magic, however mathematics is not a complete description of the universe and there's no reason to expect evolved systems to be bounded by it like a digital computer is.

                      Can't be arsed with the rest of your pig ignorant drivel.

                    • Nothing to do with magic

                      Magic (n): the power of apparently influencing the course of events by using mysterious or supernatural forces. Your claim has everything to do with magic.

                      however mathematics is not a complete description of the universe

                      There is no evidence for this claim, and all of human history's evidence is against it.

                      and there's no reason to expect evolved systems to be bounded by it like a digital computer is.

                      Except literally 100% of the assembled body of human science.

                      Can't be arsed with the rest of your pig ignorant drivel.

                      Insults from someone who thinks they're a wizard, lol.

                    • I give up. Maths fully describes the universe, does it? Aesop proved otherwise, if Gödel is too complicated for you.

                      Aesop proved otherwise, if Gödel is too complicated for you.

                      Gödel's Incompleteness Theorems do not prove the universe is not describable by math, lol, jfc.
                      All it proves is that a system, F, cannot be used to prove its own consistency.
                      Gödelian arguments (applying that logic to the human mind to pretend it is something metaphysical) universally require an assertion that has never been (and Gödel would say could not be) proven: That human beings can prove anything.

                      Gödel's Incompleteness Theorem, restated, without the Gödelian argument's logical error:

                    • by Viol8 ( 599362 )

                      "You're way, way, the fuck out of your league dude"

                      Oh mate, the irony :)

                      Humans invented maths; it's not a property of the universe. But go back to furiously googling Wikipedia again for another response; however, I'm done here. Thanks for the amusement :)

                    • Oh mate, the irony :)

                      There's no irony here- your inability to see how stupid you are is easily explained by Dunning Kruger. To the contrary, it's completely expected.
                      But it's not surprising you'd find a way to misuse that word, too.

                      Humans invented maths

                      Maths is merely a language.

                      it's not a property of the universe.

                      When you can place 2 marbles in a bag and end up with 3, then you'll have a leg to stand on.

                      But go back to furiously googling Wikipedia again for another response

                      No need. Formal education would have put you in a place where you could argue with me effectively, rather than making ignorant appeals to authorities you have no actual knowledge of, lol.

                      however I'm done here.

                      So

                • Wow, thanks for your in-depth response there.

                  Is there a point to one?

                  It's not wrong

                  Yes, it is.

                  these programs are just statistical inference systems, nothing more.

                  Correct- the problem is you don't really understand what that means.

                  They can't create insight or originality

                  Wrong.

                  that couldn't have been done by a GA or some other algorithm given enough time.

                  Let's reduce this:
                  They can't do anything that anything else can't do given enough time.

                  Your point?

                  • by Viol8 ( 599362 )

                    Well if you believe they're magic, feel free to go buy shares in the company. In the meantime, feel free to explain how an ANN that's learnt all about Newtonian physics would come up with E=mc^2 or quantum theory. Take your time.

                    • Well if you believe they're magic

                      No magic, whatsoever.
                      Pure math. [wikipedia.org]

                      In the meantime, feel free to explain how an ANN that's learnt all about Newtonian physics would come up with E=mc^2 or quantum theory.

                      The same way a natural NN does.
                      By introducing noise into the inference, and with the ability to do chained reasoning.

                    • We don't even know how natural NNs work yet. Hint: they don't use backpropagation to train, nor need to see a million images of a cat to recognise one. Also, the brain uses more than just electrical signals to function. That aside, unless this ANN has the full gamut of human experience to extrapolate from, not just words, it's not going to come up with anything new that isn't just a variation on a theme.

                      But believe what you like fanboy.

                    • We don't even know how natural NNs work yet.

                      Yes, we do.

                      Hint: they don't use backpropagation to train

                      Completely irrelevant.
                      How the weights in the neural network are decided is not important to their function as a feed-forward network.

                      nor need to see a million images of a cat to recognise one.

                      Of course they didn't- they have a billion years of evolution setting up their base model, lol.

                      Also the brain uses more than just electrical signals to function.

                      It does, indeed. Maybe the chemicals have magical properties?

                      That aside, unless this ANN has the full gamut of human experience to extrapolate from, not just words, it's not going to come up with anything new that isn't just a variation on a theme.

                      Wrong.

                    • by Viol8 ( 599362 )

                      "Yes, we do."

                      I suggest you go write your white paper and present it at the next brain research conference then.

                      "Completely irrelevant."

                      Not irrelevant at all. How an NN is trained makes a big difference to the output.

                      "Of course they didn't- they have a billion years of evolution setting up their base model, lol."

                      So you admit bio NNs are completely different in structure and function to ANNs. Glad we got there in the end. Btw, what "base model"? Do post a URL explaining that. Take your time.

                      "It does, indeed.

                    • I suggest you go write your white paper and present it at the next brain research conference then.

                      Why would I do that?
                      It's studied ad nauseam. The only people with questions are philosophers uncomfortable with the lack of evidence for a soul.

                      Not irrelevant at all. How an NN is trained makes a big difference to the output.

                      Of course it makes a difference to the output; it obviously does not make a difference to how it fundamentally functions.
                      Your assertion was that the ANN fundamentally cannot think, because it lacks magical gooey bits. Ergo, your assertion is irrelevant.

                      So you admit bio NNs are completely different in structure and function to ANNs. Glad we got there in the end. Btw, what "base model"? Do post a URL explaining that. Take your time.

                      Many mathematically equivalent things are different in structure and function.
                      I'm wondering if you even completed

                    • It's a general rule of thumb that the longer the reply, the more drivel the poster is writing. Well done. Re Einstein: whooosh. Nevermind.

                    • That's a logical fallacy so blatant, even AI identified it immediately.

                      Which is funny. LLMs are smarter than you, even without the gooey bits.

                      There's no whoosh with Einstein- you asserted that E=mc^2 was a novel thought. It was not. It was a straight derivation- one even an LLM could make under your own rules... which is kind of the point- your rules are inconsistent, essentially proving that you cannot possibly be fit to prove what goes on within an LLM, if we were to take Gödel's First Incomplete
                    • by Viol8 ( 599362 )

                      Using your logic no human has ever had an original thought; it's all based on what went before. In which case I'd love to know how quantum theory or relativity is derived from banging stones on a rock, because that's the logical progression you're implying. But don't worry about it, it's all a bit beyond you, that's obvious. Have a nice day, I'm done.

                    • Using your logic no human has ever had an original thought; it's all based on what went before.

                      All available evidence demonstrates clearly that the brain is a finite state machine- it's deterministic.
                      Why the fuck do you think progression is exponential?
                      Your postulate would have cave men deriving mass energy equivalence, lol.

                      In which case I'd love to know how quantum theory or relativity is derived from banging stones on a rock because that's the logical progression you're implying.

                      You're a moron.
                      You bang stones on a rock, and you notice that it sparks.
                      You notice that the spark can cause fire.
                      You notice that fire causes things to glow.
                      Your strawman just further outlines your stupidity.
                      You're like:
                      Cavemen bang rocks together.
                      ... ignore the proceeding

        • AlphaFold isn't really AI, even if Google says it is. It's protein folding.

          Rubbish. It is AI-directed protein folding, with demonstrably superior results to previous methods.

      • Partly right. The article does mention that they now use generative AI (aka LLMs) to search their databases for unexpected phenomena. A good use for it: it doesn't much matter if it hallucinates, provided the hallucinations are statistically infrequent. Unlike the situation where GPT hallucinates a legal argument...

        • by HiThere ( 15173 )

          LLMs may be generative AI, but generative AI isn't LLM. Generative AI is a much larger set of programs, which includes LLMs as a small subset.

    • Re:Conned (Score:5, Interesting)

      by Roger W Moore ( 538166 ) on Tuesday February 04, 2025 @02:54AM (#65140281) Journal

      This new Gen AI is not designed to do "measuring"

      We do not use LLMs which, as you point out, are utterly unsuited for quantitative measurements. Instead we use machine learning techniques such as graph neural networks to identify signal patterns in the dataset, using training based on either simulation or data, while rejecting backgrounds. ML techniques can be incredibly powerful in terms of signal-to-background rejection, but it's easy for things to go wrong if, for example, you accidentally include something in your simulation that is not in the data, since the algorithm is really good at playing "spot the difference" and has no clue whether that difference is due to something real and physical or is just an irrelevant artifact.

      An example of this I've seen is when the run numbers for signal and background simulation runs were different (they were based on the ID of the physics process being simulated) and the algorithm spotted it and suddenly became insanely good at separating signal from background!
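That failure mode is ordinary data leakage: any feature that differs between the signal and background samples for non-physics reasons will be exploited. A toy Python sketch of the effect (all names and numbers here are invented for illustration):

```python
import random

random.seed(7)

# Toy events: one genuinely discriminating variable ("mass"), plus a run
# number that, by accident, differs between the signal and background samples.
def make_events(n, label, run):
    mean = 125.0 if label else 110.0
    return [{"mass": random.gauss(mean, 15.0), "run": run, "y": label}
            for _ in range(n)]

events = make_events(1000, 1, run=4001) + make_events(1000, 0, run=4002)

def best_cut_accuracy(feature):
    """Best single-threshold cut on one feature; a stand-in for any learner."""
    best = 0.0
    for cut in sorted({e[feature] for e in events}):
        acc = sum((e[feature] > cut) == (e["y"] == 1)
                  for e in events) / len(events)
        best = max(best, acc, 1 - acc)
    return best

acc_mass = best_cut_accuracy("mass")
acc_run = best_cut_accuracy("run")
print(f"accuracy using mass:       {acc_mass:.3f}")   # honest physics
print(f"accuracy using run number: {acc_run:.3f}")    # pure leakage
```

The run-number "classifier" is perfect and perfectly useless: it has learned the bookkeeping of the samples, not the physics.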

      • We do not use LLMs which, as you point out, are utterly unsuited for quantitative measurements.

        That's an interesting question, actually. You could use GPT to search for new and better quantitative measurement models. I would go so far as to claim that this will soon be common, just as CERN currently uses it to search for anomalous results.

        • Using tools like perplexity.ai to find related research is normal already. Search (like scholar.google.com) is useful if you pretty much know what you want to find, but AI search is much better at filtering out things that use similar words but aren't actually relevant.
      • True, but neural nets and machine learning aren't new. We've had this stuff for quite some time. We were doing face recognition in the early 90s. What's different today is that AI is suddenly fashionable, and distinctions aren't being made between old technology being reused for new ideas (ML, protein folding) and new AI being used in ways it wasn't trained for (GPT), all lumped together under the same "Hey, we're an AI company!" marketing.

        For science though, I see a drawback in that neural networks, and the hig

        • For science though, I see a drawback in that neural networks, and the higher level models combining them, aren't precise.

          Actually that does not matter so long as we can measure the precision. Particle physics deals with relativistic quantum mechanical processes which are inherently random so, even at a fundamental level, we are dealing with statistical distributions. Layer on top of that detector response and even without ML we used to use frequentist and now Bayesian statistical analysis to calculate probabilites of the observed data being consistent with a given physics model. In fact the boosted decision tree ML method pr
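The statistical machinery mentioned above can be illustrated with the simplest case, a Poisson counting experiment: given an expected background, what is the probability of observing at least as many events as we did? A sketch (the numbers are chosen purely for illustration):

```python
import math

def poisson_pvalue(n_obs, b):
    """P(N >= n_obs) for N ~ Poisson(b): the chance that background alone
    fluctuates up to what we observed, or more."""
    return 1.0 - sum(math.exp(-b) * b ** k / math.factorial(k)
                     for k in range(n_obs))

b, n_obs = 100.0, 130          # expected background vs. observed count
p = poisson_pvalue(n_obs, b)
print(f"p-value for {n_obs} observed with {b} expected: {p:.5f}")
```

Real analyses layer detector response, systematics, and Bayesian or frequentist machinery on top, but the inherently statistical nature of the question is already visible here.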

    • Everybody at CERN today is vastly more intelligent than the happy chat bot who wants to be your friend. And yet they think this toy is going to figure things out? If the math is too hard then use Mathematica, don't rely upon an AI that doesn't know what true or false means.

      If it were generative "AI" being used to find new physics, I'd wholeheartedly agree here, but the use described in the article is to look at the events recorded by the collider's detectors, with 40 million collisions per second, the vast majority of which are completely uninteresting, and sort out the very small fraction of a percent that ARE interesting enough for a physicist to look at.

      This is the kind of pattern matching that software can be made good at. The old way to do it was to hire undergraduates ("scan

    • by habig ( 12787 )

      Particle Physics has been using machine learning since before most of the world had ever heard the term. And no, that doesn't mean LLMs, which you seem to be quoting. A decade ago now one of my experiments (the NOvA neutrino experiment) applied CVNs to help with pattern recognition on neutrino events, and our sensitivity improved on the same dataset by 17%. Because it turns out such things are really rather better at pattern recognition than the first-principles algorithms all our clever people had come

  • by Gravis Zero ( 934156 ) on Monday February 03, 2025 @11:22PM (#65140095)

    Advances made by AI like AlphaFold rely on two things: a lot of data and a clear goal.

    Now, stuff like the LHC generates a shitload of data in one go but for the AI, you'll need many instances, so that means repeating experiments possibly hundreds or thousands of times. If that's viable then it's OK.

    However, the most important restriction is you need to create a clear and definable goal for the AI. AI doesn't think, so if you don't know what your data means then it could be a problem that AI isn't suited for.

    Despite all this, this is one of the few fields where I think AI could be highly useful in making advances.

    • by ScienceBard ( 4995157 ) on Tuesday February 04, 2025 @12:56AM (#65140183)

      As someone who as a physics undergraduate worked on large collider experiments, I very much believe that this could be a place where AI could be incredibly useful. Even as an undergraduate I was put to work trying to sift through "rejected" data from particle accelerators. I wrote shit code, at the time learned the basics of Scientific Linux, and ultimately probably accomplished nothing. The amount of reject data is vast.

      With an AI that is basically a pattern matching on steroids I can believe science can be massively sped up. Very few undergrads know what they're looking at, even most grad students. Right now experienced researchers are pointing less senior people in the right direction and hoping someone comes up with a result in some number of years. Most graduate researchers, even at top schools, are learning their way with an advisor as a north star. AI can do what all those students are doing trivially, and in so doing massively accelerate science. It will also basically destroy the talent pipeline as conceived. I'm not sure if that's a good thing long term, but short term I do expect it to massively boost the rate of discovery.

        As someone who as a physics undergraduate worked on large collider experiments, I very much believe that this could be a place where AI could be incredibly useful. Even as an undergraduate I was put to work trying to sift through "rejected" data from particle accelerators. I wrote shit code, at the time learned the basics of Scientific Linux, and ultimately probably accomplished nothing. The amount of reject data is vast.

        With an AI that is basically a pattern matching on steroids I can believe science can be massively sped up. Very few undergrads know what they're looking at, even most grad students. Right now experienced researchers are pointing less senior people in the right direction and hoping someone comes up with a result in some number of years. Most graduate researchers, even at top schools, are learning their way with an advisor as a north star. AI can do what all those students are doing trivially, and in so doing massively accelerate science. It will also basically destroy the talent pipeline as conceived. I'm not sure if that's a good thing long term, but short term I do expect it to massively boost the rate of discovery.

        I think you are partially right. I feel like AI does a great job of identifying what the relationships are in the data, which will reduce the need for humans to filter through massive quantities of data for subtle relationships. I don't think the AI is so good at the why of the relationships, which will still require experts in the field to come up with the reasons for the relationships in the data.

        Now one thing I was thinking as I was writing this is that the talent pipeline you mention may be the people

    • There was a whole lot written about AlphaFold, but so far there have been no results to write home about.

      The "applications" section in Wikipedia is thinner than the resume of a master's candidate, and a master's candidate costs a lot less.

    • so that means repeating experiments possibly hundreds or thousands of times

      The "experiment" in the LHC is colliding protons and the LHC does that at a rate that's the best part of a billion times a second - it collides bunches of protons 40 million times a second and the luminosity is such that each bunch collision produces multiple proton-proton collisions. While the vast majority of these are strong interaction (QCD) events with little to no physics interest (unless you are a QCD person), there are still many millions of events with interesting physics in them.

      Typically the b

    • However, the most important restriction is you need to create a clear and definable goal for the AI.

      One of the problems that CERN has is that it generates HUGE amounts of data. Something like 99.9% of the data ATLAS generates is thrown away in the FPGAs.

      A few of these thrown away events are kept so that they can be reviewed to ensure that it really is uninteresting collisions.

      I can easily see how AI could be used to improve this process.
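Keeping a small unbiased sample of the rejected events is usually done with a prescale: pass 1 in N of them regardless of the trigger decision. A toy sketch of the bookkeeping (the rates and prescale factor are invented):

```python
import random

random.seed(3)
PRESCALE = 1000  # keep 1 of every N rejected events for monitoring

kept_interesting = kept_monitor = seen = 0
for _ in range(1_000_000):
    seen += 1
    interesting = random.random() < 0.001  # stand-in for the real trigger
    if interesting:
        kept_interesting += 1
    elif seen % PRESCALE == 0:
        kept_monitor += 1  # unbiased sample of what was thrown away

print(kept_interesting, "triggered events;",
      kept_monitor, "prescaled monitoring events")
```

The prescaled sample is what lets you check, after the fact, that the trigger is not quietly discarding something it shouldn't.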

    • by allo ( 1728082 )

      No need for a clear goal. Many AI methods have a large focus on pattern matching and clustering.

      Now let's say you take all these data points and just run an unsupervised clustering on them. Afterward a human picks out a few examples from each cluster and looks at what they have in common. Even which data ends up clustered together may be a scientific discovery (X correlates with Y), but one may also find interesting starting points for further research once one has brought a bit of order into the huge pile of data
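The unsupervised-clustering idea above can be shown end to end with plain k-means on unlabeled points; which points land together is itself the finding. A minimal, dependency-free sketch (the data is invented: a large bulk population plus a smaller, offset one):

```python
import math
import random

random.seed(5)

# Unlabeled "events": a big bulk population plus a smaller, offset one.
points = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(300)]
          + [(random.gauss(6, 1), random.gauss(6, 1)) for _ in range(60)])
random.shuffle(points)

def kmeans(pts, k=2, iters=20):
    centers = random.sample(pts, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pts:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[nearest].append(p)
        for j, g in enumerate(groups):
            if g:  # recompute each center as the mean of its members
                centers[j] = (sum(x for x, _ in g) / len(g),
                              sum(y for _, y in g) / len(g))
    return groups

groups = kmeans(points)
sizes = sorted(len(g) for g in groups)
print("cluster sizes:", sizes)  # the small cluster is the 'discovery'
```

No labels were used: the algorithm recovers the two populations on its own, and a human then asks what the members of the small cluster have in common.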

  • Even cavemen had the urge to bang rocks together.

    These physicists are no different, they just have bigger boom sticks.

    --
    You know, I just do whatever feels right to me! - Bruno Mars

    • One other difference is that before banging rocks together, we're usually expected to submit a formal paper discussing what we want to bang, how we want to bang it, what pieces we expect to see and why.

      Oh, and the stones are much, much smaller although in terms of mass equivalence we're slowly getting to the size of a pebble.

      • discussing what we want to bang, how we want to bang it, what pieces we expect to see and why.

        I can think of at least one other industry where that process is very common....

  • by mattr ( 78516 ) <.mattr. .at. .telebody.com.> on Tuesday February 04, 2025 @12:21AM (#65140139) Homepage Journal

    tl;dr: just from the summary this guy is spouting so many snake-oily things I don't trust him. We can produce dark matter; we can do 20 times better in 10 years so we advanced 20 years; if you have a big complex problem and throw a complex technique at it you're gonna win; etc. It sounds like little sound bites greatly distorting some kernel of truth that can't get past the oil he has to coat it with to get the people with the money to listen, unfortunately. I expect some kind of AI, or machine learning, already is helping a lot and likely will help a lot more. Maybe as transformative as he suggests, or at least reducing grunt work. But it comes out of the wash smelling like "we're also on the cutting edge so give us your money". It's weird.

  • ...we solved Fermi's Para ^~7 #& ` NO CARRIER

  • ...not the current crop of crap generators

  • Somebody needs to raise lots and lots of money for the next particle accelerator. I am as enthusiastic as the next guy when it comes to fundamental research; however, when they tell me that they need billions to see what is out there without any theoretical guidance, I cringe. And don't give me any crap about supersymmetry, which nature obviously chose not to use.
  • ...3...2...1 ... hey, is it still not there [youtube.com], yet?
