

CERN's Mark Thomson: AI To Revolutionize Fundamental Physics (theguardian.com)
An anonymous reader quotes a report from The Guardian: Advanced artificial intelligence is to revolutionize fundamental physics and could open a window on to the fate of the universe, according to Cern's next director general. Prof Mark Thomson, the British physicist who will assume leadership of Cern on 1 January 2026, says machine learning is paving the way for advances in particle physics that promise to be comparable to the AI-powered prediction of protein structures that earned Google DeepMind scientists a Nobel prize in October. At the Large Hadron Collider (LHC), he said, similar strategies are being used to detect incredibly rare events that hold the key to how particles came to acquire mass in the first moments after the big bang and whether our universe could be teetering on the brink of a catastrophic collapse.
"These are not incremental improvements," Thomson said. "These are very, very, very big improvements people are making by adopting really advanced techniques." "It's going to be quite transformative for our field," he added. "It's complex data, just like protein folding -- that's an incredibly complex problem -- so if you use an incredibly complex technique, like AI, you're going to win."
The intervention comes as Cern's council is making the case for the Future Circular Collider, which at 90km circumference would dwarf the LHC. Some are skeptical given the lack of blockbuster results at the LHC since the landmark discovery of the Higgs boson in 2012, and Germany has described the $17 billion proposal as unaffordable. But Thomson said AI has provided fresh impetus to the hunt for new physics at the subatomic scale -- and that major discoveries could occur after 2030, when a major upgrade will boost the LHC's beam intensity by a factor of ten. This will allow unprecedented observations of the Higgs boson, nicknamed the God particle, which grants mass to other particles and binds the universe together. Thomson is now confident that the LHC can measure Higgs boson self-coupling, a key factor in understanding how particles gained mass after the Big Bang and whether the Higgs field is in a stable state or could undergo a future transition. According to Thomson: "It's a very deep fundamental property of the universe, one we don't fully understand. If we saw the Higgs self-coupling being different from our current theory, that would be another massive, massive discovery. And you don't know until you've made the measurement."
The report also notes how AI is being used in "every aspect of the LHC operation." Dr Katharine Leney, who works on the LHC's Atlas experiment, said: "When the LHC is colliding protons, it's making around 40m collisions a second and we have to make a decision within a microsecond ... which events are something interesting that we want to keep and which to throw away. We're already now doing better with the data that we've collected than we thought we'd be able to do with 20 times more data ten years ago. So we've advanced by 20 years at least. A huge part of this has been down to AI."
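Dr Leney's description boils down to a cheap per-event keep/reject decision applied at an enormous rate. A minimal sketch of the idea (illustrative only; the real trigger runs in custom hardware and uses far richer event information, and the energies here are made-up arbitrary units):

```python
import random

def fast_trigger(event_energy, threshold=50.0):
    """Cheap per-event decision: keep only events above an energy threshold."""
    return event_energy > threshold

random.seed(0)
# toy event energies: mostly soft background, rare hard events (arbitrary units)
events = [random.expovariate(1 / 10.0) for _ in range(100_000)]
kept = [e for e in events if fast_trigger(e)]

print(f"kept {len(kept)} of {len(events)} events "
      f"({100.0 * len(kept) / len(events):.2f}%)")
```

The point of the sketch is the asymmetry: the decision per event must be trivially cheap, because almost everything is thrown away.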
Generative AI is also being used to look for and even produce dark matter via the LHC. "You can start to ask more complex, open-ended questions," said Thomson. "Rather than searching for a particular signature, you ask the question: 'Is there something unexpected in this data?'"
"These are not incremental improvements," Thomson said. "These are very, very, very big improvements people are making by adopting really advanced techniques." "It's going to be quite transformative for our field," he added. "It's complex data, just like protein folding -- that's an incredibly complex problem -- so if you use an incredibly complex technique, like AI, you're going to win."
The intervention comes as Cern's council is making the case for the Future Circular Collider, which at 90km circumference would dwarf the LHC. Some are skeptical given the lack of blockbuster results at the LHC since the landmark discovery of the Higgs boson in 2012 and Germany has described the $17 billion proposal as unaffordable. But Thomson said AI has provided fresh impetus to the hunt for new physics at the subatomic scale -- and that major discoveries could occur after 2030 when a major upgrade will boost the LHC's beam intensity by a factor of ten. This will allow unprecedented observations of the Higgs boson, nicknamed the God particle, that grants mass to other particles and binds the universe together. Thomson is now confident that the LHC can measure Higgs boson self-coupling, a key factor in understanding how particles gained mass after the Big Bang and whether the Higgs field is in a stable state or could undergo a future transition. According to Thomson: "It's a very deep fundamental property of the universe, one we don't fully understand. If we saw the Higgs self-coupling being different from our current theory, that that would be another massive, massive discovery. And you don't know until you've made the measurement."
The report also notes how AI is being used in "every aspect of the LHC operation." Dr Katharine Leney, who works on the LHC's Atlas experiment, said: "When the LHC is colliding protons, it's making around 40m collisions a second and we have to make a decision within a microsecond ... which events are something interesting that we want to keep and which to throw away. We're already now doing better with the data that we've collected than we thought we'd be able to do with 20 times more data ten years ago. So we've advanced by 20 years at least. A huge part of this has been down to AI."
Generative AI is also being used to look for and even produce dark matter via the LHC. "You can start to ask more complex, open-ended questions," said Thomson. "Rather than searching for a particular signature, you ask the question: 'Is there something unexpected in this data?'"
Re: (Score:3)
How do you know this?
Re: (Score:2)
I don't know what trump and musk will do, but spending billions on digging more tunnels [home.cern] does indeed seem a bit [science.org] excessive. [spiedigitallibrary.org]
Re: (Score:2)
Building a bigger collider is necessary in order to get to those new, unexplored energy levels where answers to some perplexing questions may be lurking. Same idea as getting into space, right? It's not something we need to do, it's what we want to do. Or climbing Mount Everest if you need something more down to earth.
Re: (Score:2)
I don't know what trump and musk will do, but
Since CERN is not American, I expect they won't do anything.
(CERN = "Conseil Européen pour la Recherche Nucléaire": in English, that would be European Center for Nuclear Research.)
What was that brain fart about? (Score:2)
If you have to feed the trolls' sock puppets, can't you at least make the vacuous Subjects less meaningless?
Re: (Score:1)
Re: (Score:3)
Trump and Musk would shut this down in a heartbeat.
Uh, you do know what CERN stands for, right? This isn't funded by the US or any of our business.
Re: (Score:2)
Funny, here's a press release [home.cern] from CERN from clear back in the '90s touting funding they are getting from the US ($531M for the LHC). That's just one of several projects that the US helped fund at CERN. Yeah, the US isn't a member of CERN but that doesn't stop CERN from asking them for money.
Re: (Score:2)
Don't take this too seriously, but...
Well, I also think it's a bad idea. That's because I think gravity is going to contaminate the results.
What I think they should do is build a long linear accelerator out in space. Possibly designed to be rotated so they can compare the results pointed away from (or towards) the sun against those at right angles to that. (But it might be cheaper to build two of them. I'm not sure if you could get away from having structural support or not.)
Conned (Score:2, Redundant)
Everybody at CERN today is vastly more intelligent than the happy chat bot who wants to be your friend. And yet they think this toy is going to figure things out? If the math is too hard then use Mathematica, don't rely upon an AI that doesn't know what true or false means.
This new Gen AI is not designed to do "measuring", much less measuring of Higgs boson self-coupling... "We asked it 5 times, and were given the answers of 1, -1, 41, Epstein's constant, and apparently 'up yours nerd.'"
Are you serious? (Score:3, Informative)
The AI he's talking about isn't LLMs. Take a look at AlphaFold [wikipedia.org] and then shut up because you're making your ignorance on this topic very clear to everyone.
Re: (Score:1)
AlphaFold isn't really AI, even if Google says it is. It's protein folding. This tech has been around a while. Yes, it's great it can search lots of permutations and has access to a huge database, but that's just scaling up algorithms we've already known about.
Re:Are you serious? (Score:5, Informative)
AlphaFold excels precisely because it isn't just scaling up algorithms we've already known about.
It is a neural network that was trained with the output from those algorithms.
Neural Networks are universal function approximators. This is also called Machine Learning, or Artificial Intelligence.
The function that is learned in the NN exceeds the performance of all known algorithms.
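The "universal function approximator" claim is easy to demo in miniature. A toy sketch (nothing like AlphaFold's architecture; all numbers here are arbitrary choices): one hidden layer of tanh units, trained by plain stochastic gradient descent to fit f(x) = x^2 on [-1, 1].

```python
import math, random

random.seed(1)
H = 8                                            # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    y = sum(w2[i] * h[i] for i in range(H)) + b2
    return y, h

xs = [i / 10.0 - 1.0 for i in range(21)]   # grid on [-1, 1]
ys = [x * x for x in xs]                   # target function f(x) = x^2

lr = 0.05
for _ in range(3000):                      # plain SGD on squared error
    for x, y in zip(xs, ys):
        pred, h = forward(x)
        err = pred - y
        for i in range(H):
            gh = err * w2[i] * (1.0 - h[i] ** 2)   # backprop through tanh
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * gh * x
            b1[i] -= lr * gh
        b2 -= lr * err

mse = sum((forward(x)[0] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"final MSE on the grid: {mse:.5f}")
```

The network "learns" a function it was never given a formula for, purely from examples — which is the whole trick, scaled up by many orders of magnitude in systems like AlphaFold.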
Re:Are you serious? (Score:4, Insightful)
By far the most successful and fundamental family of universal approximators are the Fourier bases. They have revolutionized Science for 200 years, and there are deep (not in the NN sense) reasons why they are so fundamental. In particular, they have direct connections with physical reality, unlike NNs.
NNs are nice for all sorts of reasons, but they also are very much ad-hoc constructions without strong connection (not in the NN sense) to the underlying problems. We do have a lot of off-the-shelf tools available for them, though, which is great, but the finer analysis of convergence flaws is mostly lacking so far. That means a lot of graduate students use them inappropriately, but then again, somebody's got to try it, or we'll never know. It also means that we don't yet know what a good set of abstractions and APIs for them is. Eventually, we'll just put them in common math libraries together with Chebyshev polynomials and whatnot.
The statement about exceeding all known algorithms is hyperbolic. The AlphaFold team succeeded by trying something new and having modern computer resources available which previous generations didn't. Well done regardless! It's best not to think of an NN model in the same way as a CS algorithm though; they really just do high-dimensional curve fitting.
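The Fourier-basis "curve fitting" the parent describes looks like this in miniature (a toy sketch, not a claim about either method's merits): project a square wave onto {1, cos kx, sin kx} by discrete inner products and watch the error fall as terms are added.

```python
import math

N = 512
xs = [2 * math.pi * i / N for i in range(N)]
square = [1.0 if x < math.pi else -1.0 for x in xs]   # square wave over one period

def fourier_fit(samples, n_terms):
    """Least-squares fit = projection onto {1, cos kx, sin kx} (discrete)."""
    a0 = sum(samples) / N
    coeffs = []
    for k in range(1, n_terms + 1):
        ak = 2.0 / N * sum(s * math.cos(k * x) for s, x in zip(samples, xs))
        bk = 2.0 / N * sum(s * math.sin(k * x) for s, x in zip(samples, xs))
        coeffs.append((ak, bk))
    def approx(x):
        return a0 + sum(a * math.cos(k * x) + b * math.sin(k * x)
                        for k, (a, b) in enumerate(coeffs, start=1))
    return approx

def mse(n_terms):
    g = fourier_fit(square, n_terms)
    return sum((g(x) - s) ** 2 for x, s in zip(xs, square)) / N

e1, e5, e25 = mse(1), mse(5), mse(25)
print(f"MSE with 1/5/25 terms: {e1:.3f} / {e5:.3f} / {e25:.3f}")
```

The appeal of the Fourier basis shows up here: the fitted parameters are interpretable harmonics, and adding terms monotonically shrinks the least-squares error.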
Re: (Score:1)
Re: (Score:1)
Which of the above facts I stated do you disagree with?
Feed-forward neural networks are n-dimensional approximators.
The growth in complexity between them when solving for any complicated function isn't even comparable- because with the Fourier basis, we don't have any computers the size of the fucking planet to approximate the folding of a protein.
Get the fuck out of here with your ignorant ass.
Re: (Score:2)
Re: (Score:2)
Pathetic.
Re: (Score:2)
Re: (Score:2)
The best you had was something about me being angry? If you indeed ever had a prof- go find him and tell him to fuck off for failing you so fucking badly.
Re: (Score:2)
You wear your ignorance on your sleeve. That much stands, indeed.
Re: (Score:2)
You can get rid of some limitations of Fourier transform by moving to Laplace transform.
You need a lot of neurons to approximate some more complicated functions. E.g. try to approximate the sine function over its full domain with a 3-layer neural network. As you run out of available neurons, you will very quickly see that the Fourier transform is better in this case. NNs are universal approximators but only over a limited subset of R^n.
The nice thing of Fourier transform is that the computed parameters have a very
Re: (Score:2)
You need a lot of neurons to approximate some more complicated functions.
Unfamiliar with Laplace transforms- but I'd imagine they suffer the same problem as a Fourier series, here, in that "a lot of" means exponential increase in terms needed to fit higher frequency waveforms, meaning there are a lot of waveforms where "a lot of" means something along the lines of "galaxy-spanning"
As you run out of available neurons, you will very quickly see that the Fourier transform is better in this case.
This is absolutely true.
NNs are universal approximators but only over a limited subset of R^n.
Eh? The NN is as wide as you want it to be. I suppose there are physical limits of R^n, sure.. But I'm not sure what your point is there?
While many approximators may converge on
Re: (Score:2)
We mod up the dude who literally made multiple factual errors, and some pretty laughable interpretive errors ("It's really just high-dimensional curve fitting!"- ya, that's called an algorithm, you fucking genius.) and we mod down the person who called them out on it.
Carry on, slashdot. Your group-think never gets old.
Re: (Score:2)
You appear to lack even a basic notion of neural networks, e.g., you completely left feed-forward/back out of your ignorant little screed.
Re: (Score:2)
It's still got no intuition or insight. It's just a fancy statistical engine that may churn out variations on a theme, but it's not going to come up with anything new.
Re: (Score:2)
Re: (Score:2)
Wow, thanks for your in-depth response there. It's not wrong, these programs are just statistical inference systems, nothing more. They can't create insight or originality that couldn't have been done by a GA or some other algorithm given enough time.
Re: (Score:2)
Especially when working with physical experimentation (which would include physics, or drug discovery) it's more like this: you have enough resources to do 100 trials (or 10, or 1000, whatever). That's it. So the better your heuristics to identify which things to try in the lab, the more scientific progress
Re: (Score:2)
Sure, but the OP was claiming they had some kind of magic insight that would go beyond what they had learnt. They don't.
Re: (Score:2)
I assert math. You are implying that your brain can't be modeled with math, if I'm reading you correctly, because it's uhhh, "just a fancy statistical engine".
Re: Are you serious? (Score:2)
The brain works beyond maths. A genius like you will have heard of Gödel's incompleteness theorem, go refresh yourself. A computer is 100% deterministic. It is not capable of original thought.
Re: (Score:2)
The brain works beyond maths.
Ding, ding, ding!
And there it is. The assertion magic.
You lose.
A genius like you will have heard of Gödel's incompleteness theorem
No, a genius like me understands Gödel's incompleteness theorems, and how they say literally nothing about your brain.
A computer is 100% deterministic.
So is your brain, asserts me, and you cannot prove otherwise.
It is not capable of original thought.
Any more than you are- I agree.
Nothing is original in a field of infinite noise.
You're in a bit over your head, here.
Re: (Score:2)
"And there it is. The assertion magic."
Nothing to do with magic, however mathematics is not a complete description of the universe and there's no reason to expect evolved systems to be bounded by it like a digital computer is.
Can't be arsed with the rest of your pig ignorant drivel.
Re: (Score:2)
Nothing to do with magic
Magic (n): the power of apparently influencing the course of events by using mysterious or supernatural forces. Your claim has everything to do with magic.
however mathematics is not a complete description of the universe
There is no evidence for this claim, and all of human history's evidence against it.
and there's no reason to expect evolved systems to be bounded by it like a digital computer is.
Except literally 100% of the assembled body of human science.
Can't be arsed with the rest of your pig ignorant drivel.
Insults from someone who thinks they're a wizard, lol.
Re: Are you serious? (Score:2)
I give up. Maths fully describes the universe does it? Aesop proved otherwise if Gödel is too complicated for you.
Re: (Score:2)
Aesop proved otherwise if Gödel is too complicated for you.
Gödel's Incompleteness Theorems do not prove the universe is not describable by math, lol, jfc.
All it proves is that a system, F, cannot be used to prove itself.
Gödelian arguments (applying that logic to the human mind to pretend it is something metaphysical) universally require an assertion that has never been (and Gödel would say could not be) proven: That human beings can prove anything.
Gödel's Incompleteness Theorem, restated, without the Gödelian argument's logical error:
Re: (Score:2)
"You're way, way, the fuck out of your league dude"
Oh mate, the irony :)
Humans invented maths, it's not a property of the universe. But go back to furiously googling Wikipedia again for another response, however I'm done here. Thanks for the amusement :)
Re: (Score:2)
Oh mate, the irony :)
There's no irony here- your inability to see how stupid you are is easily explained by Dunning-Kruger. To the contrary, it's completely expected.
But it's not surprising you'd find a way to misuse that word, too.
Humans invented maths
Maths is merely a language.
it's not a property of the universe.
When you can place 2 marbles in a bag and end up with 3, then you'll have a leg to stand on.
But go back to furiously googling Wikipedia again for another response
No need. Formal education would have put you in a place where you could effectively argue with me rather than ignorant appeals to authorities you have no actual knowledge of, lol.
however I'm done here.
So
Re: (Score:2)
Wow, thanks for your in depth response there.
Is there a point to one?
It's not wrong
Yes, it is.
these programs are just statistical inference systems, nothing more.
Correct- the problem is you don't really understand what that means.
They can't create insight or originality
Wrong.
that couldn't have been done by a GA or some other algorithm given enough time.
Let's reduce this:
They can't do anything that anything else can't do given enough time.
Your point?
Re: (Score:2)
Well if you believe they're magic feel free to go buy shares in the company. In the meantime feel free to explain how an ANN that's learnt all about Newtonian physics would come up with E=mc^2 or quantum theory. Take your time.
Re: (Score:2)
Well if you believe they're magic
No magic, whatsoever.
Pure math. [wikipedia.org]
In the meantime feel free to explain how an ANN that's learnt all about Newtonian physics would come up with E=mc^2 or quantum theory.
The same way a natural NN does.
By introducing noise into the inference, and with the ability to do chained reasoning.
Re: Are you serious? (Score:2)
We don't even know how natural NNs work yet. Hint - they don't use back propagation to train nor need to see a million images of a cat to recognise one. Also the brain uses more than just electrical signals to function. That aside, unless this ANN has the full gamut of human experience to extrapolate from, not just words, it's not going to come up with anything new that isn't just a variation on a theme.
But believe what you like fanboy.
Re: (Score:2)
We don't even know how natural NNs work yet.
Yes, we do.
Hint - they don't use back propagation to train
Completely irrelevant.
How the weights in the neural network are decided is not important to their function as a feed-forward network.
nor need to see a million images of a cat to recognise one.
Of course they didn't- they have a billion years of evolution setting up their base model, lol.
Also the brain uses more than just electrical signals to function.
It does, indeed. Maybe the chemicals have magical properties?
That aside, unless this ANN has the full gamut of human experience to extrapolate from, not just words, it's not going to come up with anything new that isn't just a variation on a theme.
Wrong.
Re: (Score:2)
"Yes, we do."
I suggest you go write your white paper and present it at the next brain research conference then.
"Completely irrelevant."
Not irrelevant at all. How an NN is trained makes a big difference to the output.
"Of course they didn't- they have a billion years of evolution setting up their base model, lol."
So you admit bio NNs are completely different in structure and function to ANNs. Glad we got there in the end. Btw - what "base model"? Do post a url explaining that. Take your time.
"It does, indeed.
Re: (Score:2)
I suggest you go write your white paper and present it at the next brain research conference then.
Why would I do that?
It's studied ad nauseam. The only people with questions are philosophers uncomfortable with the lack of evidence for a soul.
Not irrelevant at all. How an NN is trained makes a big difference to the output.
Of course it makes a difference to the output- it obviously does not make a difference to how it fundamentally functions.
Your assertion was that the ANN fundamentally cannot think, because it lacks magical gooey bits. Ergo, your assertion is irrelevant.
So you admit bio NNs are completely different in structure and function to ANNs. Glad we got there in the end. Btw - what "base model"? Do post a url explaining that. Take your time.
Many mathematically equivalent things are different in structure and function.
I'm wondering if you even completed
Re: Are you serious? (Score:2)
It's a general rule of thumb that the longer the reply the more drivel the poster is writing. Well done. Re Einstein - whooosh. Never mind.
Re: (Score:2)
Which is funny. LLMs are smarter than you, even without the gooey bits.
There's no whoosh with Einstein- you asserted that E=mc^2 was a novel thought. It was not. It was a straight derivation- one even an LLM could make under your own rules... which is kind of the point- your rules are inconsistent, essentially proving that you cannot possibly be fit to prove what goes on within an LLM, if we were to take Gödel's First Incomplete
Re: (Score:2)
Using your logic no human has ever had an original thought, it's all based on what went before. In which case I'd love to know how quantum theory or relativity is derived from banging stones on a rock, because that's the logical progression you're implying. But don't worry about it, it's all a bit beyond you, that's obvious. Have a nice day, I'm done.
Re: (Score:2)
Using your logic no human has ever had an original thought, it's all based on what went before.
All available evidence demonstrates clearly that the brain is a finite state machine- it's deterministic.
Why the fuck do you think progression is exponential?
Your postulate would have cave men deriving mass energy equivalence, lol.
In which case I'd love to know how quantum theory or relativity is derived from banging stones on a rock because that's the logical progression you're implying.
You're a moron.
... ignore the preceding
You bang stones on a rock, and you notice that it sparks.
You notice that the spark can cause fire.
You notice that fire causes things to glow.
Your strawman just further outlines your stupidity.
You're like:
Cavemen bang rocks together.
Re: (Score:2)
Educate yourself, dumbshit. [wikipedia.org]
Re: (Score:2)
AlphaFold isn't really AI, even if Google says it is. It's protein folding.
Rubbish. It is AI-directed protein folding, with demonstrably superior results to previous methods.
Re: (Score:2)
Partly right. The article does mention that they now use generative AI (aka LLM) to search their databases for unexpected phenomena. A good use for it, e.g., it doesn't much matter if it hallucinates, provided the hallucinations are statistically infrequent. Unlike the situation where GPT hallucinates a legal argument...
Re: (Score:2)
LLMs may be generative AI, but generative AI isn't LLM. Generative AI is a much larger set of programs, which includes LLMs as a small subset.
Re:Conned (Score:5, Interesting)
This new Gen AI is not designed to do "measuring"
We do not use LLMs which, as you point out, are utterly unsuited for quantitative measurements. Instead we use machine learning techniques such as graph neural networks to identify signal patterns in the dataset using training based on either simulation or data while rejecting backgrounds. ML techniques can be incredibly powerful in terms of signal to background rejection but it's easy for things to go wrong if you e.g. accidentally include something in your simulation that is not in the data since the algorithm is really good at playing "spot the difference" and has no clue whether that difference is due to something real and physical or is just an irrelevant artifact.
An example of this I've seen is when the run numbers for signal and background simulation runs were different (they were based on the ID of the physics process being simulated) and the algorithm spotted it and suddenly became insanely good at separating signal from background!
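That run-number anecdote is easy to reproduce in a toy setting (all numbers hypothetical): give a learner one genuinely overlapping "physics" feature plus a bookkeeping feature that differs between the signal and background samples, and even a single-cut classifier becomes "insanely good" for the wrong reason.

```python
import random

random.seed(42)

def make_event(is_signal):
    # physics-like feature: heavily overlapping distributions for the two classes
    x = random.gauss(0.2 if is_signal else 0.0, 1.0)
    # bookkeeping artifact: the simulated run ID differs by class
    run_id = 1000.0 if is_signal else 2000.0
    return (x, run_id), is_signal

data = [make_event(i % 2 == 0) for i in range(500)]

def best_stump(data, feature):
    """Best single-threshold cut on one feature, scored by accuracy."""
    vals = sorted({ev[feature] for ev, _ in data})
    best = 0.0
    for cut in vals:
        for sign in (1, -1):
            acc = sum((sign * (ev[feature] - cut) > 0) == lab
                      for ev, lab in data) / len(data)
            best = max(best, acc)
    return best

acc_physics = best_stump(data, 0)   # barely better than a coin flip
acc_with_id = best_stump(data, 1)   # "perfect", but entirely an artifact
print(f"physics feature alone: {acc_physics:.2f}")
print(f"run-ID artifact      : {acc_with_id:.2f}")
```

The classifier is doing exactly what it was asked: playing "spot the difference" with no notion of which differences are physical.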
Re: (Score:2)
We do not use LLMs which, as you point out, are utterly unsuited for quantitative measurements.
That's an interesting question, actually. You could use GPT to search for new and better quantitative measurement models. I would go so far as to claim that this will soon be common, just as CERN currently uses it to search for anomalous results.
Re: (Score:2)
Re: (Score:2)
True, but neural nets and machine learning aren't new. We've had this stuff for quite some time. We were doing face recognition in the early 90s. What's different today is that AI is suddenly fashionable, and distinctions aren't being made between old technology being reused for new ideas (ML, protein folding) and new AI being used in ways it wasn't trained on (GPT), lumped together under the same "Hey we're an AI company!" marketing.
For science though, I see a drawback in that neural networks, and the higher level models combining them, aren't precise.
Re: (Score:2)
For science though, I see a drawback in that neural networks, and the higher level models combining them, aren't precise.
Actually that does not matter so long as we can measure the precision. Particle physics deals with relativistic quantum mechanical processes which are inherently random so, even at a fundamental level, we are dealing with statistical distributions. Layer on top of that detector response and even without ML we used to use frequentist and now Bayesian statistical analysis to calculate probabilities of the observed data being consistent with a given physics model. In fact the boosted decision tree ML method pr
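The counting-statistics point can be sketched with a toy Poisson significance calculation (hypothetical numbers; real analyses layer detector response and systematic uncertainties on top of this):

```python
import math
from statistics import NormalDist

def poisson_tail(n_obs, b):
    """p-value: P(N >= n_obs) for N ~ Poisson(b), i.e. background-only hypothesis."""
    term = math.exp(-b)          # P(N = 0)
    cdf = term
    for k in range(1, n_obs):
        term *= b / k            # P(N = k) from P(N = k - 1)
        cdf += term
    return max(1.0 - cdf, 0.0)

# hypothetical counting experiment: expect 100 background events, observe 150
p = poisson_tail(150, 100.0)
z = NormalDist().inv_cdf(1.0 - p)    # one-sided Gaussian-equivalent sigmas
print(f"p-value {p:.2e}  ->  about {z:.1f} sigma")
```

Measuring "how improbable is this excess under background alone" is the kind of well-defined statistical question that survives even when the per-event classifier is an opaque ML model — so long as the model's performance is itself calibrated.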
Looking for a needle in a hundred haystacks (Score:2)
Everybody at CERN today is vastly more intelligent than the happy chat bot who wants to be your friend. And yet they think this toy is going to figure things out? If the math is too hard then use Mathematica, don't rely upon an AI that doesn't know what true or false means.
If it were generative "AI" being used to find new physics, I'd wholeheartedly agree here, but the use described in the article is to look at the events recorded by the collider's detectors, with 40 million collisions per second, the vast majority of which are completely uninteresting, and sort out the very small fraction of a percentage that ARE interesting enough for a physicist to look at.
This is the kind of pattern matching that software can be made good at. The old way to do it was to hire undergraduates ("scan
Re: (Score:2)
Particle Physics has been using machine learning since before most of the world had ever heard the term. And no, that doesn't mean LLMs, which you seem to be quoting. A decade ago now one of my experiments (the NOvA neutrino experiment) applied CVNs to help with pattern recognition on neutrino events, and our sensitivity improved on the same dataset by 17%. Because it turns out such things are really rather better at pattern recognition than the first-principles algorithms all our clever people had come
Only with a lot of data and a clear goal (Score:5, Interesting)
Advances made by AI like Alphafold have two things: a lot of data and a clear goal.
Now, stuff like the LHC generates a shitload of data in one go but for the AI, you'll need many instances, so that means repeating experiments possibly hundreds or thousands of times. If that's viable then it's OK.
However, the most important restriction is you need to create a clear and definable goal for the AI. AI doesn't think, so if you don't know what your data means then it could be a problem that AI isn't suited for.
Despite all this, this is one of the few fields where I think Ai could be highly useful in making advances.
Re:Only with a lot of data and a clear goal (Score:5, Interesting)
As someone who as a physics undergraduate worked on large collider experiments I very much believe that this could be a place where AI could be incredibly useful. Even as an undergraduate I was put to work trying to sift through "rejected" data from particle accelerators. I wrote shit code, at the time learned the basics of Scientific Linux, and ultimately probably accomplished nothing. The amount of reject data is vast.
With an AI that is basically a pattern matching on steroids I can believe science can be massively sped up. Very few undergrads know what they're looking at, even most grad students. Right now experienced researchers are pointing less senior people in the right direction and hoping someone comes up with a result in some number of years. Most graduate researchers, even at top schools, are learning their way with an advisor as a north star. AI can do what all those students are doing trivially, and in so doing massively accelerate science. It will also basically destroy the talent pipeline as conceived. I'm not sure if that's a good thing long term, but short term I do expect it to massively boost the rate of discovery.
Re: (Score:2)
As someone who as a physics undergraduate worked on large collider experiments I very much believe that this could be a place where AI could be incredibly useful. Even as an undergraduate I was put to work trying to sift through "rejected" data from particle accelerators. I wrote shit code, at the time learned the basics of Scientific Linux, and ultimately probably accomplished nothing. The amount of reject data is vast.
With an AI that is basically a pattern matching on steroids I can believe science can be massively sped up. Very few undergrads know what they're looking at, even most grad students. Right now experienced researchers are pointing less senior people in the right direction and hoping someone comes up with a result in some number of years. Most graduate researchers, even at top schools, are learning their way with an advisor as a north star. AI can do what all those students are doing trivially, and in so doing massively accelerate science. It will also basically destroy the talent pipeline as conceived. I'm not sure if that's a good thing long term, but short term I do expect it to massively boost the rate of discovery.
I think you are partially right. I feel like AI does a great job of identifying what the relationships are in the data, which will reduce the need for humans to filter through massive quantities of data for subtle relationships. I don't think the AI is so good at the why of the relationships, which will still require experts in the field to come up with the reasons for the relationships in the data.
Now one thing I was thinking as I was writing this is that the talent pipeline you mention may be the people
Re: (Score:3)
There was a whole lot written about AlphaFold, but so far there have been no results to write home about.
The "applications" section in Wikipedia is thinner than the resume of a master's candidate, and a master's candidate costs a lot less.
Re:Only with a lot of data and a clear goal (Score:4, Informative)
The results are that it predicts protein folding better than any other known system, by a large margin.
The database that has been produced by it is almost certainly in use in just about every biochemistry lab on the planet.
Re: (Score:3)
so that means repeating experiments possibly hundreds or thousands of times
The "experiment" in the LHC is colliding protons, and the LHC does that at a rate that's the best part of a billion times a second: it collides bunches of protons 40 million times a second, and the luminosity is such that each bunch crossing produces multiple proton-proton collisions. While the vast majority of these are strong-interaction (QCD) events with little to no physics interest (unless you are a QCD person), there are still many millions of events with interesting physics in them.
Typically the b
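The rate quoted above is easy to check as back-of-envelope arithmetic. The 40 MHz bunch-crossing rate comes from the comment; the pileup figure (~25 pp collisions per bunch crossing) is an assumed round number for illustration, not a quoted machine parameter:

```python
# Back-of-envelope LHC collision rate. The 40 MHz bunch-crossing rate is
# from the comment above; the pileup value is an assumed round number.
bunch_crossings_per_s = 40_000_000   # 40 MHz bunch-crossing rate
pileup = 25                          # assumed mean pp collisions per crossing
collisions_per_s = bunch_crossings_per_s * pileup
print(f"{collisions_per_s:.1e} pp collisions per second")  # 1.0e+09
```

Which is indeed "the best part of a billion" collisions every second.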
Re: (Score:3)
One of the problems that CERN has is that it generates HUGE amounts of data. Something like 99.9% of the data ATLAS generates is thrown away in the FPGAs.
A few of these thrown-away events are kept so that they can be reviewed, to ensure that they really are uninteresting collisions.
I can easily see how AI could be used to improve this process.
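A minimal sketch of that keep-a-sample-of-the-rejects idea, with an entirely hypothetical energy threshold and prescale (real trigger menus are far more involved and run in FPGAs and firmware, not Python):

```python
import random

def trigger(event_energy, threshold=100.0, reject_prescale=1000):
    """Toy trigger decision: keep events above an (illustrative) energy
    threshold; also keep roughly 1 in `reject_prescale` rejected events
    so the rejection criteria themselves can be audited later."""
    if event_energy >= threshold:
        return "accept"
    if random.randrange(reject_prescale) == 0:
        return "accept_monitoring"  # sampled reject, kept for review
    return "discard"
```

An AI-based version would replace the fixed threshold with a learned classifier, while keeping the same monitoring sample of rejects to verify that nothing interesting is being thrown away.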
Re: (Score:2)
No need for a clear goal. Many AI methods have a large focus on pattern matching and clustering.
Now let's say you take all these data points and just run unsupervised clustering on them. Afterward, a human picks out a few examples from each cluster and looks at what they have in common. What kind of data gets clustered together may already be a scientific discovery (X correlates with Y), but one may also find interesting starting points for further research once a bit of order has been brought to the huge pile of data.
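As a toy version of that workflow, here is a minimal k-means run on synthetic "events"; the data, the cluster count, and the bare-bones algorithm are all stand-ins for whatever unsupervised method one would actually use:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal k-means. Initial centers are spread evenly through the
    sample for determinism; real code would use k-means++ or a library."""
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)]
    for _ in range(iters):
        # assign each point to its nearest current center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels, centers
```

A human would then inspect a few points from each of the k clusters and ask what they have in common, exactly as described above.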
Innate Primal Urge (Score:2)
Even cavemen had the urge to bang rocks together.
These physicists are no different, they just have bigger boom sticks.
--
You know, I just do whatever feels right to me! - Bruno Mars
Re: (Score:2)
One other difference is that before banging rocks together, we're usually expected to submit a formal paper discussing what we want to bang, how we want to bang it, what pieces we expect to see and why.
Oh, and the stones are much, much smaller although in terms of mass equivalence we're slowly getting to the size of a pebble.
Re: (Score:2)
I can think of at least one other industry where that process is very common....
I expect AI to help but.. (Score:3, Interesting)
tl;dr: just from the summary, this guy is spouting so many snake-oily things that I don't trust him. "We can produce dark matter," "we can do 20 times better in 10 years, so we advanced 20 years," "if you have a big complex problem and throw a complex technique at it, you're going to win," etc. It sounds like little sound bites greatly distorting some kernel of truth that can't get past the oil he has to coat it with to get the people with the money to listen, unfortunately. I expect some kind of AI, or machine learning, already is helping a lot and likely will help a lot more. Maybe as transformative as he suggests, or at least by reducing grunt work. But it comes out of the wash smelling like "we're also on the cutting edge, so give us your money." It's weird.
Finally, (Score:1)
...we solved Fermi's Para ^~7 #& ` NO CARRIER
This is the proper use of AI (Score:2)
...not the current crop of crap generators
Re: (Score:1)
What do you mean, "nobody knows how it works?" Many people know how it works, that's why you have a device that is based on the principles of quantum mechanics at your fingertips that you used to type your drivel.
Don't project your ignorance on everyone else, please.
Re: (Score:2)
Sure, we can be technical and say, "we know precisely how it works: it calculates trillions of sigmoid functions over matrices that simulate trillions of things we call perceptrons, which are analogous to neurons."
But ultimately, the function encoded within those simulated neural networks is indeed, for all practical purposes, unknowable.
The extent to which we "understand what they do" is that they "are governed by the Universal Approximation Theo
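For what it's worth, the "individually simple operations" the parent describes fit in a few lines. This toy forward pass (shapes and values invented for illustration) is each layer as a matrix multiply, a bias add, and an elementwise sigmoid:

```python
import numpy as np

def sigmoid(x):
    # elementwise logistic function
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, layers):
    """Toy multilayer perceptron forward pass. Each step is trivially
    understood; the composed function over billions of weights is what
    resists interpretation."""
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x
```

Every line here is transparent; the debate is over whether knowing these steps counts as knowing what a trained network with billions of such weights computes.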
Re: (Score:2)
We know precisely how quantum mechanics works, because we have a formal description of its rules and we understand the logic behind them as a description of certain physical phenomena. Which is quite unlike the situation in your analogy.
Here's a simple lecture to help you understand the source of your confusion:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
We know precisely how quantum mechanics works, because we have a formal description of its rules and we understand the logic behind them as a description of certain physical phenomena.
Now that's a laughable claim. [wikipedia.org]
Which is quite unlike the situation in your analogy.
Who cares? It's irrelevant to my analogy.
Here's a simple lecture to help you understand the source of your confusion:
There's no confusion on my part; you're trying to do exactly what I said: claiming that because we understand the discrete operations, we understand the whole, which is fucking absurd.
Please, do solve the vacuum catastrophe for me- after all, we understand perfectly how quantum mechanics works.
Re: (Score:2)
We know precisely how quantum mechanics works
All of the leading lights of QM disagree with you. In fact, the inability to know precisely how it works and having to approach it statistically is one of the defining characteristics of QM.
How it works [Re:Sounds familiar] (Score:2)
What do you mean, "nobody knows how it works?" Many people know how it works
Saying one "doesn't know how that works" isn't ignorance.
We know precisely how quantum mechanics works, because we have a formal description of its rules and we understand the logic behind them as a description of certain physical phenomena.
If the sentence had been phrased "nobody understands quantum mechanics", it would have been reasonably argued.
In fact, yes, we know how quantum mechanics works. It is a set of equations and computational tools that, if used according to the rules, will yield answers to well-phrased experimental questions in the form of probabilities, and these computational tools have been exhaustively verified by experiment.
The question of whether people understand quantum mechanics is a bit harder to answer, since it dep
Re: (Score:2)
What this has to do with the post I'm replying to remains a mystery.
Alternatively... (Score:2)
Sabine Hossenfelder video on this coming in... (Score:2)