Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds (www.cbc.ca) 30
A large Swedish study of 100,000 women found that using AI to assist radiologists reading mammograms reduced the rate of aggressive "interval" breast cancers by 12%. CBC News reports: For the study -- published in Thursday's issue of the medical journal The Lancet -- more than 100,000 women had mammography screenings. Half were supported by AI and the rest had their mammograms reviewed by two different radiologists, a standard practice in much of Europe known as double reading. It is not typically used in Canada, where usually one radiologist checks mammograms.
The study looked at the rates of interval cancer, the term doctors use for invasive tumors that appear between routine mammograms. They can be harder to detect and studies have shown that they are more likely to be aggressive with a poorer prognosis. The rate of interval cancers decreased by 12 percent in the groups where the AI screening was implemented, the study showed. [...] Throughout the two-year study, the mammograms that were supported by AI were triaged into two different groups. Those that were determined to be low risk needed only one radiologist to examine them, while those that were considered high risk required two. The researchers reported that numerically, the AI-supported screening resulted in 11 fewer interval cancers than standard screening (82 versus 93, or 12 per cent).
"This is really a way to improve an overall screening test," [said lead author, Dr. Kristina Lang]. She acknowledged that while the study found a decrease in interval cancer, longer-term studies are needed to find out how AI-supported screening might impact mortality rates. The screenings for the study all took place at one centre in Sweden, which the researchers acknowledged is a limitation. Another is that the race and ethnicity of the participants were not recorded. The next step, Lang said, will be for Swedish researchers to determine cost-effectiveness.
Excellent use of AI (Score:2)
Finally, a use for AI that actually improves people's lives. We need more of this and less slop and fewer deepfakes.
Re: (Score:2)
Absolutely. Pattern-matching AIs that can do this sort of thing are fantastic, and are far more useful and ethical than GPTs that generate probabilistic garbage based on theft of creatives' works.
Re: (Score:2)
I know the algorithms have gotten better, but years ago they had a supposedly high-accuracy AI that had actually been trained to look for a ruler, which happened to appear in most of the cancer-positive pictures in the data set.
I suspect like a lot of things it's going to become haves and have nots where if you still have money you can pay someone to check the AI and if you don't you just have
NNs are classifiers (Score:3)
Using them for linear separation of states in order to classify results is precisely how they should be used.
Re: (Score:2)
Classification is but one of the many things they excel at.
Glad this is progressing... (Score:2)
I remember one of the uses for image filters back in the 1990s when I was in college was to take X-rays and use different image filter types to help more easily find calcifications, tumors, and other "sus" items. I'm glad this is moving along, because this is where AI can be extremely helpful. Worst case, a biopsy happens and some benign lesions are removed.
Please call it Machine Learning (Score:4, Insightful)
This is not AI, nor is it an LLM. This is just a solid machine learning model put to a good use. Calling it AI gives credit to the LLM grifters and hype machine.
Re: Please call it Machine Learning (Score:3)
Re: (Score:2)
It's actually neither LLM nor machine learning.
An LLM is a subset of AI in general, that is language-focused and can interpret loose prompts from humans, and do something logical with those prompts.
Machine learning does not require AI at all, but may employ AI.
This technology is actually AI, but not LLM. While LLMs deal with patterns of tokens, this AI deals with patterns of pixels. It's been trained to spot cancer on images, and uses that training to help generate a diagnosis or at least a recommendation to a doctor to look more closely.
Re: (Score:3)
It's actually neither LLM nor machine learning.
Wrong. It is ML.
An LLM is a subset of AI in general, that is language-focused and can interpret loose prompts from humans, and do something logical with those prompts.
It's roughly (these days) a synonym for a GPT, merely because no other LM really exists anymore since the performance disparity is astronomical.
But correct, you could call it a subset.
Machine learning does not require AI at all, but may employ AI.
Machine learning is a subset of AI. Particularly these days, where if you're not using a large transformer network, you're simply making an inferior product.
This technology is actually AI, but not LLM. While LLMs deal with patterns of tokens, this AI deals with patterns of pixels. It's been trained to spot cancer on images, and uses that training to help generate a diagnosis or at least a recommendation to a doctor to look more closely. It is, in both the technical sense and in laymen's terms, true AI.
You may be surprised to learn that LLMs can be trained to read "patterns of pixels" as well.
That is not the relevant part.
Re: (Score:2)
Machine learning can be, and often is, done through algorithms that are not related to AI. For example, standard statistical models are a type of traditional machine learning. A SQL database is one such piece of software: it is constantly taking statistical samples to "learn" what kind of data is in its tables, to make queries more efficient. No one would say that SQL statistical modeling is "AI".
When LLMs read patterns of pixels, they are not really any longer acting as an LLM. Another example is that LLM
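To illustrate the point above about "standard statistical models" being traditional machine learning, here is a minimal sketch that involves no neural network at all: ordinary least-squares regression fit in closed form. The data values are made up purely for illustration.

```python
# Machine learning without a neural network: ordinary least-squares
# regression, fit in closed form from summary statistics.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # roughly y = 2x, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # slope close to 2, small intercept
```

The model "learns" its two parameters from data, which is the defining feature of machine learning, yet there is no network, no gradient descent, and nothing most people would call AI.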
Re: (Score:2)
Machine learning can be, and often is, done through algorithms that are not related to AI.
No.
No matter what form of universal approximator you're using, if you're training it mathematically off of data with a machine, it is machine learning, and machine learning is a type of artificial intelligence.
Further, that isn't really relevant, because all ML is done using transformers or variations of them, now. You'd be insane not to. They're infinitely more scalable.
A SQL database is one such piece of software: it is constantly taking statistical samples to "learn" what kind of data is in its tables, to make queries more efficient. No one would say that SQL statistical modeling is "AI".
That is not machine learning. That is heuristics.
If your SQL DB engine is in fact constructing a statistical model that it trains from
Re: (Score:2)
All I can say is, you're blowing smoke. I'd answer each of your points, but you're not listening, you're too full of your own supposed knowledge.
Re: (Score:2)
There's no smoke being blown. There are only facts, here.
When you feed an image into a multimodal LLM, and it describes what it sees in a high amount of detail, it's no longer acting like an LLM?
I'm sorry- you're being absurd. Since what happens, literally, is the image is fed into a projector model that converts it into embeddings, and those are- indeed- fed directly into the LLM- the same LLM that can also work with tokens that have been converted into embeddings. Y
Re: (Score:2)
OK, so I tried this experiment. I asked AI this simple question:
"What is the volume of a spherical tank 29.5 feet in diameter?"
I compared the results of Copilot, Gemini, and ChatGPT.
All three AIs successfully spit out formulas to perform the calculations, though Copilot added an extra, unneeded last step to multiply by pi after it already had the right answer.
Copilot and Gemini miscalculated 14.75^3, each producing slightly different answers around 3208. Only ChatGPT got it right, 3209.046875.
Copilot came u
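For reference, the arithmetic in the comment above is easy to check with a few lines of Python (this sketch is mine, not part of the experiment described):

```python
import math

# Volume of a sphere: V = (4/3) * pi * r^3
diameter_ft = 29.5
r = diameter_ft / 2          # 14.75 ft

r_cubed = r ** 3
volume = (4 / 3) * math.pi * r_cubed

print(r_cubed)   # 3209.046875 -- the cube ChatGPT got right
print(volume)    # roughly 13442 cubic feet
```

So 14.75^3 is exactly 3209.046875, matching ChatGPT's value, and the tank's volume comes out to about 13,442 cubic feet.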
Re: (Score:2)
This "estimating" or "computing" is not a function of the LLM itself, but an internal tool or API added by the developers of the larger product, that ChatGPT's LLM can leverage as part of its process.
Incorrect.
There's a lot here to unpack. Your test, I think was done in good faith, given some of the misconceptions you've displayed.
Estimation or computation are merely words it selected for whatever process happened within it that came up with the answer. You cannot infer from them whether or not it used a tool (that's the technical terminology for the LLM requesting an external function call).
Since you're using the ones you're using, you really have no way to know, unfortunately. But I, having a shit-ton of expe
Re: (Score:2)
ChatGPT does use Python to perform math computations, especially when instructed to do so or when it determines it needs to.
https://newmr.org/blog/python-... [newmr.org]
https://www.datastudios.org/po... [datastudios.org]
Re: (Score:2)
Re: (Score:2)
It's funny, really (Score:2)
The whole reason machine learning called itself machine learning was to distance itself from "AI" which was seen as a waste of money after the last AI winter.
Used in the US for a long time (Score:5, Informative)
This technology has been used in the US, even at very small hospitals, for years. Our local hospital uses iCAD [icadmed.com], and recently "upgraded" from the older on-site processing to the cloud-based version.
It places virtual markers (not actually on the images, but in a separate data file that mammogram viewing software understands) that indicate places of concern (areas of extra density and the like).
2008 (Score:1)
https://www.npr.org/2007/04/04... [npr.org]
Women have also been advised to not get mammograms before they're 30. A lot of women were screaming years ago when that recommendation came out, but the false positive risk was pretty as mammograms are
Another big issue is the biopsy. The more I read about these, the more absolutely fuck
Re: (Score:2)
Technology definitely didn't improve over the last 18 years.
There hasn't at all been a literal revolution in ML involving transformer networks, going from scientific AI applications that required the idle computing power of every computer in the world to barely solve basic protein-folding tasks, to networks finding more protein folds in a matter of months than had ever been found before.
Color me skeptical. I think you left some of your stupid showing, though.
Re: (Score:1)
Re: (Score:2)
That's like being skeptical that vaccines exist because you're familiar with bloodletting.
Radiologists won't like where this goes (Score:5, Insightful)
From Cory Doctorow's article on Reverse Centaurs:
But AI can’t do your job. It can help you do your job, but that doesn’t mean it’s going to save anyone money. Take radiology: there’s some evidence that AIs can sometimes identify solid-mass tumors that some radiologists miss, and look, I’ve got cancer. Thankfully, it’s very treatable, but I’ve got an interest in radiology being as reliable and accurate as possible.
If my Kaiser hospital bought some AI radiology tools and told its radiologists: “Hey folks, here’s the deal. Today, you’re processing about 100 x-rays per day. From now on, we’re going to get an instantaneous second opinion from the AI, and if the AI thinks you’ve missed a tumor, we want you to go back and have another look, even if that means you’re only processing 98 x-rays per day. That’s fine, we just care about finding all those tumors.”
If that’s what they said, I’d be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if that also makes radiology more accurate. The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: “Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”
This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.
Re: (Score:2)
Massive improvements in productivity are not well handled by capitalism, unless you define well to mean "enrichment of the owners of that productivity".
What?! (Score:1)
Who the hell wants to catch breast cancer?!
4 to 20g (Score:2)
Article summary is missing a u. At 4 to 20 g you'd be taking a leave of absence from the universe within 48 hours.
lazy bums (Score:2)
The average radiologist spends like 20 seconds per x-ray. That's all you need to know.