30% of Doctors In UK Use AI Tools In Patient Consultations, Study Finds (theguardian.com)
An anonymous reader quotes a report from the Guardian: Almost three in 10 GPs in the UK are using AI tools such as ChatGPT in consultations with patients, even though it could lead to them making mistakes and being sued, a study reveals. The rapid adoption of AI to ease workloads is happening alongside a "wild west" lack of regulation of the technology, which is leaving GPs unaware which tools are safe to use. That is the conclusion of research by the Nuffield Trust thinktank, based on a survey of 2,108 family doctors by the Royal College of GPs about AI and on focus groups of GPs.
Ministers hope that AI can help reduce the delays patients face in seeing a GP. The study found that more and more GPs were using AI to produce summaries of appointments with patients, assisting their diagnosis of the patient's condition and routine administrative tasks. In all, 598 (28%) of the 2,108 survey respondents said they were already using AI. More male (33%) than female (25%) GPs have used it and far more use it in well-off than in poorer areas.
It is moving quickly into more widespread use. However, large majorities of GPs, whether they use it or not, worry that practices that adopt it could face "professional liability and medico-legal issues," and "risks of clinical errors" and problems of "patient privacy and data security" as a result, the Nuffield Trust's report says. [...] In a blow to ministerial hopes, the survey also found that GPs use the time it saves them to recover from the stresses of their busy days rather than to see more patients. "While policymakers hope that this saved time will be used to offer more appointments, GPs reported using it primarily for self-care and rest, including reducing overtime working hours to prevent burnout," the report adds.
Hmm (Score:1, Troll)
Re: (Score:2)
Wonder how long before we bust the first dozen offices that already decided a doctor's receptionist can diagnose people using ChatGPT or whatever wrapper it's using.
FTFY. The Wild West is indeed wild.
Re: (Score:1)
What meds for the flu?
I mean, there is Tamiflu (sp?)...but that's really only effective if you catch it at the beginning.....but the best diagnosis is generally, treat the symptoms,
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
In the absence of a doctor, the vet, the nurse and the orderly become the doctor.
Now the AI as well.
Good luck surviving the hallucinations.
Re: (Score:2, Interesting)
Re: Hmm (Score:1)
Re: (Score:2)
Surgeons and urgent care docs who stitch/glue/spackle things back together tend to have a better track record in my experience.
Re: (Score:1)
Re: (Score:2)
My experience over the past few years is that maybe the receptionists should take a crack at it because the fucking doctors keep trying to kill me.
Have you tried seeing a physician instead of Dr. Ruth?
Re: (Score:1)
Re: (Score:2)
That'd be great if we could have an AI model give a quick diagnosis and suggest extra tests.
We're not there yet, but it'll be a good time/cost saver for everyone
Re: (Score:2)
Re: (Score:2)
Depends on the patient and doctor. Could remove unneeded tests too with a better model
Re: (Score:2)
Re: (Score:2)
If healthcare companies can point people to a cheaper AI option that's even close to a doctor's diagnosis (even if it ends in referring the patient to a doctor), then insurance would jump all over that.
Re: (Score:2)
Re: (Score:2)
They charge people the same amount; if they can pay an AI company $5 to diagnose an issue instead of paying a doctor $50, they're making $45 more
AI transcriptions cost me $$ (Score:1)
Re: (Score:2)
Re: AI transcriptions cost me $$ (Score:2)
Well duh, they delete the actual recording and then they're safe.
Re: (Score:2)
Re: (Score:2)
My doctor mentioned that he likes to use AI transcriptions for my annual physical. I thought no problem until I got the bill. The visit which was supposed to be covered 100% was billed out at $400. When I called billing, they read back the transcript where he mentioned I had some redness from acid reflux and should consider an acid blocker. For this, my free annual physical became an office visit to treat this condition I did not even mention.
(Lawyer) "Sir, did you document this observation personally, or did AI write that?"
(Dr. Meatsack) "I do not recall."
(Lawyer) "I see. And you?"
(AI EverLearn) "I'm gonna go with..what he said."
(Lawyer) "Wait, you can't do tha.."
(AI) "Objection. Overruled. I plead the fifth circuit board of bananapeels."
Re: (Score:2)
Now the orthotic is on the other metatarsal... (Score:3)
So the health care professionals who spent the last 20 years complaining that patients go to Wikipedia and WebMD and Yahoo!Answers for medical diagnosis and info, are now going to charge us money to relay to us a diagnosis they got off a piece of software created from Wikipedia and WebMD and Yahoo!Answers.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
On the bright side, at least they have the training to properly evaluate the information presented. You, likely, do not.
Re: (Score:2)
On the bright side, at least they have the training to properly evaluate the information presented. You, likely, do not.
For perhaps 5 more years. 10 at most.
Then that training/skill will atrophy, as all training/skills atrophy when automated. Those who are toward the end of their careers will find themselves turning more and more to the BotDoc because it's just faster and they're too swamped with paperwork and interfacing with the financial/insurance side of the clinic. Meanwhile, the new MDs and DOs minted post 2024 will be "Digital Natives" - which is a useful spin phrase which means "Dependent from day 1". It's only a matt
"Risks of clinical errors" (Score:5, Informative)
Look, I'm a big believer in the value of doctors. Doctors make mistakes. I accept that, and believe that it's a price that must be paid. I had a shoddy diagnosis in my past, the price of which I pay to this day, but I forgave and forgot. And I still trust that doctors mostly get it right.
But when it comes to those mundane, clerical tasks, I say yes, let the AI do it. They're perfectly capable. Doctor's handwritten summaries are an incomplete hodgepodge of scraps, mostly selected during a Q&A based on what they think might be relevant because it supports a half formed diagnosis they already have in mind. I know they try to mitigate that bias, but as we cram more people into shorter slots, something has to give.
As for diagnosis, I think the emerging model in radiology is awesome. Let the AI do a lot of it, but put a radiologist at the crux.
As the population ages, and the ratio of patients to doctors widens, we'll have to do some things to increase throughput. This is one of those things.
But we have to get it through our thick fucking skulls that a 90% chance of success isn't a sure thing, and being in the 10% that fails isn't a reason for litigation, even if AI takes the notes.
Re: (Score:1)
I had a shoddy diagnosis in my past, the price of which I pay to this day, but I forgave and forgot. And I still trust that doctors mostly get it right.
I forgave and forgot and my careless former GP tried to kill me in collaboration with asleep-at-the-wheel pharmacists a few years ago. No fucks given by either party. California's tort reform laws mean I can't do a thing about it.
Some of the things I have forgiven and forgotten are:
- The ER MD who argued with me about my badly broken left arm: he said it wasn't, and I said it was. He refused X-rays and wanted to just discharge me until I got in his face. When imaging came back it was plain that both m
Re: (Score:3)
Well, that's why I used the term "mostly get it right". The bell curve applies here like it does almost everywhere. That vast majority of encounters are unremarkable, some few are stellar, some few are atrocious. This goes especially true for ER encounters, where time is limited and precious, and snap decisions are the only way the system can function at all.
Extended further out, over the collected experiences of single lifetimes, some few will encounter unreasonable numbers of shitty ones. Sucks to be in t
Re: (Score:2)
Pneumonia routinely kills people, but zero fucks given. They seemed annoyed when proven wrong except to say, and I quote, "wow, you must feel like you've been run over by a truck." with kind of a laugh.
Don't even get me started on some of the more recent stuff. I had some long conversations with hospital administrators who, to their credit, took me seriously and
Re: (Score:1)
I have family members that got almost killed by bad doctors several times, but the doctors did eventually succeed with a few of them.
In one case my grandmother with dementia probably fell down the stairs, and her brain was bleeding. Local hospital was clueless, so my parents drove 1h to an other hospital that immediately diagnosed the issue and scheduled the surgery. A few minutes later they overheard the surgeon getting yelled at for wasting money on an elderly woman (French universal healthcare). Surgery
Re: (Score:2)
The food pyramid has also been debunked as made up pseudoscience.
Well, yeah. I thought it was pretty well known that, like the "four food groups" before it, the food pyramid came from the USDA. The USDA does not serve the same function as the Department of Health and Human Services. The food pyramid was developed to promote the interests of Midwestern farmers, not health. That aligns with the mission of the USDA. My understanding is that most doctors, and especially nutritionists, have never paid attention to the food pyramid.
Re: (Score:2)
You seem to think that doctors are guessing - which they are - which means you should go guess for yourself. Which would imply that your opinion is as valid as theirs, and that you can patch the gap between your knowledge and theirs with some googling. Which is nonsense.
Re: (Score:2)
GPT versus Resident Physicians — A Benchmark Based on Official Board Scores [nejm.org]
Re: (Score:2)
If you grew up in the US you will probably remember that most doctors were promoting DARE, which has been debunked as a scam to defraud money (look it up). The food pyramid has also been debunked as made up pseudoscience. In the 1990s margarine would protect your heart, while butter would give you heart attacks; debunked.
I'm in my late 50s and absolutely remember all of that--plus the wonderful high fructose corn syrup that was saving us from the evil cane sugar and many other things.
It's really important to have an advocate when the chips are down. I have filled this role for friends and family. One friend credits me with saving her life after I flat out bullied some ER doctors into doing their jobs after they dismissed her as a crybaby: she had a deep vein thrombosis. As I get older, I find myself more and more in need
Re: (Score:2)
Wow. That's quite a story. I've posted this similar story below. I've stuck with this GP. I spent a bit of cardiac muscle training him. :-)
--
A few years ago, I presented to my GP - a skilled one - with sharp chest pain as a 43 year old with a history of high BP and cholesterol. I met him alone. I asked him if it could be a heart attack. He said no. He diagnosed indigestion (even though I've never had indigestion in my life) and sent me home. He didn't do an ECG or a Troponin test (sometimes called a Troponin-T
Re: (Score:2)
One of the best uses for LLMs is criticism. I'm a software guy, but I rarely have LLMs write code because I'm fast enough as it is after doing it for a long time. What I really need is a super thorough critic to look over my shoulder, and current LLMs excel at it. Your idea is great and coupled with an advocate would really help improve outcomes.
Re: (Score:2)
Re: (Score:2)
Super funny, love it!
Re: (Score:2)
Yes, the details matter.
AI that can scan x-rays, analyze bloodwork, evaluate my poop for life-threatening conditions, or otherwise augment a doctor's treatment? AI models that look at millions of possible treatment plans and find the ones most likely to be successful? Wonderful.
AI systems that remove the human connections? AI that evaluates treatment not based on medical efficacy but on cost models? AI used to make healthcare cheaper but not better outcomes? Do not want!
A very real issue is the dumbing-
Re: (Score:2)
1. What AI does the summarizing? So some evil corp has all the medical data that is otherwise protected in a long list of laws except "non-retaining" middle-ware tools... who could just profile you etc.
2. Do these AI run within the country?
3. Are they searching since google sucks now? AI search will suck soon enough. Are they getting medical advice and not just searching references?
Major privacy concerns (Score:3)
By taking notes or using AI tools to process patients' data you are potentially exposing their sensitive, protected personal and medical details to the companies running those models. This should NEVER be allowed without direct permission given to you by the patient.
Re: (Score:2)
So do you wave the "spooky" flag in front of the patient for the yes/no? Or do you sit down and do a real pro/con with them? I don't mean to belittle your point, but what exactly do you want to have happen here? The escape of medical information is truly well under way already, independent of AI.
There's strong evidence these models already achieve equivalent or better diagnostic accuracy rates than GPs. That is objectively a good thing. And they will get better, given enough statistical data. Stopping the p
Re: Major privacy concerns (Score:2)
I want my doctor to ask me the moment I walk in if I agree to AI being used. Then, I want him to have a viable plan B if I don't agree.
Re: Major privacy concerns (Score:2)
To extend my point, they don't use AI for diagnostics most of the time anyway. They use it for notes and crafting letters. Or they use smart glasses while reading my records. I don't want them to pass on my private details to 3rd-party companies who may have stakes in insurance or advertising.
Re: (Score:2)
I get your point. And I agree about insurers and such. That's the downstream abuse I wholeheartedly despise.
I just think that note taking is the easiest win for AI usage. If you have a regular doctor, a trail of complete, relatively unbiased notes can be invaluable, especially for catching unusual issues. But the logistics of modern medicine don't leave time for that benefit to be realized.
Ever try feeding a meeting transcript into an LLM and asking for "meeting notes and a summary of three major themes"?
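For what it's worth, the "feed it a transcript" step is mostly just prompt shaping. A minimal sketch, assuming no particular LLM provider (the function and sample text here are illustrative, not any real product's API):

```python
# Sketch of shaping a visit/meeting transcript into a summarization prompt.
# The actual send step depends on whichever LLM service you use, so it is
# deliberately left out; make_summary_prompt() is a hypothetical helper.
def make_summary_prompt(transcript: str) -> str:
    return (
        "From the transcript below, produce meeting notes and a summary "
        "of the three major themes.\n\n"
        "Transcript:\n" + transcript
    )

prompt = make_summary_prompt("Patient reports sharp chest pain since Tuesday...")
```

The interesting part is what you put around the transcript, not the model call itself.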
Re: Major privacy concerns (Score:2)
When it comes to medicine, the devil is in the details, subtleties, combination of symptoms or circumstances. AI models hallucinate and twist too much to be reliable in this regard.
Re: (Score:2)
Well, yes and no. There's a difference between asking a model to build an entire logic chain and provide external references, and asking one to record and summarize a provided list of symptoms and an atomic discussion. An AI as scribe and cataloger is pretty safe usage. They aren't going to diagnose or prescribe. What they will do is make a complete, searchable record of a visit trivial to accomplish. A doctor using tools like that will be a better doctor for it.
Re: (Score:2)
The escape of medical information is truly well under way already, independent of AI.
In the UK, most medical information will be classified as sensitive personal data, which means it has significant extra protections under our regular data protection law, in addition to the medical ethics implications of breaching patient confidentiality. Letting it escape is a big deal and potentially a serious threat to the business/career of any medical professional who does it. Fortunately the days of people sending that kind of data around over insecure email are finally giving way to more appropriate
Re: (Score:1)
Re: (Score:2)
Not really. I'd be more than happy for medical professionals to use AI-supported diagnostic software but I'd like it to work in an anonymous way. Instead of putting my details into 3rd-party diagnostic software, I'd like my GP to generate an anonymous identifier in their patient management system, then pass my results or scans to the 3rd party with just that anonymous identifier. This way the 3rd party could still do the diagnostics without having my personal data (minus scans).
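That kind of pseudonymization is cheap to do. A minimal sketch, assuming the practice holds a secret key that never leaves its own system (all names and the sample patient ID here are made up for illustration):

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the practice derives a stable pseudonym per patient,
# so the 3rd-party diagnostic service never sees identifying details.
# In a real deployment this key would be stored persistently and securely,
# not regenerated per run.
PRACTICE_KEY = secrets.token_bytes(32)

def pseudonym(patient_id: str) -> str:
    # HMAC gives a stable, non-reversible identifier keyed to the practice;
    # without PRACTICE_KEY the 3rd party cannot link it back to the patient.
    return hmac.new(PRACTICE_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def build_referral(patient_id: str, scan_ref: str) -> dict:
    # Only the pseudonym and the clinical payload leave the practice.
    return {"subject": pseudonym(patient_id), "scan": scan_ref}

record = build_referral("NHS-943-476-5919", "chest-xray-2024-03-01.dcm")
```

The GP's own system keeps the mapping; the diagnostic service only ever sees the opaque identifier.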
Re: (Score:2)
It is too late. Even HIPAA won't help here. The data is not being stored in a directly human retrievable method, so you can not PROVE in an unassailable manner that they are hoovering your data. I am dealing with this a lot right now with companies using AI products and insisting that the data is not used for training. If data is going out of my enclave, then it absolutely is being used for further training.
Since AI doesn't exist... (Score:2)
I bet the doctors are talking about appointment management, appointment reminders, even auto medication refills, and NOT diagnosing patients.
any tool needs to be employed correctly (Score:2)
Medical people need to look up many things, and the "openness" of LLM prompts is better suited to finding related info than closed search terms.
But as often with professional tools, likely the excellent will excel and the mediocre will struggle and perhaps go down in mediocrity....
10 years ago... (Score:5, Insightful)
A decade ago Doctors would google your issues. Now they use AI. I bet the AI does a better, quicker job. 20 years ago they would look it up in a medical text book.
Doctors are not memorization machines. Medical school does not teach them to memorize all the facts about diseases and the human body. Instead it teaches them how to ask the right questions. They need sources to answer those questions. The internet has those sources.
Yes, there are other sources - hence only 30% of the doctors use AI.
The key point is it is a doctor doing the research. You do not have the knowledge to judge the results the AI gives you, nor the knowledge to ask the right questions.
There is a huge difference between asking AI "What to do if your arm is broken" versus asking "How to tell the difference between a displaced fracture and a comminuted fracture"
Re:10 years ago... (Score:5, Interesting)
I once worked in the med biz as an engineer. The founder of the company was a very well respected surgeon. He said... Most doctors are idiots. Med school selects or rejects based on memorization skills, not intelligence, inventiveness or problem solving skills. If you can't memorize vast quantities of stuff quickly and accurately, you fail.
Maybe AI tools will help fix this
Re: (Score:3)
That is likely not the fault of medical school, but the fault of society.
One of the problems with being in the top 1% intelligence is that you are constantly surrounded by IDIOTS.
Society needs a lot of educated people. If we limit 'the important work' to just the top 5%, then we will never have enough: Doctors, Judges, Lawyers, Engineers, Research Scientists, or Politicians. etc. So we have developed systems to let the top percentile (above average) people do work that we would much rather have the top 5%
Re: (Score:1)
Re: (Score:2)
That's exactly what they are, and most of them are very good at it. Medical school is heavy on memorization.
The good ones know they can't memorize everything and look stuff up when it's not something they see regularly. Medical school tends to discourage this for historical and placebo effect reasons.
The criticisms only apply to the lazy (Score:4, Insightful)
There is nothing wrong with exploring immature tech, it's actually a really good thing
The problem comes when people trust it without reviewing its results
Any doctor who believes the results generated by AI without review deserves what they get
Re:The criticisms only apply to the lazy (Score:5, Insightful)
This is an important point.
A competent doctor will be competent with or without AI.
An incompetent doctor will be incompetent with or without AI.
AI is a tool, not a measure of competence or effectiveness.
Re: (Score:1)
Re: (Score:2)
The more that a professional leans on AI, the less competent they become
I don't think so. This is like saying that the more a professional builder uses power tools (instead of hand tools), the less competent they become. It might be true that they become less competent with hand tools, but that is not the same as being less competent as a builder.
Few developers these days know how to code in assembly language. There was a time when developers mourned the loss of this skill too.
Experienced developers use AI for dev work (Score:2)
A competent developer can use AI very effectively to speed up their work.
An incompetent developer might use AI, but you still won't be able to trust their work.
Why is being a doctor any different?
AI is a productivity tool. If you know how to use it properly, it's a good thing. If you don't, it won't transform you into a competent professional.
AI doctors (Score:2)
They used Google before (Score:2)
Can't possibly be worse.
A good use case for AI. (Score:2)
Hmmm... good, but not enough. Use your own AI (Score:2)
A few years ago, I presented to my GP - a skilled one - with sharp chest pain as a 43 year old with a history of high BP and cholesterol. I met him alone. I asked him if it could be a heart attack. He said no. He diagnosed indigestion (even though I've never had indigestion in my life) and sent me home. He didn't do an ECG or a Troponin test (sometimes called a Troponin-T test, this is a blood test that detects enzymes released when heart muscle is injured during a heart attack). He tol
So much for data privacy (Score:1)
eliminate (Score:2)
Just eliminate the middleman. Chatgpt will patiently answer questions for hours. Doctor leaves after 5 minutes saying goodbye over his shoulder, next appt not available for 4 months.