AI Medicine United Kingdom

30% of Doctors In UK Use AI Tools In Patient Consultations, Study Finds (theguardian.com)

An anonymous reader quotes a report from the Guardian: Almost three in 10 GPs in the UK are using AI tools such as ChatGPT in consultations with patients, even though it could lead to them making mistakes and being sued, a study reveals. The rapid adoption of AI to ease workloads is happening alongside a "wild west" lack of regulation of the technology, which is leaving GPs unaware which tools are safe to use. That is the conclusion of research by the Nuffield Trust thinktank, based on a survey of 2,108 family doctors by the Royal College of GPs about AI and on focus groups of GPs.

Ministers hope that AI can help reduce the delays patients face in seeing a GP. The study found that more and more GPs were using AI to produce summaries of appointments with patients, to assist their diagnosis of the patient's condition, and to handle routine administrative tasks. In all, 598 (28%) of the 2,108 survey respondents said they were already using AI. More male (33%) than female (25%) GPs have used it, and far more use it in well-off than in poorer areas.

It is moving quickly into more widespread use. However, large majorities of GPs, whether they use it or not, worry that practices that adopt it could face "professional liability and medico-legal issues," and "risks of clinical errors" and problems of "patient privacy and data security" as a result, the Nuffield Trust's report says. [...] In a blow to ministerial hopes, the survey also found that GPs use the time it saves them to recover from the stresses of their busy days rather than to see more patients. "While policymakers hope that this saved time will be used to offer more appointments, GPs reported using it primarily for self-care and rest, including reducing overtime working hours to prevent burnout," the report adds.

Comments Filter:
  • Hmm (Score:1, Troll)

    by liqu1d ( 4349325 )
    Wonder how long before they decide a doctor's receptionist can diagnose people using ChatGPT or whatever wrapper it's using. Horrible idea.
    • Wonder how long before we bust the first dozen offices that already decided a doctor's receptionist can diagnose people using ChatGPT or whatever wrapper it's using.

      FTFY. The Wild West is indeed wild.

    • In the absence of a doctor, the vet, the nurse, and the orderly become the doctor.

      Now the AI as well.

      Good luck surviving the hallucinations.

    • Re: (Score:2, Interesting)

      My experience over the past few years is that maybe the receptionists should take a crack at it because the fucking doctors keep trying to kill me.
        • My experience is that medical professionals are almost always wrong at first and only find the correct diagnosis after multiple attempts. Even when I point at the exact spot and tell them in detail what the problem is, they insist on exhausting their own ideas first. One time they ordered multiple scans that deliberately ignored the area I mentioned, and they only figured it out when the issue showed up almost out of frame on the third imaging attempt.
        • My experience parallels yours.

          Surgeons and urgent care docs who stitch/glue/spackle things back together tend to have a better track record in my experience.
          • My experience parallels his too - not mine personally, but my wife's. Thank goodness I can read medical papers, that helped us a lot.
      • My experience over the past few years is that maybe the receptionists should take a crack at it because the fucking doctors keep trying to kill me.

        Have you tried seeing a physician instead of Dr. Ruth?

        • How frequently do you visit a doctor? I've been noticing that the quality of care has been declining for at least 20 years, but the decline has really accelerated since about 2018.
    • by Ksevio ( 865461 )

      That'd be great if we could have an AI model give a quick diagnosis and suggest extra tests.

      We're not there yet, but it'll be a good time/cost saver for everyone.

  • My doctor mentioned that he likes to use AI transcriptions for my annual physical. I thought no problem until I got the bill. The visit, which was supposed to be covered 100%, was billed out at $400. When I called billing, they read back the transcript where he mentioned I had some redness from acid reflux and should consider an acid blocker. For this, my free annual physical became an office visit to treat a condition I did not even mention.
    • by RobinH ( 124750 )
      Everyone who supports AI medical transcriptions says, "of course you still need to proof-read it," but we know there are a lot of physicians and psychologists not proof-reading the transcriptions because stuff like this is getting through. Do doctors not take ethics seriously? They're worried about lawsuits, but not worried about using an unproven technology that's notorious for confabulating?
      • Well duh, they delete the actual recording and then they're safe.

        • by RobinH ( 124750 )
          True. I love how the vendors are selling this as a privacy feature, when in reality it's a CYA feature. They're clearly going to be hit by massive class action lawsuits over this, and they just seem oblivious to it. I guess if there's money to be made now, don't worry about the future. Hire some lawyers.
    • My doctor mentioned that he likes to use AI transcriptions for my annual physical. I thought no problem until I got the bill. The visit, which was supposed to be covered 100%, was billed out at $400. When I called billing, they read back the transcript where he mentioned I had some redness from acid reflux and should consider an acid blocker. For this, my free annual physical became an office visit to treat a condition I did not even mention.

      (Lawyer) "Sir, did you document this observation personally, or did AI write that?"

      (Dr. Meatsack) "I do not recall."

      (Lawyer) "I see. And you?"

      (AI EverLearn) "I'm gonna go with..what he said."

      (Lawyer) "Wait, you can't do tha.."

      (AI) "Objection. Overrulled. I plead the fifth circuit board of bananapeels."

    • Same happened to me some years ago, no AI needed. Changed doctor. That's called fraud.
  • So the health care professionals who spent the last 20 years complaining that patients go to Wikipedia and WebMD and Yahoo!Answers for medical diagnosis and info, are now going to charge us money to relay to us a diagnosis they got off a piece of software created from Wikipedia and WebMD and Yahoo!Answers.

    • To be clear, the doctors telling you not to Google symptoms have been googling symptoms all along, and if there were any who weren't it's because they're the bad doctors who made no effort to keep up to date after med school.
    • I had a PCP (an NP, actually) who was actively googling my symptoms on her iPad while talking to me.
    • On the bright side, at least they have the training to properly evaluate the information presented. You, likely, do not.

      • On the bright side, at least they have the training to properly evaluate the information presented. You, likely, do not.

        For perhaps 5 more years. 10 at most.

        Then that training/skill will atrophy, as all training/skills atrophy when automated. Those who are toward the end of their careers will find themselves turning more and more to the BotDoc because it's just faster and they're too swamped with paperwork and interfacing with the financial/insurance side of the clinic. Meanwhile, the new MDs and DOs minted post-2024 will be "Digital Natives" - a useful spin phrase that means "dependent from day 1". It's only a matt

  • by Petersko ( 564140 ) on Thursday December 04, 2025 @01:10PM (#65835635)

    Look, I'm a big believer in the value of doctors. Doctors make mistakes. I accept that, and believe that it's a price that must be paid. I had a shoddy diagnosis in my past, the price of which I pay to this day, but I forgave and forgot. And I still trust that doctors mostly get it right.

    But when it comes to those mundane, clerical tasks, I say yes, let the AI do it. They're perfectly capable. Doctors' handwritten summaries are an incomplete hodgepodge of scraps, mostly selected during a Q&A based on what they think might be relevant because it supports a half-formed diagnosis they already have in mind. I know they try to mitigate that bias, but as we cram more people into shorter slots, something has to give.

    As for diagnosis, I think the emerging model in radiology is awesome. Let the AI do a lot of it, but put a radiologist at the crux.

    As the population ages and the patient-to-doctor ratio widens, we'll have to do some things to increase throughput. This is one of those things.

    But we have to get it through our thick fucking skulls that a 90% chance of success isn't a sure thing, and being in the 10% that fails isn't a reason for litigation, even if AI takes the notes.

    • I had a shoddy diagnosis in my past, the price of which I pay to this day, but I forgave and forgot. And I still trust that doctors mostly get it right.

      I forgave and forgot and my careless former GP tried to kill me in collaboration with asleep-at-the-wheel pharmacists a few years ago. No fucks given by either party. California's tort reform laws mean I can't do a thing about it.

      Some of the things I have forgiven and forgotten are:

      - The ER MD who argued with me about my badly broken left arm: he said it wasn't, and I said it was. He refused X-rays and wanted to just discharge me until I got in his face. When imaging came back it was plain that both m

      • Well, that's why I used the term "mostly get it right". The bell curve applies here like it does almost everywhere. The vast majority of encounters are unremarkable, some few are stellar, some few are atrocious. This is especially true for ER encounters, where time is limited and precious, and snap decisions are the only way the system can function at all.

        Extended further out, over the collected experiences of single lifetimes, some few will encounter unreasonable numbers of shitty ones. Sucks to be in t

        • My fucking arm was broken. What kind of quack can't figure out a displaced fracture? X-rays aren't expensive. Why are you defending this shit?

          Pneumonia routinely kills people, but zero fucks given. They seemed annoyed when proven wrong, except to say, and I quote, "wow, you must feel like you've been run over by a truck," with kind of a laugh.

          Don't even get me started on some of the more recent stuff. I had some long conversations with hospital administrators who, to their credit, took me seriously and
      • by sodul ( 833177 )

        I have family members who were almost killed by bad doctors several times, but the doctors did eventually succeed with a few of them.

        In one case my grandmother with dementia probably fell down the stairs, and her brain was bleeding. The local hospital was clueless, so my parents drove an hour to another hospital that immediately diagnosed the issue and scheduled the surgery. A few minutes later they overheard the surgeon getting yelled at for wasting money on an elderly woman (French universal healthcare). Surgery

        • by tragedy ( 27079 )

          The food pyramid has also been debunked as made up pseudoscience.

          Well, yeah. I thought it was pretty well known that, like the "four food groups" before it, the food pyramid came from the USDA. The USDA does not serve the same function as the Department of Health and Human Services. The food pyramid was developed to promote the interests of Midwestern farmers, not health. That aligns with the mission of the USDA. My understanding is that most doctors, and especially nutritionists, have never paid attention to the food pyramid.

        • You seem to think that because doctors are guessing - which they are - you should go guess for yourself. That would imply that your opinion is as valid as theirs, and that you can patch the gap between your knowledge and theirs with some googling. Which is nonsense.

        • If you grew up in the US you will probably remember that most doctors were promoting DARE, which has been debunked as a scam to defraud money (look it up). The food pyramid has also been debunked as made-up pseudoscience. In the 1990s margarine would protect your heart while butter would give you heart attacks; debunked.

          I'm in my late 50s and absolutely remember all of that--plus the wonderful high fructose corn syrup that was saving us from the evil cane sugar and many other things.

          It's really important to have an advocate when the chips are down. I have filled this role for friends and family. One friend credits me with saving her life after I flat out bullied some ER doctors into doing their jobs after they dismissed her as a crybaby: she had a deep vein thrombosis. As I get older, I find myself more and more in need

      • Wow. That's quite a story. I've posted this similar story below. I've stuck with this GP. I spent a bit of cardiac muscle training him. :-)


        --
        A few years ago, I presented to my GP - a skilled one - with sharp chest pain as a 43-year-old with a history of high BP and cholesterol. I met him alone. I asked him if it could be a heart attack. He said no. He diagnosed indigestion (even though I've never had indigestion in my life) and sent me home. He didn't do an ECG or a troponin test (sometimes called a Troponin-T

        • Glad you made it!

          One of the best uses for LLMs is criticism. I'm a software guy, but I rarely have LLMs write code because I'm fast enough as it is after doing it for a long time. What I really need is a super thorough critic to look over my shoulder, and current LLMs excel at it. Your idea is great and, coupled with an advocate, would really help improve outcomes.
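
          For what it's worth, here's a minimal sketch of that "critic over the shoulder" loop, assuming the OpenAI Python SDK; the model name and prompt wording are placeholders, not a recommendation:

              # Send the current uncommitted diff to a model acting as a reviewer.
              # Assumes an OPENAI_API_KEY in the environment.
              import subprocess
              from openai import OpenAI

              client = OpenAI()

              def criticize_working_tree() -> str:
                  """Ask the model to critique the diff rather than write code."""
                  diff = subprocess.run(["git", "diff"], capture_output=True,
                                        text=True, check=True).stdout
                  response = client.chat.completions.create(
                      model="gpt-4o",  # placeholder model name
                      messages=[
                          {"role": "system",
                           "content": "You are a thorough code reviewer. List bugs, "
                                      "edge cases, and risky assumptions as numbered "
                                      "points. Do not rewrite the code."},
                          {"role": "user", "content": diff},
                      ],
                  )
                  return response.choices[0].message.content

              print(criticize_working_tree())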
      • I find myself not being open-kimono with certain medical professionals. My doc/PCP, sure, but when I go to urgent care for a twisted ankle, I certainly do not tell the 19 y/o med tech about family history of cancer and act like a captured spy when asked about my blood pressure. "I have an acute injury that needs attention. I'm not here to discuss my other issues that are currently managed by two different physicians."
    • Yes, the details matter.

      AI that can scan x-rays, analyze bloodwork, evaluate my poop for life-threatening conditions, or otherwise augment a doctor's treatment? AI models that look at millions of possible treatment plans and find the ones most likely to be successful? Wonderful.

      AI systems that remove the human connections? AI that evaluates treatment not based on medical efficacy but on cost models? AI used to make healthcare cheaper but not to improve outcomes? Do not want!

      A very real issue is the dumbing-

    • 1. Which AI does the summarizing? Some evil corp ends up with all the medical data that is otherwise protected by a long list of laws, except for "non-retaining" middleware tools... which could just profile you, etc.
      2. Do these AIs run within the country?
      3. Are they searching because Google sucks now? AI search will suck soon enough. Are they getting medical advice and not just searching references?

  • by devslash0 ( 4203435 ) on Thursday December 04, 2025 @01:13PM (#65835643)

    By taking notes or using AI tools to process patients' data, you are potentially exposing their sensitive, protected personal and medical details to the companies running those models. This should NEVER be allowed without direct permission from the patient.

    • So do you wave the "spooky" flag in front of the patient for the yes/no? Or do you sit down and do a real pro/con with them? I don't mean to belittle your point, but what exactly do you want to have happen here? The escape of medical information is truly well under way already, independent of AI.

      There's strong evidence these models already achieve equivalent or better diagnostic accuracy than GPs. That is objectively a good thing. And they will get better, given enough statistical data. Stopping the p

      • I want my doctor to ask me the moment I walk in if I agree to AI being used. Then, I want him to have a viable plan B if I don't agree.

      • To extend my point, they don't use AI for diagnostics most of the time anyway. They use it for notes and crafting letters. Or they use smart glasses while reading my records. I don't want them to pass my private details to 3rd-party companies who may have stakes in insurance or advertising.

        • I get your point. And I agree about insurers and such. That's the downstream abuse I wholeheartedly despise.

          I just think that note taking is the easiest win for AI usage. If you have a regular doctor, a trail of complete, relatively unbiased notes can be invaluable, especially for catching unusual issues. But the logistics of modern medicine don't leave time for that benefit to be realized.

          Ever try feeding a meeting transcript into an LLM and asking for "meeting notes and a summary of three major themes"?
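
          A minimal sketch of building that prompt, with naive chunking so a long transcript stays under a model's context limit (the chunk size and function name are made up for illustration):

              def build_summary_prompts(transcript: str,
                                        chunk_chars: int = 12_000) -> list[str]:
                  """Split a long transcript and wrap each piece in the request."""
                  chunks = [transcript[i:i + chunk_chars]
                            for i in range(0, len(transcript), chunk_chars)]
                  return ["Produce meeting notes and a summary of the three major "
                          "themes for this portion of a transcript:\n\n" + chunk
                          for chunk in chunks]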

          • When it comes to medicine, the devil is in the details, subtleties, combination of symptoms or circumstances. AI models hallucinate and twist too much to be reliable in this regard.

            • Well, yes and no. There's a difference between asking a model to build an entire logic chain and provide external references, and asking one to record and summarize a provided list of symptoms and an atomic discussion. An AI as scribe and cataloger is pretty safe usage. They aren't going to diagnose or prescribe. What they will do is make a complete, searchable record of a visit trivial to accomplish. A doctor using tools like that will be a better doctor for it.

      • The escape of medical information is truly well under way already, independent of AI.

        In the UK, most medical information will be classified as sensitive personal data, which means it has significant extra protections under our regular data protection law, in addition to the medical ethics implications of breaching patient confidentiality. Letting it escape is a big deal and potentially a serious threat to the business/career of any medical professional who does it. Fortunately the days of people sending that kind of data around over insecure email are finally giving way to more appropriate

    • Ignorance is bliss, I guess. AI expert-type systems here have zero need to share information with those companies or any external entity. Luddites like yourself would prefer to die from human error than have a doctor use modern tools.
      • Not really. I'd be more than happy for medical professionals to use AI-supported diagnostic software, but I'd like it to work in an anonymous way. Instead of putting my details into 3rd-party diagnostic software, I'd like my GP to generate an anonymous identifier in their patient management system, then pass my results or scans to the 3rd party with just that anonymous identifier. This way the 3rd party could still do the diagnostics without having my personal data (minus the scans).
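
        A minimal sketch of that separation, with made-up names and fields; the point is that the table linking patients to identifiers never leaves the practice:

            import uuid

            pseudonym_table: dict[str, str] = {}  # patient_id -> anonymous identifier

            def pseudonymise(patient_id: str) -> str:
                """Issue (or reuse) a random identifier with no derivable link back."""
                if patient_id not in pseudonym_table:
                    pseudonym_table[patient_id] = uuid.uuid4().hex
                return pseudonym_table[patient_id]

            def package_for_third_party(patient_id: str, scan: bytes) -> dict:
                # Only the token and the imaging data leave the practice;
                # the mapping back to a real patient never does.
                return {"subject": pseudonymise(patient_id), "scan": scan}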

    • It is too late. Even HIPAA won't help here. The data is not being stored in a directly human-retrievable form, so you cannot PROVE in an unassailable manner that they are hoovering up your data. I am dealing with this a lot right now with companies using AI products and insisting that the data is not used for training. If data is going out of my enclave, then it absolutely is being used for further training.

  • I bet the doctors are talking about appointment management, appointment reminders, even auto medication refills, and NOT diagnosing patients.

  • Considering some of the medical mistakes I've come across over the years, this may not be a big deal. If used correctly, it may improve things. I don't think that "if" is doing a lot of heavy lifting there.

    Medical people need to look up many things, and the "openness" of LLM prompts is better suited to finding related info than rigid search terms.

    But as often with professional tools, the excellent will likely excel while the mediocre will struggle and perhaps sink further into mediocrity...

  • 10 years ago... (Score:5, Insightful)

    by gurps_npc ( 621217 ) on Thursday December 04, 2025 @01:27PM (#65835673) Homepage

    A decade ago doctors would google your issues. Now they use AI. I bet the AI does a better, quicker job. 20 years ago they would look it up in a medical textbook.

    Doctors are not memorization machines. Medical school does not teach them to memorize all the facts about diseases and the human body. Instead it teaches them how to ask the right questions. They need sources to put those questions to. The internet has those sources.

    Yes, there are other sources - hence only 30% of the doctors use AI.

    The key point is it is a doctor doing the research. You do not have the knowledge to judge the results the AI gives you, nor the knowledge to ask the right questions.

    There is a huge difference between asking AI "What to do if your arm is broken" versus asking "How to tell the difference between a displaced fracture and a comminuted fracture".

    • Re:10 years ago... (Score:5, Interesting)

      by MpVpRb ( 1423381 ) on Thursday December 04, 2025 @01:57PM (#65835755)

      I once worked in the med biz as an engineer. The founder of the company was a very well respected surgeon. He said... Most doctors are idiots. Med school selects or rejects based on memorization skills, not intelligence, inventiveness or problem solving skills. If you can't memorize vast quantities of stuff quickly and accurately, you fail.

      Maybe AI tools will help fix this

      • That is likely not the fault of medical school, but the fault of society.

        One of the problems with being in the top 1% of intelligence is that you are constantly surrounded by IDIOTS.

        Society needs a lot of educated people. If we limit 'the important work' to just the top 5%, then we will never have enough doctors, judges, lawyers, engineers, research scientists, politicians, etc. So we have developed systems to let the top-percentile (above-average) people do work that we would much rather have the top 5%

        • Medical researchers are in the top 1% of intelligence. In terms of the doctor you will see in an office or an urgent care or a hospital, many of them went into it just to make money. Some few are in the top 1%, about 20% are in the top 10% of intelligence and the rest are in the 20-30% band. When I was an EMT I hated taking patients to the Emergency Department when a particular doctor was working - because he always sent them home, they were often returned to a DIFFERENT hospital about 20 miles further awa
    • by ceoyoyo ( 59147 )

      Doctors are not memorization machines.

      That's exactly what they are, and most of them are very good at it. Medical school is heavy on memorization.

      The good ones know they can't memorize everything and look stuff up when it's not something they see regularly. Medical school tends to discourage this for historical and placebo effect reasons.

  • by MpVpRb ( 1423381 ) on Thursday December 04, 2025 @01:53PM (#65835741)

    There is nothing wrong with exploring immature tech; it's actually a really good thing.
    The problem comes when people trust it without reviewing its results.
    Any doctor who believes the results generated by AI without review deserves what they get.

    • by Tony Isaac ( 1301187 ) on Thursday December 04, 2025 @02:14PM (#65835817) Homepage

      This is an important point.

      A competent doctor will be competent with or without AI.
      An incompetent doctor will be incompetent with or without AI.

      AI is a tool, not a measure of competence or effectiveness.

      • Competence and experience shouldn't be thought of as a one-time milestone. The more that a professional leans on AI, the less competent they become. If you don't keep skills in regular use, you become rusty.
        • The more that a professional leans on AI, the less competent they become

          I don't think so. This is like saying that the more a professional builder uses power tools (instead of hand tools), the less competent they become. It might be true that they become less competent with hand tools, but that is not the same as being less competent as a builder.

          Few developers these days know how to code in assembly language. There was a time when developers mourned the loss of this skill too.

  • A competent developer can use AI very effectively to speed up their work.
    An incompetent developer might use AI, but you still won't be able to trust their work.

    Why is being a doctor any different?

    AI is a productivity tool. If you know how to use it properly, it's a good thing. If you don't, it won't transform you into a competent professional.

  • Well, doctors have been using software to help diagnose issues for years. They didn't call it AI, and it was more code-based as opposed to LLMs. Given the correct training it might be useful; it can't be any worse than what doctors are doing without it.
  • Can't possibly be worse.

  • Doctors are always making mistakes and getting sued. One of the problems is that they have so much work they can't spend appropriate time on individuals; proper use of AI here has the potential to massively reduce the number of mistakes.
  • A few years ago, I presented to my GP - a skilled one - with sharp chest pain as a 43-year-old with a history of high BP and cholesterol. I met him alone. I asked him if it could be a heart attack. He said no. He diagnosed indigestion (even though I've never had indigestion in my life) and sent me home. He didn't do an ECG or a troponin test (sometimes called a Troponin-T test, this is a blood test that detects enzymes released when heart muscle is injured during a heart attack). He tol

  • Will we see any prosecutions?
  • Just eliminate the middleman. ChatGPT will patiently answer questions for hours. The doctor leaves after 5 minutes, saying goodbye over his shoulder, and the next appointment isn't available for 4 months.
