AI Technology

Researchers Say AI Tool Used in Hospitals Invents Things No One Ever Said 103

AmiMoJo shares a report: Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near "human level robustness and accuracy." But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers.

Those experts said some of the invented text -- known in the industry as hallucinations -- can include racial commentary, violent rhetoric and even imagined medical treatments. Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

[...] It's impossible to compare Nabla's AI-generated transcript to the original recording because Nabla's tool erases the original audio for "data safety reasons," Nabla's chief technology officer Martin Raison said.


  • But seriously, transcription is probably relatively safe. Like summarization, transcription doesn't rely on the LLM to "know" anything except language structure, which is what it's good at.
    • by fuzzyfuzzyfungus ( 1223518 ) on Monday October 28, 2024 @11:51AM (#64899947) Journal
      The problem is that 'knowing' language structure and more or less nothing else is the perfect recipe for apparently plausible, syntactically appropriate nonsense slipping in.

      Traditional speech-to-text is often a bit on the rough side; but it has the 'virtue' (in a sense) of breaking in stupidly visible ways if it chokes on a bit of input. You'll get a similar-sounding word that has no business being in that part of a sentence, or a sentence or two of total word salad if there's a burst of background noise or a mic-level issue or something. It's not a pretty-looking failure; but for the same reason it's not all that sneaky. Not as good as a system that gracefully admits that the input is unusable from timestamp A to timestamp B and tells you as much; but a fair way from the exceptionally smooth confabulation you get out of LLMs.
      • LLMs are meant to be creative. They simply output what is likely to come next based on the corpus. There is nothing nefarious going on here; it just makes generative content. If the prompt engineer builds better guardrails around the LLM, it will work significantly better for the use case they need. They might also add some checks and verification steps before returning results to the user. Search engines are not LLMs and LLMs are not search engines.
    • by gweihir ( 88907 )

      If MS Teams uses an LLM to generate its transcripts, then no. That is _not_ a safe application. But MS may cause numerous errors in the Teams transcripts using another substandard technology. Does anybody know?

      • But MS may cause numerous errors in the Teams transcripts using another substandard technology.

        Teams already produces transcripts with numerous errors using substandard technology. Unless you speak slowly and use small words, expect to read the transcript a few times to make sure you understand what was said.
      • by AvitarX ( 172628 )

        It definitely uses some type of language modeling.

        I've seen parts of a transcript where people were speaking Chinese, and its attempts to transcribe it as English (not translate, but interpret the sounds as English words) were significantly better at coherence than auto-predict when typing on a phone.
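
Pinning the language, instead of letting the model guess, damps exactly this cross-language failure mode. A minimal sketch using the open-source openai-whisper package; the file name is hypothetical:

```python
# Minimal sketch with the open-source openai-whisper package
# (pip install openai-whisper). The file name is hypothetical.
import whisper

model = whisper.load_model("base")

# First, ask the model what language it thinks it is hearing,
# so a surprising guess can be flagged before transcription.
audio = whisper.load_audio("meeting.wav")
audio = whisper.pad_or_trim(audio)
mel = whisper.log_mel_spectrogram(audio).to(model.device)
_, probs = model.detect_language(mel)
print(f"detected language: {max(probs, key=probs.get)}")

# Then transcribe with the language pinned explicitly, so Mandarin
# speech is not force-fit into English words.
result = model.transcribe("meeting.wav", language="zh")
print(result["text"])
```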

    • by XXongo ( 3986865 ) on Monday October 28, 2024 @12:07PM (#64900001) Homepage

      But seriously, transcription is probably relatively safe.

      Sounds plausible, but the actual article we're discussing says otherwise.

      • Re: (Score:2, Insightful)

        by HiThere ( 15173 )

        Transcription itself is safe. It's just that depending on it for anything significant isn't. (And note that the original data has been "erased" for "data safety reasons".) So if the transcript says to use one drug and the doctor says he ordered another, there's no way to check.

        This should all be obvious to those who read Slashdot, but members of the general public, or even medical professionals, might well not understand the problems.

        • by XXongo ( 3986865 )

          Transcription itself is safe.

          Apparently not.

          It's just that depending on it for anything significant isn't.

          So, it's safe, except when it's not.

    • Except didn't we just hear these people claim [slashdot.org] that bullsh***ing, I mean "hallucinations", was a solved problem?

    • by zekica ( 1953180 )
      They only need to add "do not hallucinate!" to the prompt.
    • by AmiMoJo ( 196126 )

      That seems to be the problem - it hears some speech and then hallucinates extra sentences that seem to fit the structure.

    • That AI will make a good Liberal politician or news reporter.
    • by dfghjk ( 711126 )

      LLMs don't "know" anything.

    • by jvkjvk ( 102057 )

      Transcriptions of medical interviews that include hallucinations are *not* "relatively safe" - they are inherently DANGEROUS.

      If you can't see that, I don't know how to help you.

    • But seriously, transcription is probably relatively safe. Like summarization, transcription doesn't rely on the LLM to "know" anything except language structure, which is what it's good at.

      This is absolutely not true - or at least you don't know what knowing "language structure" actually entails. Listening to audio and matching sounds to phonemes is not sufficient, or the transcription problem would have been solved without LLMs. Adding that "language structure" thing means matching phonemes to potential words and word sequences and using context to determine which choice is correct. And that means it has to have a representation of context -- possible meanings -- and that

      • by Holi ( 250190 )

        "Listening to audio and matching sounds to phonemes is not sufficient or the transcription problem would have been solved without LLMs. "

        It was solved, in 1997. And today error rates are lower than human transcribers'. No need for an LLM.

    • by Holi ( 250190 )

      Transcription is not summarization though; it is transcribing, word for word. There is no need for any "language structure". Apparently LLMs are not very good at that.

    • But seriously, transcription is probably relatively safe. Like summarization, transcription doesn't rely on the LLM to "know" anything except language structure, which is what it's good at.

      The trouble with transcription is it's insufficient.

      A lot of understanding language isn't just decoding the waveforms, it's matching the sounds to words based on context.

      It's like those videos floating around a few years ago with misheard lyrics for songs [youtube.com]. Once you set the expectation it really sounds like they're singing the other lyrics because the sounds aren't that different.

      That's why transcription needs LLMs to really work properly: it's not just what word it sounds like, it's what word makes sense in context.

    • by narcc ( 412956 )

      Like summarization, transcription doesn't rely on the LLM to "know" anything except language structure, which is what it's good at.

      LLMs don't "know" anything except "language structure". Summaries are just as likely to contain nonsense as any other LLM output.

      Transcription could be safe, but the approach Whisper uses clearly isn't. It doesn't work by identifying sequences of phonemes and converting those into text like other approaches; it works in a similar way to other encoder-decoder transformer models, with a few tweaks. You can even give it a prompt to guide the output. It is not surprising in the least that it "hallucinates
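
For reference, the prompt mentioned above is exposed as initial_prompt in the open-source openai-whisper package, and a couple of decoding settings bear directly on the failure mode in the article. A minimal sketch; the recording name is hypothetical, and none of these knobs eliminates fabrication:

```python
# Minimal sketch with the open-source openai-whisper package.
# The recording name is hypothetical.
import whisper

model = whisper.load_model("base")
result = model.transcribe(
    "visit.wav",
    initial_prompt="Clinical visit: medications, dosages, follow-up.",
    condition_on_previous_text=False,  # don't feed prior output back in,
                                       # so one bad segment can't seed the next
    temperature=0.0,                   # greedy decoding, no sampling
)
for seg in result["segments"]:
    # no_speech_prob is the model's own estimate that a segment is
    # silence or noise; high values mark prime hallucination candidates.
    flag = "  <-- review" if seg["no_speech_prob"] > 0.5 else ""
    print(f"[{seg['start']:7.2f}s] {seg['text']}{flag}")
```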

  • He who controls the past, controls the future. Or so the thinking goes among those who try.
    • Does it save a nickel in operating costs?

      • Save a nickel by randomly inserting racist tirades into unrelated content? No. That's clearly a quote-unquote human being imposing their agenda. But we've always known so-called AI was just going to be a megaphone for human narcissists.
  • by Viol8 ( 599362 ) on Monday October 28, 2024 @11:35AM (#64899891) Homepage

    ... no one really knows in detail how these things actually do what they do. They understand the high-level part of feeding in data and the guff about N-dimensional matrices of semantic relations, and they understand the low-level side of back-propagation setting neural weights, but there's that fuzzy part in the middle where no one can quite get their head around what's happening. Frankly, given that these models have ever-increasing billions of artificial neurons, I wonder if anyone really will.

  • by JustAnotherOldGuy ( 4145623 ) on Monday October 28, 2024 @11:39AM (#64899905) Journal

    Those experts said some of the invented text -- known in the industry as hallucinations -- can include racial commentary, violent rhetoric and even imagined medical treatments.

    Okay, that's a problem. A serious problem by any standard.

    Nabla's tool erases the original audio for "data safety reasons,"

    And that's a much, much bigger and more serious problem. Without the original, how would you even know if anything was changed, added, or removed? Obvious things, sure, but what if a dosage was altered, or the results of a biopsy (for example) were reported as "clean" when in fact they were not?

    • It almost sounds to me like the AI-generated text is a dubious legal dodge to avoid being responsible for HIPPA compliance.

      Which raises the question of whether they're turning around and selling the (dubiously accurate, hallucinated) medical conversations to advertising partners or something.

      • by gweihir ( 88907 )

        This clearly is to make litigation harder. Avoiding HIPPA compliance may also be a factor. The deletion is in any case clearly malicious.

        • Or maybe it's because patients haven't given and don't want to give consent to have their visit recorded in a permanent fashion. These systems are supposed to write visit summaries that doctors are typing by hand immediately after seeing the patient. They're not intended to provide a verbatim transcript nor be entered into a record before review by the doctor.
      • I doubt it. In my experience nefarious reasons are quite rare, despite what we see in the media. More likely it was some uneducated person deciding that the transcriptions take up less data storage, so deleting the recordings would save them some money, while thinking through the consequences of that decision very poorly.

        Start by assuming someone had lazy logic, which is the cause 9 times out of 10, before you jump to nefarious reasons.

        • That doesn't mesh with "data safety" as a stated rationale. I'm all for Hanlon's razor, but this theory doesn't quite match the data.

        • Artificial intelligence is not able to overcome natural stupidity. AI, in its current form, is not ready for prime time. But dumbass humans lookin' to save a few more pennies will happily let it play in prime time, while declaring great victory over some nefarious imagined foe.

          This has been the only real fear of AI I've had all along. Not that it's going to replace us well. But that it'll be used to replace us poorly. In critical roles. Like hospital administration. Oh well. Not like the uber-rich will use

        • When sending a recording to a third-party transcription service, it makes perfect sense for the service to delete the recording when they're done transcribing it. Why would a doctor ever send the one and only copy to the transcriptionist? I mean, back in the days of dictaphones and physical tapes, maybe. But not in the past 25 years.

          If the transcription is done in-house by a tool, it makes a lot less sense to auto-delete the original. Still, the solution is easy enough - don't give the tool the one and only copy.

      • ... HIPPA compliance...

        It's "HIPAA" - Health Insurance Portability and Accountability Act.

    • by wildstoo ( 835450 ) on Monday October 28, 2024 @12:03PM (#64899989)
      "Hey, as far as we know, Dr. Patsy really was recommending ethnic cleansing as an infection control method. Without the audio you'll just have to take our idiotic LLM's word for it."
    • Tee and save the audio stream yourself. It is a no-brainer on any half-decent computer system (see the sketch after the reply below).
      • That's fine if you're running the program on a regular computer, but what if it's on a special purpose appliance, that doesn't even have a CLI?
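
Where you do control the machine, the tee is only a few lines: archive a content-addressed copy of the audio before any transcriber sees it, so the transcript can always be checked against the source later. A minimal sketch; the archive directory and the transcribe() callable are hypothetical stand-ins:

```python
# Minimal sketch of the "tee" idea: keep a verifiable copy of the
# audio before transcription. Paths and callables are hypothetical.
import hashlib
import shutil
from pathlib import Path

ARCHIVE = Path("/var/audio-archive")

def archive_then_transcribe(audio_path: str, transcribe) -> str:
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    src = Path(audio_path)
    # Content-addressed name: the hash later proves the archived copy
    # is byte-identical to what the transcriber was given.
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    dest = ARCHIVE / f"{digest}{src.suffix}"
    if not dest.exists():
        shutil.copy2(src, dest)
    return transcribe(str(dest))  # run the tool on the archived copy
```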
    • by taustin ( 171655 )

      Nabla's tool erases the original audio for "data safety reasons,"

      And that's a much, much bigger and more serious problem. Without the original, how would you even know if anything was changed, added, or removed? Obvious things, sure, but what if a dosage was altered, or the results of a biopsy (for example) were reported as "clean" when in fact they were not?

      Any doctor that uses such a system should lose their medical license, and be criminally prosecuted for every bad thing that happens before they get caught.

  • How does it compare to humans as far as accuracy? Humans are definitely not infallible.

    • Re:Human Mistakes (Score:5, Informative)

      by i kan reed ( 749298 ) on Monday October 28, 2024 @12:00PM (#64899971) Homepage Journal

      Humans can be held accountable. If you had a medical scribe who wrote "Patient has testicular cancer requires immediate amputation" for no goddamn reason on a transcript, they'd be liable for medical malpractice.

      If a computer does it, "oops, bug, no one's fault really, but your balls are in this nice jar here"

      This is one of the Big Risks of the current crop of AI horseshit: that an (unjustifiable) decision can be made without anyone being "to blame"

      • Maybe we should make companies responsible for the software they put out.

      • Humans can be held accountable. If you had a medical scribe who wrote "Patient has testicular cancer requires immediate amputation" for no goddamn reason on a transcript, they'd be liable for medical malpractice.

        If a computer does it, "oops, bug, no one's fault really, but your balls are in this nice jar here"

        This is one of the Big Risks of the current crop of AI horseshit: that an (unjustifiable) decision can be made without anyone being "to blame"

        I expect, if the AI hype remains within the corporate circles, we're going to see a whole lot of, "Oops. The AI did it. Nothing to be done. Next," throughout the world. It's insane how much people have bought into the hype without one moment of critical thought or, you know, experimentation to see whether it's true or not. Full steam ahead and damn the torpedoes. We got pennies to save and people to get off the payroll!

      • by AvitarX ( 172628 )

        Don't doctors still need to own their notes?

        I know that if a lawyer is using AI, they're still responsible for the output if they sign it (and courts have not taken favorably to lawyers not reviewing the output when it's nonsense).

        I would think a doctor is similarly responsible for whatever is the final submission to a patient's file.

  • Are the robotic transcriptions marked as such when they are copied into your medical records?
    My doctors already enter stuff into the records that they never said. It appears that they paste canned paragraphs into "Notes" of visits.
    I've been listening very carefully and taking notes.
    (I accompany a disabled friend who uses a different health system and different doctors. Same thing.)

    "...the tool has been used to transcribe an estimated 7 million medical visits."

  • If only it would specify the *exact* hospitals that are using these tools, so everyone here could avoid those places. The article doesn't mention the names of these facilities, so it's not generally useful to readers trying to make an informed decision.
    • If only it would specify the *exact* hospitals that are using these tools, so everyone here could avoid those places. The article doesn't mention the names of these facilities, so it's not generally useful to readers trying to make an informed decision.

      No fear, citizen! Soon insurance will require AI transcripts from any hospital they provide payment to as a safety measure. You and your doctor need not fear. Neither of you will be in charge of the decisions.

  • by MpVpRb ( 1423381 ) on Monday October 28, 2024 @11:46AM (#64899939)

    ...picking up speed.
    LLMs exhibited unexpected emergent behavior. This got the train rolling.
    Investors hopped aboard, and the speed increased. Problem is, investors want profits NOW.
    Early adopters hopped aboard because they needed to convince their investors that they were using "the next big thing" and it allowed them to reduce costs.
    Problem is, AI is a research project that will take years to be really useful and today's offerings suck mightily for real work.
    Expect the crapfest to continue as the hype train keeps gaining speed.

    • by gweihir ( 88907 )

      Maybe we will get lucky and there will be a rather abrupt and terminal stop: An LLM may be involved in somebody getting killed by malpractice and the hospital responsible gets sued into the ground.

  • LLMs are good at identifying things (cancer, cars, etc.)

    They are not good at making complex decisions or programming. That needs to be something else on top of LLMs.

    • LLMs are good at identifying things (cancer, cars, etc.)

      They are not good at making complex decisions or programming. That needs to be something else on top of LLMs.

      And yet we are told by the AI bros that software engineers and programmers will be redundant inside of a few years because soon anybody can be a 'prompt engineer' and create sophisticated software easily using simple written instructions. Are you saying that these peerless geniuses are wrong?

    • No they are not.

      CNNs combined with other neural net forms are good at those tasks.

      LLMs are LANGUAGE models, and are related to structures that are good at transforming language: translation, summarization, etc.

      But media hype and LLM bros decided that any AI or algorithmic technique should be replaced by their bullshit generators.

      "I have a hammer to sell, so everything should be solved by hitting stuff."

  • by Chris Mattern ( 191822 ) on Monday October 28, 2024 @11:58AM (#64899961)

    "It's impossible to compare Nabla's AI-generated transcript to the original recording because Nabla's tool erases the original audio for "data safety reasons," Nabla's chief technology officer Martin Raison said."

    "That's safety for us, not for you."

  • by RobinH ( 124750 ) on Monday October 28, 2024 @11:59AM (#64899967) Homepage
    My wife knows a psychologist who's using an AI technology to summarize her case notes, which seems like a sensible thing to do until you ask some fundamental questions like, "how do you know if it's accurate?" and "where is the data being stored and processed?" She might be knowledgeable and careful about it, but you just know there are professionals out there who believe AI is "accurate enough" and won't bother to check the results. This is a big problem which is going to take years for professional organizations to even acknowledge and then there will be a big fight over regulating it.
  • Who has been living under a rock here?

  • For entertainment sure, for anything involving real life? WHY?
    • For entertainment sure, for anything involving real life? WHY?

      Because C-suites, which hospitals now answer to thanks to the profitization of the entire health system, are seeing the same dollar signs every other C-suite sees when the AI prophets start talking about all the savings to be had by cutting staff and replacing them with AI. Who cares if it's accurate? It might save money! And there is no greater moral imperative in our universe than saving money!

    • by pz ( 113803 )

      For entertainment sure, for anything involving real life? WHY?

      Because there are cases, well-documented cases, of trained ML systems that match or exceed human-level capacity at certain tasks, like reading radiograms to detect breast cancer. At this point in time, I'd rather trust an ML system than getting a radiogram read by someone who isn't in a major city. In not too long, we won't have humans reading radiograms at all, except in the most zebra of cases.

      But back to the subject at hand: transcription. Having done a similar task of recognizing templated signals in

  • by Some Guy ( 21271 ) on Monday October 28, 2024 @12:05PM (#64899997)

    ...computer software produces erroneous results.

    Let's stop anthropomorphizing these language models please.

    They don't think, they don't reason, they don't "make things up", and they don't hallucinate.

  • Aside from the very valid concerns about accuracy, what kind of idiot names a company NABLA?
  • That alone is a major lawsuit. And if the transcription is incorrect, and results in very bad followup - like amputation when it wasn't called for, or lack of treatment when it was, that's billions and billions.

  • We've had this for about 20 years already with copy machines. Instead of letting a copy degrade from something nearly unreadable to something even worse, they'll "improve" it. This whole LLM thing is different just in scale. Sure, one might say that's the same difference as between a squid and Einstein, but I don't see it yet.

  • I have not had it happen with the "advanced mode" version, but as recently as 2 weeks ago, the ChatGPT voice chat tool would sometimes interpret background noise as either a far-east Asian language or short phrases like "Thanks." I used the tool in my car, and there is a fair bit of road noise, so if I paused during a conversation, it would think I said "Thanks" and keep replying with things like "You are welcome, I am happy to help!" At least it assumed I said good things?

  • In some markets, notably Florida, a single company owns 70% of the healthcare market, meaning wherever you go, it's one company that owns basically everything.

    From a practical standpoint they're going to try to squeeze every penny they can out of the system and that's going to mean worse outcomes for you. And that's going to mean corner cutting like this whether it works or not.
  • Nabla's tool erases the original audio for "data safety reasons,"

    It's in the manual, indexed under CYA.

  • It's called hallucinations. Why are we still seeing this irrelevant AI "news", anyway? Same old silliness.
  • I have this conversation with my co-workers frequently. Generative AI solutions have no place in science settings. Science is not about making stuff up. We spend a lot of time carefully documenting empirical evidence and our forward decisions are made from the data, there is no room for fabrication of any kind.

  • If not, can they do multiple transcriptions of the same audio? The accurate ones will all be alike while the error ones will all be different from each other and the accurate one.
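
The re-run-and-compare idea sketches easily with the open-source openai-whisper package: with temperature above zero the decoder samples, so independent runs diverge exactly where the model is guessing, and agreement across runs becomes a crude confidence signal. The temperature, threshold, and file name below are assumptions:

```python
# Rough sketch of the re-run-and-compare idea with openai-whisper.
# The 0.7 temperature and ~0.9 threshold are arbitrary assumptions.
import difflib
import whisper

model = whisper.load_model("base")

def consensus_score(path: str, runs: int = 3) -> float:
    """Average pairwise similarity of independently sampled transcripts."""
    texts = [model.transcribe(path, temperature=0.7)["text"]
             for _ in range(runs)]
    ratios = [difflib.SequenceMatcher(None, a, b).ratio()
              for i, a in enumerate(texts)
              for b in texts[i + 1:]]
    return sum(ratios) / len(ratios)

# Scores well below ~0.9 flag audio that deserves a human listen.
print(consensus_score("visit.wav"))
```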

  • "Those experts said some of the invented text -- known in the industry as hallucinations -- can include racial commentary, violent rhetoric and even imagined medical treatments".

    Presumably in decreasing order of importance?

  • There are companies like Nexidia that have been providing good transcription tools for decades. Why in the world would one choose to use GENERATIVE AI for a task that can be consistently performed algorithmically? Especially for something like Medical Transcription where the impact of a hallucination could be someone's life?

    Because it operates so differently, Generative AI might make a good automated QA tool -- but not a primary transcriber.
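
That QA split could look something like the following: a conventional engine produces the transcript, and a generative model only flags suspect passages for human review, never rewriting them. A hedged sketch using the openai Python client; the model name and prompt wording are assumptions, not a tested configuration:

```python
# Hedged sketch: generative model as QA reviewer, never as transcriber.
# Model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_suspect_lines(transcript: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are reviewing a machine transcript of a "
                        "medical visit. List the line numbers of lines "
                        "that look out of place. Flag only; never "
                        "correct or rewrite the transcript."},
            {"role": "user", "content": transcript},
        ],
    )
    # The flags feed a human review queue; the transcript is untouched.
    return resp.choices[0].message.content
```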
  • I'm going to go out on a limb and predict that LLMs will soon:
    commit crimes
    need sleep
    probably watch porn

    Kurzweil said similar in his books... I paraphrase, but basically he said: we will use ourselves, our minds, our brains, as the blueprint for Artificial Intelligence becauuuuse it's the best and perhaps only model we know. So I'll just stand on the shoulders of giants and take an obvious leap: these models will exhibit all the foibles of humanity. We have "hallucinations" now... crimes, lies, abusive behaviour, self aggrandizement can't be far behind.
    • by dfghjk ( 711126 )

      "...So I'll just stand on the shoulders of giants ..."

      That is not what you're doing, you just can't tell the difference.

      " We have "hallucinations" now... crimes, lies, abusive behaviour, self aggrandizement can't be far behind."

      No. "Hallucinations" are precisely how neural networks work, it's not an accident, and it doesn't predict other undesirable behaviors.

      • yoo hoo.. LLMs aren't neural networks..
        also, love him or hate him, Kurzweil has cast a long shadow on modern AI, rather longer than yours... or mine

        Looking forward to the first documented AI crime. You heard it first here.

        Now explain to the class how "hallucinations are precisely how neural networks work".
        I'll make coffee. Also, I'll leave now; talk directly to the class.
  • Nabla's tool erases the original audio for "data safety reasons,"

    What possible "data safety reason" could there be in having the audio of a transcript around?

    This isn't like Biden's interview tape, where the umms, stutters, and pauses would all be endlessly analyzed for competitive reasons.

    No, the only "data safety reason" they have is that it's much safer for them not to have that data so that they can't be found *wrong*.

    There's no way to go back and re-analyze the audio, so they never have to admit to a mistake.

  • Whisper, developed by OpenAI, generally performs well at transcribing clear and straightforward speech, but like any AI transcription tool, it can sometimes “hallucinate,” or insert words or phrases that weren’t actually spoken. This typically happens in situations where:

    • Audio Quality: The audio is noisy or unclear, causing Whisper to make educated guesses that can lead to incorrect interpretations.
    • Accents and Dialects: Complex accents or less common dialects may increase the likelihood of hallucinations.
  • Would it be to hide the evidence before people realize your product is shit and doesn't faithfully transcribe anything? By the time they figure it out, management will be RICH and GONE

"I have just one word for you, my boy...plastics." - from "The Graduate"

Working...