Researchers Say AI Tool Used in Hospitals Invents Things No One Ever Said
AmiMoJo shares a report: Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near "human level robustness and accuracy." But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers.
Those experts said some of the invented text -- known in the industry as hallucinations -- can include racial commentary, violent rhetoric and even imagined medical treatments. Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.
[...] It's impossible to compare Nabla's AI-generated transcript to the original recording because Nabla's tool erases the original audio for "data safety reasons," Nabla's chief technology officer Martin Raison said.
LLMs are LLMs, news at 11 (Score:2, Insightful)
Re:LLMs are LLMs, news at 11 (Score:5, Insightful)
Traditional speech-to-text is often a bit on the rough side; but it has the 'virtue' (in a sense) of breaking in stupid, visible ways if it chokes on a bit of input. You'll get a similar-sounding word that has no business being in that part of a sentence, or a sentence-length or two of total word salad if there's a burst of background noise or a mic level issue or something. It's not a pretty-looking failure; but for the same reason it's not all that sneaky. Not as good as a system that gracefully admits that the input is unusable from timestamp A to timestamp B and tells you as much; but a fair way from the exceptionally smooth confabulation you get out of LLMs.
Re: (Score:3)
Why would you want "creativity" in a transcription?
For entertainment purposes. (Score:2)
Tonsils, testicles... Penicillin, penectomy... Tomato, potato...
Re: (Score:2)
If MS Teams uses an LLM to generate its transcripts, then no. That is _not_ a safe application. But MS may cause numerous errors in the Teams transcripts using another substandard technology. Does anybody know?
Re: (Score:2)
Teams already produces transcripts with numerous errors using substandard technology. Unless you speak slowly and use small words, expect to read the transcript a few times to make sure you understand what was said.
Re: (Score:2)
I usually just use it to check the microphone is working when I cannot easily listen in, e.g. because I am the one speaking. If it transcribes complete nonsense, the language setting is wrong or the audio is bad. If it mostly gets it right, the audio is good. But I just last week saw how bad some mistakes are even with good audio.
Re: (Score:2)
It definitely uses some type of language modeling.
I've seen parts of a transcript where people were speaking Chinese, and its attempts to transcribe it as English (not translate, but interpret the sounds as English words) were significantly more coherent than autopredict when typing on a phone.
Re: (Score:2)
Hahaha, yes. Or it is set to English and you are speaking German. Complete nonsense. At least as of some weeks ago it tells you that the language setting may be wrong.
Re:LLMs are LLMs, news at 11 (Score:5, Insightful)
But seriously, transcription is probably relatively safe.
Sounds plausible, but the actual article we're discussing says otherwise.
Re: (Score:2, Insightful)
Transcription itself is safe. It's just that depending on it for anything significant isn't. (And note that the original data has been erased for "data safety reasons".) So if the transcript says to use one drug, and the doctor says he ordered another, there's no way to check.
This should all be obvious to those who read Slashdot, but folks in the general public, or even medical professionals, might well not understand the problems.
Re: (Score:2)
Transcription itself is safe.
Apparently not.
It's just that depending on it for anything significant isn't.
So, it's safe, except when it's not.
Re: LLMs are LLMs, news at 11 (Score:2)
Re: (Score:2)
Except didn't we just hear these people claim [slashdot.org] that bullsh***ing I mean "hallucinations" was a solved problem?
Re:LLMs are LLMs, news at 11 (Score:5, Interesting)
Except didn't we just hear these people claim [slashdot.org] that bullsh***ing I mean "hallucinations" was a solved problem?
It won't ever be a solved problem of course. At some point, the "truth" will become AI referencing AI, so hallucinations will become whatever AI decides is truth.
Re: (Score:2)
The post you quoted definitely did NOT say that.
Also, literally everything an LLM produces is a "hallucination". Some just turn out to be pretty good.
Re: (Score:2)
Re: (Score:2)
That won't ever be a solved problem. People do it too. But people have a contextual map of the problem that lets them estimate the likelihood of misinterpretation, and this lets them eliminate lots of possible interpretations of the stuff they heard. (And even so, sometimes they eliminate the things actually heard.) This is one basis of humor.
Re: (Score:2)
Seems like LLM transcriptions should be able to put text in different colors to indicate different confidence levels, or the like... and that ought to be a priority customer need. But it's better marketing to pretend your product is always 100% able to do the job and 100% certain that it's right, and never draw attention to the possibility of mistakes. So I think part of the problem is humans here, not allowing the machine to say "I can't really make out what you're saying but here's a wild guess."
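A rough illustration of that idea, as a sketch only: the open-source whisper package already exposes per-segment confidence, so flagging shaky spans instead of hiding them is mostly a presentation choice. The file name and thresholds below are invented for illustration.

    import whisper

    model = whisper.load_model("base")
    result = model.transcribe("visit.wav")

    for seg in result["segments"]:
        # avg_logprob is the mean token log-probability for the segment;
        # no_speech_prob estimates how likely the segment is just noise/silence.
        shaky = seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.6
        marker = "??" if shaky else "  "
        print(f"{marker} [{seg['start']:7.2f}-{seg['end']:7.2f}]{seg['text']}")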
Meta Hallucinations (Score:2)
Except didn't we just hear these people claim [slashdot.org] that bullsh***ing I mean "hallucinations" was a solved problem?
Clearly that was a meta hallucination, the problem seems to be getting worse...
Re: (Score:2)
Re: (Score:2)
That seems to be the problem - it hears some speech and then hallucinates extra sentences that seem to fit the structure.
Re: (Score:2)
LLMs don't "know" anything.
Re: (Score:3)
Transcriptions of medical interviews that include hallucinations are *not* "relatively safe" - they are inherently DANGEROUS.
If you can't see that, I don't know how to help you.
Re: (Score:2)
If you can't see that, I don't know how to help you.
Just ask ChatGPT. I'm sure it has an answer.
Re: (Score:2)
But seriously, transcription is probably relatively safe. Like summarization, transcription doesn't rely on the LLM to "know" anything except language structure, which is what it's good at.
This is absolutely not true -- or at least you don't know what knowing "language structure" actually entails. Listening to audio and matching sounds to phonemes is not sufficient, or the transcription problem would have been solved without LLMs. When you add that "language structure" thing, that means matching phonemes to potential words and word sequences and using context to determine which choice is correct. And that means it has to have a representation of the context -- possible meanings -- and that
Re: (Score:2)
"Listening to audio and matching sounds to phonemes is not sufficient or the transcription problem would have been solved without LLMs. "
It was solved, in 1997. And today error rates are lower than those of human transcribers. No need for an LLM.
Re: (Score:2)
Transcription is not summarization though, it is transcribing, word for word. There is no need for any "language structure". Apparently LLMs are not very good at that.
Re: (Score:2)
But seriously, transcription is probably relatively safe. Like summarization, transcription doesn't rely on the LLM to "know" anything except language structure, which is what it's good at.
The trouble with transcription is it's insufficient.
A lot of understanding language isn't just decoding the waveforms, it's matching the sounds to words based on context.
It's like those videos floating around a few years ago with misheard lyrics for songs [youtube.com]. Once you set the expectation it really sounds like they're singing the other lyrics because the sounds aren't that different.
That's why transcription needs LLMs to really work properly, it's not just what word it sounds like, it's what word makes sense in
Re: (Score:2)
Like summarization, transcription doesn't rely on the LLM to "know" anything except language structure, which is what it's good at.
LLMs don't "know" anything except "language structure". Summaries are just as likely to contain nonsense as any other LLM output.
Transcription could be safe, but the approach Whisper uses clearly isn't. It doesn't work by identifying sequences of phonemes and converting those into text like other approaches; it works in a similar way to other encoder-decoder transformer models, with a few tweaks. You can even give it a prompt to guide the output. It is not surprising in the least that it "hallucinates
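For what it's worth, the prompting behaviour the parent mentions is easy to see with the open-source whisper package. A sketch only (the file name and prompt text are invented), showing that the decoder can be conditioned on prior text, which biases what it emits:

    import whisper

    model = whisper.load_model("small")

    # Same audio, once plain and once with the decoder conditioned on a prompt.
    plain = model.transcribe("clinic_visit.wav", temperature=0.0)
    primed = model.transcribe(
        "clinic_visit.wav",
        temperature=0.0,
        initial_prompt="Dictated clinical note. Medications discussed: amoxicillin 500 mg.",
    )

    print(plain["text"])
    print(primed["text"])  # wording can drift toward the prompt's vocabulary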
Gosh, I can't imagine who would want this. (Score:2)
Re: (Score:2)
Does it save a nickel in operating costs?
Re: (Score:2)
The problem is ... (Score:3)
... no one really knows in detail how these things actually do what they do. They understand the high-level feeding in of data and guff about N-dimensional matrices of semantic relations, they understand the low-level side of back propagation setting neural weights, but there's that fuzzy middle part where no one can quite get their head around what's happening. Frankly, given these models have ever-increasing billions of artificial neurons, I wonder if anyone really will.
Re:The problem is ... (Score:5, Informative)
Or to be more precise, while the actual mechanisms are somewhat understood, the training data is generally not understood at all.
Re: (Score:2)
That's important, alright, but you also don't know which part of the training data the AI was paying attention to. There have been examples where it was attending to a time of day code encoded into the photo, that people didn't even see.
Re: (Score:2)
It is both, actually. The base mechanisms are well understood, _but_ what actually happens when they are applied to large training data and the results from training on a large data-set is definitely not well understood. Your date-code example is a good one.
Re: (Score:2)
People know how they work, it is only the "billions" that makes them beyond simple prediction. Humans created them, they didn't accidentally spring into existence.
Re: (Score:2)
Google emergent properties.
Erase the original for "safety"? Are you insane? (Score:5, Insightful)
Those experts said some of the invented text -- known in the industry as hallucinations -- can include racial commentary, violent rhetoric and even imagined medical treatments.
Okay, that's a problem. A serious problem by any standard.
Nabla's tool erases the original audio for "data safety reasons,"
And that's a much, much bigger and more serious problem. Without the original how would you even know if anything was changed, added, or removed? Obvious things, sure, but what if a dosage was altered or the results of a biopsy (for example) were reported as "clean" when in fact it was not?
Re: (Score:2)
It almost sounds to me like the AI generated text is a dubious legal dodge to avoid being responsible for HIPPA compliance.
Which raises the question of whether they're turning around and selling the (dubiously accurate, hallucinated) medical conversations to advertising partners or something.
Re: (Score:3)
This clearly is to make litigation harder. Avoiding HIPPA compliance may also be a factor. The deletion is in any case clearly malicious.
Re: (Score:2)
If patients haven't given permission for their visit to be recorded, then why should their visit be allowed to be transcribed?
I don't see any common scenario where someone would want to keep something private but allow a less accurate version to be recorded.
Re: Erase the original for "safety"? Are you insa (Score:2)
My doctor asked for consent to record for transcription and I live in a single party audio recording consent state! Not that hard to do...
Re: (Score:2)
Hmm. I forgot this is probably a finding from the US. In the EU, consent _must_ be given before any recording or processing is allowed.
Re: (Score:3, Interesting)
Start by assuming someone used lazy logic, which is the cause 9 times out of 10, before you jump to nefarious reasons.
Re: (Score:2)
That doesn't mesh with "data safety" as a stated rationale. I'm all for Hanlon's razor, but this theory doesn't quite match the data.
Re: (Score:2)
Artificial intelligence is not able to overcome natural stupidity. AI, in its current form, is not ready for prime time. But dumbass humans lookin' to save a few more pennies will happily let it play in prime time, while declaring great victory over some nefarious imagined foe.
This has been the only real fear of AI I've had all along. Not that it's going to replace us well. But that it'll be used to replace us poorly. In critical roles. Like hospital administration. Oh well. Not like the uber-rich will use
Re: (Score:2)
When sending a recording to a third-party transcription service, it makes perfect sense for the service to delete the recording when they're done transcribing it. Why would a doctor ever send the one and only copy to the transcriptionist? I mean, back in the days of dictaphones and physical tapes, maybe. But not in the past 25 years.
If the transcription is done in-house by a tool it makes a lot less sense to auto-delete the original. Still, the solution is easy enough - don't give the tool the one and o
Re: (Score:2)
... HIPPA compliance...
It's "HIPAA" - Health Insurance Portability and Accountability Act.
Re:Erase the original for "safety"? Are you insane (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Nabla's tool erases the original audio for "data safety reasons,"
And that's a much, much bigger and more serious problem. Without the original how would you even know if anything was changed, added, or removed? Obvious things, sure, but what if a dosage was altered or the results of a biopsy (for example) were reported as "clean" when in fact it was not?
Any doctor that uses such a system should lose their medical license, and be criminally prosecuted for every bad thing that happens before they get caught.
This article would be more helpful (Score:2)
Re: (Score:2)
If it would specify the *exact* hospitals that are using these tools so everyone here can avoid those places. The article doesn't mention the names of these facilities so it's not generally useful to the readership that is trying to make an informed decision.
No fear, citizen! Soon insurance will require AI transcripts from any hospital they provide payment to as a safety measure. You and your doctor need not fear. Neither of you will be in charge of the decisions.
And the hype train rolls down the track... (Score:5, Insightful)
...picking up speed.
LLMs exhibited unexpected emergent behavior. This got the train rolling.
Investors hopped aboard, and the speed increased. Problem is, investors want profits NOW.
Early adopters hopped aboard because they needed to convince their investors that they were using "the next big thing" and it allowed them to reduce costs.
Problem is, AI is a research project that will take years to be really useful and today's offerings suck mightily for real work.
Expect the crapfest to continue as the hype train continues gaining speed
Re: (Score:2)
Maybe we will get lucky and there will be a rather abrupt and terminal stop: An LLM may be involved in somebody getting killed by malpractice and the hospital responsible gets sued into the ground.
LLMs are good for only some things (Score:2)
LLMs are good at identifying things (cancer, cars, etc.).
They are not good at making complex decisions or programming. That needs to be something else on top of LLMs.
Re: (Score:2)
LLMs are good at identifying things (cancer, cars, etc.).
They are not good at making complex decisions or programming. That needs to be something else on top of LLMs.
And yet we are told by the AI bros that software engineers and programmers will be redundant inside of a few years because soon anybody can be a 'prompt engineer' and create sophisticated software easily using simple written instructions. Are you saying that these peerless geniuses are wrong?
Re: LLMs are good for only some things (Score:3)
No they are not.
CNNs combined with other neural net forms are good at those tasks.
LLMs are LANGUAGE models, and are related to structures that are good at transforming language: translation, summarization, etc.
But media hype and LLM bros decided that any AI or algorithmic technique should be replaced by their bullshit generators.
"I have a hammer to sell, so everything should be solved by hitting stuff."
"Safety reasons", huh? (Score:5, Informative)
"It's impossible to compare Nabla's AI-generated transcript to the original recording because Nabla's tool erases the original audio for "data safety reasons," Nabla's chief technology officer Martin Raison said."
"That's safety for us, not for you."
Re: (Score:2)
"That's safety for us, not for you."
What if he actually said that, but the AI tool erased it?
Being used to summarize case notes (Score:5, Insightful)
Yes? LLMs hallucinate? That cannot be prevented? (Score:2)
Who has been living under a rock here?
Re: (Score:2)
It's not an LLM. It's a voice-to-text tool.
Re: (Score:2)
It is by OpenAI and "AI powered". Do you really think it is not an LLM that does the work?
Re: (Score:2)
Who has been living under a rock here?
Whisper is not an LLM.
For people who need to reliably use the big model: take advantage of Whisper's confidence scores, use a low temperature, and do a duplicate run of the same samples and compare the results for similarity.
"Researchers aren't certain why Whisper and similar tools hallucinate, but software developers said the fabrications tend to occur amid pauses, background sounds or music playing."
Or don't do any of that and be like these clowns.
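A minimal sketch of that advice (not anything Nabla does), assuming the open-source whisper package and an invented file name: keep the per-segment confidence, run the same clip twice at a small nonzero temperature so the passes can actually disagree, and flag anything where they don't line up.

    import difflib
    import whisper

    model = whisper.load_model("large")

    def one_pass(path):
        out = model.transcribe(path, temperature=0.2)
        worst = min(seg["avg_logprob"] for seg in out["segments"])
        return out["text"], worst

    text_a, conf_a = one_pass("consult.wav")
    text_b, conf_b = one_pass("consult.wav")

    # Thresholds are illustrative, not tuned.
    agreement = difflib.SequenceMatcher(None, text_a, text_b).ratio()
    if agreement < 0.98 or min(conf_a, conf_b) < -1.0:
        print("Low confidence or passes disagree: send to a human.")
    else:
        print(text_a)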
Then why are you using it? (Score:2)
Re: (Score:2)
For entertainment sure, for anything involving real life? WHY?
Because C-suites, which hospitals now answer to thanks to the profitization of the entire health system, are seeing the same dollar signs every other C-suite sees when the AI prophets start talking about all the savings to be had by cutting staff and replacing them with AI. Who cares if it's accurate? It might save money! And there is no greater moral imperative in our universe than saving money!
Re: (Score:2)
For entertainment sure, for anything involving real life? WHY?
Because there are cases, well-documented cases, of trained ML systems that match or exceed human-level capacity at certain tasks, like reading radiograms to detect breast cancer. At this point in time, I'd rather trust an ML system than have a radiogram read by someone who isn't in a major city. Before too long, we won't have humans reading radiograms at all, except in the most zebra of cases.
But back to the subject at hand: transcription. Having done a similar task of recognizing templated signals in
Humans hallucinate... (Score:3)
...computer software produces erroneous results.
Let's stop anthropomorphizing these language models please.
They don't think, they don't reason, they don't "make things up", and they don't hallucinate.
Re:Humans hallucinate... (Score:4, Funny)
Holup (Score:2)
Re: (Score:2)
You know what kind.
Erases the original recording? (Score:2)
That alone is a major lawsuit. And if the transcription is incorrect, and results in very bad followup - like amputation when it wasn't called for, or lack of treatment when it was, that's billions and billions.
Copy a bad 3 as a perfect 8 or vice versa (Score:2)
We've had this for about 20 years already with copy machines. Instead of letting a copy degrade from something nearly unreadable to something even worse, they'll make it better. This whole LLM thing is different just in scale. Sure, one might say that's the same difference as between a squid and Einstein, but I don't see it yet.
ChatGPT does this too (Score:2)
I have not had it happen with the "advanced mode" version, but as recently as 2 weeks ago, the ChatGPT voice chat tool would sometimes interpret background noise as either a far-east Asian language or short phrases like "Thanks." I used the tool in my car, and there is a fair bit of road noise, so if I paused during a conversation, it would think I said "Thanks" and keep replying with things like "You are welcome, I am happy to help!" At least it assumed I said good things?
Uh huh (Score:2)
Nabla's tool erases the original audio for "data safety reasons,"
It's in the manual, indexed under CYA.
W1A predicted this years ago (Score:2)
Generative solutions do not belong in science (Score:2)
I have this conversation with my co-workers frequently. Generative AI solutions have no place in science settings. Science is not about making stuff up. We spend a lot of time carefully documenting empirical evidence and our forward decisions are made from the data, there is no room for fabrication of any kind.
Will multiple tries generate the same errors? (Score:2)
If not, can they do multiple transcriptions of the same audio? The accurate ones will all be alike, while the erroneous ones will all differ from each other and from the accurate ones.
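Something along those lines is easy to prototype. A sketch only, assuming some transcribe(path) wrapper (for example around Whisper) already exists, that keeps a transcript only when most runs agree on it verbatim:

    from collections import Counter

    def consensus_transcript(path, transcribe, runs=5):
        texts = [transcribe(path) for _ in range(runs)]
        best, votes = Counter(texts).most_common(1)[0]
        # Accept only if a majority of runs produced the identical text;
        # otherwise hand the clip to a human instead of guessing.
        return best if votes > runs // 2 else None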
Re: (Score:2)
By their words ye shall know them (Score:2)
"Those experts said some of the invented text -- known in the industry as hallucinations -- can include racial commentary, violent rhetoric and even imagined medical treatments".
Presumably in decreasing order of importance?
We have good transcription tools (Score:2)
Because it operates so differently, Generative AI might make a good automated QA tool -- but not a primary transcriber.
Re: (Score:2)
Because AI is flavor-of-the-week cool.
surprise that it exhibits human like behaviour? (Score:2)
commit crimes
need sleep
probably watch porn
Kurzweil said similar in his books... I paraphrase, but basically he said we will use ourselves, our minds, our brains, as the blueprint for Artificial Intelligence becauuuuse it's the best and perhaps only model we know. So I'll just stand on the shoulders of giants and take an obvious leap: these models will exhibit all the foibles of humanity. We have "hallucinations" now... crimes, lies, abusive behaviour, self aggrandizement can't be far behind.
Re: (Score:2)
"...So I'll just stand on the shoulders of giants ..."
That is not what you're doing, you just can't tell the difference.
" We have "hallucinations" now... crimes, lies, abusive behaviour, self aggrandizement can't be far behind."
No. "Hallucinations" are precisely how neural networks work, it's not an accident, and it doesn't predict other undesirable behaviors.
Re: (Score:2)
Also, love him or hate him, Kurzweil has cast a long shadow on modern AI, rather longer than yours... or mine.
Looking forward to the first documented AI crime. You heard it first here.
Now explain to the class how "Hallucinations are precisely how neural networks work".
I'll make coffee. Also I'll leave now, talk directly to the class.
Data Safety? (Score:2)
Nabla's tool erases the original audio for "data safety reasons,"
What possible "data safety reason" could there be in having the audio of a transcript around?
This isn't like Biden's interview tape, when the umms, stutters, and pauses would all be endlessly analyzed for competitive reasons.
No, the only "data safety reason" they have is that it's much safer for them not to have that data so that they can't be found *wrong*.
There's no way to go back and re-analyze the audio, so they never have to admit to a mistake.
"data safety reasons" (Score:2)
bad enough.. (Score:2)
ChatGPT is equally guilty .. (Score:2)
Re:Human Mistakes (Score:5, Insightful)
Humans can be held accountable. If you had a medical scribe who wrote "Patient has testicular cancer requires immediate amputation" for no goddamn reason on a transcript, they'd be liable for medical malpractice.
If a computer does it, "oops, bug, no one's fault really, but your balls are in this nice jar here"
This is one of the Big Risks of the current crop of AI horseshit, that a (unjustifiable) decision can be made without anyone being "to blame"
Re: (Score:3)
Maybe we should make companies responsible for the software they put out.
Re: (Score:2)
Humans can be held accountable. If you had a medical scribe who wrote "Patient has testicular cancer requires immediate amputation" for no goddamn reason on a transcript, they'd be liable for medical malpractice.
If a computer does it, "oops, bug, no one's fault really, but your balls are in this nice jar here"
This is one of the Big Risks of the current crop of AI horseshit, that a (unjustifiable) decision can be made without anyone being "to blame"
I expect, if the AI hype remains within the corporate circles, we're going to see a whole lot of, "Oops. The AI did it. Nothing to be done. Next," throughout the world. It's insane how much people have bought into the hype without one moment of critical thought or, you know, experimentation to see whether it's true or not. Full steam ahead and damn the torpedoes. We got pennies to save and people to get off the payroll!
Re: (Score:2)
Don't doctors still need to own their notes?
I know if a lawyer was using AI, they're still responsible for the output if they sign it (and courts have not taken favorably to lawyers not reviewing the output when it's nonsense).
I would think a doctor is similarly responsible for whatever ends up as the final submission to a patient's file.
Re: (Score:2)
Re: (Score:3)
I put Alexa through a similar but simpler loop a while ago.
Alexa, where am I?
You are in Lancaster.
Alexa, where is Lancaster?
Lancaster is a medium sized city in the UK.
Alexa, am I in the UK?
.... no.
Re: (Score:3)