Leaked Training Shows Doctors In New York's Biggest Hospital System Using AI (404media.co) 34
Slashdot reader samleecole shared this report from 404 Media:
Northwell Health, New York State's largest healthcare provider, recently launched a large language model tool that it is encouraging doctors and clinicians to use for translation and for handling sensitive patient data, and it has suggested the tool can be used for diagnostic purposes, 404 Media has learned. Northwell Health has more than 85,000 employees.
An internal presentation and employee chats obtained by 404 Media show how healthcare professionals are using LLMs and chatbots to edit writing, make hiring decisions, do administrative tasks, and handle patient data. In the presentation, given in August, Rebecca Kaul, senior vice president and chief of digital innovation and transformation at Northwell, along with a senior engineer, discussed the launch of the tool, called AI Hub, and gave a demonstration of how clinicians and researchers—or anyone with a Northwell email address—can use it... AI Hub can be used for "clinical or clinical adjacent" tasks, as well as for answering questions about hospital policies and billing, writing job descriptions, editing writing, and summarizing electronic medical record excerpts, and it accepts input of patients' personally identifying and protected health information.
The demonstration also showed potential capabilities that included "detect pancreas cancer" and "parse HL7," a health data standard used to share electronic health records.
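For the curious, "parse HL7" is less exotic than it sounds: HL7 v2 messages are carriage-return-separated segments with pipe-delimited fields. Here is a minimal sketch of what parsing one involves, in Python, using a fabricated sample message (nothing below is from Northwell's actual tool):

    # Minimal HL7 v2 parsing sketch. The sample message is made up
    # for illustration; real messages carry PHI and many more segments.
    SAMPLE = ("MSH|^~\\&|LAB|HOSP|EHR|HOSP|202408011200||ORU^R01|123|P|2.5\r"
              "PID|1||555123^^^HOSP||DOE^JANE")

    def parse_hl7(message: str) -> dict:
        """Split an HL7 v2 message into {segment_id: [field lists]}."""
        segments: dict = {}
        for seg in message.split("\r"):      # segments end with a carriage return
            fields = seg.split("|")          # fields are pipe-delimited
            segments.setdefault(fields[0], []).append(fields)
        return segments

    parsed = parse_hl7(SAMPLE)
    print(parsed["MSH"][0][8])  # MSH-9, the message type: ORU^R01
    print(parsed["PID"][0][5])  # PID-5, the patient name: DOE^JANE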
The leaked presentation shows that hospitals are increasingly using AI and LLMs to streamline administrative tasks, and that some are experimenting with, or at least considering, how LLMs could be used in clinical settings or in interactions with patients.
That is not good... (Score:2)
Essentially enshittification of the medical system. Well, to whatever extent that hasn't already happened in the US.
Re: (Score:3)
Is it a potential source of error? Sure. But it's not like the current system has a stellar record when it comes to medical errors.
Re: (Score:2)
How you train it really matters. A good example is a past attempt where researchers realized the model hadn't learned to identify skin cancer at all, but had instead learned to identify the rulers that appeared in photos of malignant lesions.
https://www.sciencedirect.com/... [sciencedirect.com]
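The standard sanity check for that kind of shortcut learning is an occlusion test: hide the suspect feature and see whether the prediction collapses. A toy sketch, where the "model" is a hypothetical stand-in that keys on a bright ruler-like stripe, not a real dermatology classifier:

    import numpy as np

    def model_score(image: np.ndarray) -> float:
        # Hypothetical stand-in for a trained classifier: it keys on a
        # bright left-edge stripe, the way the dermatology models keyed
        # on rulers in photos of malignant lesions.
        return float(image[:, :10].mean())

    rng = np.random.default_rng(0)
    lesion = rng.random((64, 64))            # fake lesion image
    with_ruler = lesion.copy()
    with_ruler[:, :10] = 1.0                 # simulate a ruler artifact

    occluded = with_ruler.copy()
    occluded[:, :10] = lesion[:, :10]        # occlude (remove) the ruler

    print(model_score(with_ruler))           # high score with the ruler present
    print(model_score(occluded))             # score collapses: the model
                                             # learned the ruler, not the lesion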
Re: (Score:2)
And who vets the training, and how? Is it going to rely on ancient MySpace tombstones?
As we enjoy this amusing tale, many hundreds of thousands of poisoned pages are purposefully trying to make AI do things that are unintended, and it's working. And it's only just begun.
Checking grammar is one thing; medicine uses a long vocabulary of strange names and interactions. Procedures, however, are something else entirely.
Depending on AI is like depending on your drunk brother to drive you home in a hurricane.
Re: (Score:2)
If the system shaves 60 minutes of chart reading off their day, and they then need to spend 10 minutes double-checking the LLM for errors, that's still 50 minutes freed up for other work.
Observations from coding suggest it may be the other way around. Unless you're fine with it failing even more often for people with rarer symptoms or issues.
Re: (Score:2)
something goes wrong?
The insurance company.
Re: (Score:3)
something goes wrong?
The insurance company.
I am sure the insurance company is thrilled about this plan
Re: Who's responsible when (Score:2)
Medical malpractice insurance companies will find this a boon to their business, I'm sure. Those doctors using AI should watch out for the increased premiums.
Re: (Score:2)
Since it was software, no one. That’s the beauty!
The doctor, obviously (Score:4, Interesting)
something goes wrong?
The doctor, obviously.
I don't think "The AI made a different diagnosis" would be a legitimate excuse to use in court.
As it happens, I'm in the middle of looking at all the AI offerings for writing. This is off-topic from the article, but I note that some of these are pretty good. There are websites full of AI-generated stories, and some of *those* are pretty good as well. The first one I selected (randomly) was sort of a modern retake of "Magic Incorporated".
Go check out some of the features here [squibler.io], or here [sudowrite.com]. They're quite impressive.
I've been using AI as a tool to help with writing instead of a crutch to write for me. 'Sorta like I use Roget's, when I know the meaning I want but am stumped on the word, or want to avoid using the same word twice in close proximity. Just today I typed 'Give me a few words, single words only, that can be used in fictional writing to describe getting tackled to the ground. Words such as "pow"', and ChatGPT helpfully gave me a list of useful alternatives I hadn't thought of.
To go back on topic, doctors *should* be using AI... but as a tool and not as a doctor. Medicine is a very wide field, the doctor's knowledge is dated (from the year he graduated), and the AI might consider alternatives that he doesn't know about.
But the ultimate decision should rest with the doctor, any AI suggestion should be independently confirmed before acting on it, and so on.
AI should be used as a tool, and not a replacement for a real human.
(And of course, anyone who uses the AI to do their job for them is an abject fool, and we'll probably see a couple of these in flashy news stories before the government stomps on it with regulation.)
Re: (Score:2)
I just tried squibler.io - Pretty interesting. Here's an outline of a story I call "Wombat Blues".
Different Fur
The story introduces a young wombat with unique fur, making them stand out from the other wombats. They feel a growing sense of isolation as they are ostracized by their peers. The story establishes the wombat's loneliness and their longing for connection.
The Lonely Burrow
The wombat spends their days alone, retreating to their burrow where they contemplate their difference and the sadness it brings
Re: Who's responsible when (Score:2)
Certification? (Score:3, Interesting)
Re: (Score:2)
That is a ridiculous assertion that assumes 'the AI' will have perfect awareness of relevancy... which is itself an anthropomorphic attribution.
Not to mention how thoroughly AIs have been shown to make shit up that is in no way connected to the training data.
Re: (Score:2)
If the data that the AI was trained on is from certified individuals or from certification programs, the AI is a priori certified.
Nay, nay and thrice nay.
Lawsuit waiting to happen (Score:2)
Hope their hospital lawyers have their Depends strapped on.
You don't need "leaked training" to know about thi (Score:5, Interesting)
There are TONS of healthcare companies, products, etc. that are very publicly and explicitly using AI in healthcare settings. Just Google it, for god's sake. It's not a secret. Did you think nobody would buy it? That nobody would use it? Are you nuts?! Of course they will.
And it's not all as bad as you might imagine. They're not just strapping ChatGPT to a prescription pad. There are a lot of well-thought-out products that already had tons of useful functionality baked in. A lot of the time vendors are just adding some AI bullshit around an existing product so they can sell it.
You literally can't sell any product today unless the word "AI" is in it. The first thing the customer's going to do is ask you "But does it have AI? Your competitor has AI. Does that mean your product isn't as good?"
Please don't let the news media freak you out with their clickbait articles. Yeah, AI is bullshit. But it's not always as bad as it sounds.
Re: You don't need "leaked training" to know about (Score:1)
"detect pancreas cancer" and "parse HL7" (Score:2)
Considering that HL7 is in itself a form of metastasized cancer this shouldn't come as a surprise...
Not really new (Score:5, Informative)
Re: (Score:1)
Re: (Score:2)
A cardiologist friend of mine has complained for years about the automated system his hospital (a famous teaching institute) uses to read EKGs. It could read ordinary ones quite well but was unreliable compared to a trained human on unusual cases. I suspect that these AI systems will have the same problem. It is difficult to train such a system to recognize rare cases because they will be poorly represented in the data set. The basic problem is that people tend to have an exaggerated faith in automatic systems and are generally too overworked or lazy to double-check them.
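To see why the rare cases get lost, here's a toy example with made-up numbers: a model that calls every EKG "normal" scores 95% accuracy while catching zero rare arrhythmias, which is exactly the trap aggregate metrics set.

    from collections import Counter

    # Made-up labels: a classifier that always guesses the common class
    # looks great in aggregate and is useless on the rare rhythms.
    truth = ["normal"] * 95 + ["rare_arrhythmia"] * 5
    preds = ["normal"] * 100

    accuracy = sum(t == p for t, p in zip(truth, preds)) / len(truth)
    print(f"accuracy: {accuracy:.0%}")       # 95% -- looks fine

    for cls in Counter(truth):               # per-class recall tells the truth
        hits = sum(t == p == cls for t, p in zip(truth, preds))
        print(f"recall({cls}): {hits}/{truth.count(cls)}")
    # recall(normal): 95/95, recall(rare_arrhythmia): 0/5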
hmmm. You raise a good point about limitations in specialty cases, but I wonder if we’re approaching a tipping point for AIs in routine settings. An ophthalmologist and an oral surgeon friend of mine, both of whom run their own practices, asked me decades ago when they’d be able to hand off the routine tasks of diagnosing and prescribing—corrective lenses for one, dental fixtures for the other—to an expert system. At the time, back in the late ’90s and early 2000s, my answer as
Is AI still banned in New York schools? (Score:1)
Do schools ban cell phones because teachers are stupider, less entertaining, and more sarcastic than just chatting with an AI about whatever piques your curiosity?
Doing the drudge work (Score:3)
"answering questions about hospital policies and billing, writing job descriptions and editing writing, and summarizing electronic medical record excerpts and inputting patients' personally identifying and protected health information"
Relatively simple jobs that the machine is likely to do reasonably well. Probably it can help with some diagnoses. Seems innocuous to me.
Let's not confuse AI with AI (Score:1)
AIs that are specialized in detecting cancer should be used to detect cancer. They are very good at it.
AIs that aren't specialized in a particular task tend to cause problems. That's where we need to be very careful.
Just don't confuse the two.
Oh great (Score:1)
The fact that these tools are already used in the field, and there hasn't been a massive spike in preventable deaths in hospitals, is a testament to the resilience of the human body.