Can the AI Driving ChatGPT Help To Detect Early Signs of Alzheimer's Disease?

The artificial intelligence algorithms behind the chatbot program ChatGPT -- which has drawn attention for its ability to generate humanlike written responses to some of the most creative queries -- might one day be able to help doctors detect Alzheimer's Disease in its early stages. EurekaAlert: Research from Drexel University's School of Biomedical Engineering, Science and Health Systems recently demonstrated that OpenAI's GPT-3 program can identify clues in spontaneous speech that predict the early stages of dementia with 80% accuracy. Reported in the journal PLOS Digital Health, the Drexel study is the latest in a series of efforts to show the effectiveness of natural language processing programs for early prediction of Alzheimer's -- leveraging current research suggesting that language impairment can be an early indicator of neurodegenerative disorders.
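Under the hood, the approach is less exotic than "ChatGPT detects dementia" might suggest: each speech transcript is turned into a GPT-3 text embedding, and an ordinary classifier is trained on top of those vectors. Below is a minimal sketch of that idea, assuming the 2022-era `openai` Python client and scikit-learn; the transcripts, labels, and embedding model name are illustrative placeholders, not the paper's actual setup.

```python
# Sketch: dementia-vs-control classification on GPT-3 transcript embeddings.
# Assumes the 2022-era `openai` client and scikit-learn; the transcripts,
# labels, and embedding model below are illustrative placeholders.
import numpy as np
import openai
from sklearn.linear_model import LogisticRegression

openai.api_key = "sk-..."  # your API key

def embed(text: str) -> list:
    # One fixed-length vector per transcript of spontaneous speech.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

transcripts = [
    "well the boy is uh taking the the cookie from the jar...",  # hypothetical
    "the boy is taking a cookie while the stool tips over",      # hypothetical
]
labels = [1, 0]  # 1 = early dementia, 0 = healthy control (hypothetical)

X = np.array([embed(t) for t in transcripts])
y = np.array(labels)

# A plain linear classifier over the embeddings; with a real labeled corpus
# you would cross-validate here, which is where a figure like ~80% comes from.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))
```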

The current practice for diagnosing Alzheimer's Disease typically involves a medical history review and a lengthy set of physical and neurological evaluations and tests. While there is still no cure for the disease, spotting it early can give patients more options for therapeutics and support. Because language impairment is a symptom in 60-80% of dementia patients, researchers have been focusing on programs that can pick up on subtle clues -- such as hesitation, grammar and pronunciation mistakes, and forgetting the meaning of words -- as a quick test that could indicate whether a patient should undergo a full examination.
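Those "subtle clues" are concrete enough that even a crude script can count some of them. Purely as an illustration -- the filler list and features below are made up for the example, and real screening tools use far richer acoustic and linguistic models:

```python
# Toy sketch: counting a few surface cues associated with language
# impairment in a transcript. Word lists and features are illustrative,
# not clinically validated.
import re

FILLERS = {"uh", "um", "er", "erm"}

def speech_features(transcript: str) -> dict:
    words = re.findall(r"[a-z']+", transcript.lower())
    fillers = sum(1 for w in words if w in FILLERS)
    pauses = transcript.count("...")  # long hesitations, if annotated
    repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
    return {
        "filler_rate": fillers / max(len(words), 1),
        "pause_count": pauses,
        "immediate_repetitions": repeats,
    }

print(speech_features("well... the boy is uh uh taking the the cookie..."))
```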

GPT-3, officially the third generation of OpenAI's Generative Pre-trained Transformer (GPT), uses a deep learning algorithm trained by processing vast swaths of text from the internet, with a particular focus on how words are used and how language is constructed. This training allows it to produce humanlike responses to tasks that involve language, from answering simple questions to writing poems or essays. GPT-3 is particularly good at "zero-shot" learning -- meaning it can respond to requests that would normally require external knowledge that has not been provided. For example, asking the program to write "Cliff's Notes" for a text would normally require an explanation that this means a summary. But GPT-3 has gone through enough training to understand the reference and produce the expected response.
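That "Cliff's Notes" behavior is easy to reproduce against the API. A minimal sketch using the completions endpoint as it existed in late 2022 -- the model name, sampling parameters, and input text are just plausible defaults; note that no definition of "Cliff's Notes" is supplied anywhere in the prompt:

```python
# Sketch: zero-shot prompting with the 2022-era OpenAI completions API.
import openai

openai.api_key = "sk-..."  # your API key

prompt = (
    "Write the Cliff's Notes of the following text:\n\n"
    "Call me Ishmael. Some years ago, never mind how long precisely...\n"
)

# No examples of "Cliff's Notes" are supplied; the model must infer from
# pretraining that the request means "produce a short summary".
resp = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=150,
    temperature=0.3,
)
print(resp["choices"][0]["text"].strip())
```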
Link to the paper.
Comments:
  • But after giving them my email address, I encountered a mandatory demand for a verifiable cell number. Fuck that. What makes this shadier is the fact that they demand the number, without warning, after they collect the email. I bailed, but I'm sure I'm on a list now somewhere.

    • I take it that you don't, for one, welcome our new AI overlords?
    • But after giving them my email address, I encountered a mandatory demand for a verifiable cell number. Fuck that. What makes this shadier is the fact that they demand the number, without warning, after they collect the email. I bailed, but I'm sure I'm on a list now somewhere.

      >> why ChatGPT verifies my cell number?

      It is possible that ChatGPT, a chatbot platform developed by OpenAI, requires users to verify their cell phone numbers in order to create an account or access certain features. This is a common practice in order to help ensure the security and authenticity of user accounts and to prevent fraud or abuse. It is also sometimes used as a way to verify that you are a real person and not a bot or automated program. By verifying your cell phone number, ChatGPT may be able to ensure that only genuine users have access to the platform and can help to protect the integrity of the service.

      >> does ChatGPT collect cell numbers?

      It is possible that ChatGPT collects cell phone numbers from users who provide them in order to create an account or access certain features on the platform. This is a common practice for many online services in order to help ensure the security and authenticity of user accounts and to prevent fraud or abuse. If ChatGPT collects cell phone numbers, it would likely be used for the purposes of verifying the identity of users and helping to ensure the security and integrity of the platform. It is important to note that the specific data collection and usage policies of ChatGPT are determined by the company and may vary. If you have any concerns about how your personal information is being used, you should review ChatGPT's privacy policy or contact the company directly for more information.

    • Here's a trick: go to bugmenot and try one of the accounts there for openai.com or chat.openai.com.

      They're usually very overloaded - lots of people chatting all at once hits the rate limit quickly - but if you're persistent, it will give you an idea of what ChatGPT is like.
    • by ranton ( 36917 )

      But after giving them my email address, I encountered a mandatory demand for a verifiable cell number. Fuck that. What makes this shadier is the fact that they demand the number, without warning, after they collect the email. I bailed, but I'm sure I'm on a list now somewhere.

      It is just MFA, and it is very common. I have written a number of PoCs in the past that included authentication using Auth0 or Cognito or whatever, and they all had MFA on by default. My first functioning test of the software was already asking for a phone number to set up MFA. This is just how authentication works today for most services.

      It doesn't make much sense to write your own authentication or authentication capabilities today, and all the major vendors highly recommend MFA. Phone texts are sti
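For readers who haven't built on these stacks: the poster's point is that hosted identity providers make SMS-based MFA nearly a default, often a single API call, which is why a phone-number prompt shows up so early in so many products. A sketch of what that looks like against AWS Cognito with boto3 -- the access token, region, and phone number are placeholders:

```python
# Sketch: enabling SMS MFA for a signed-in user in AWS Cognito via boto3.
# ACCESS_TOKEN, region, and the phone number are placeholders; error
# handling is omitted.
import boto3

client = boto3.client("cognito-idp", region_name="us-east-1")
ACCESS_TOKEN = "eyJ..."  # token from the user's sign-in, placeholder

# Attach a phone number to the account (Cognito sends a verification code).
client.update_user_attributes(
    AccessToken=ACCESS_TOKEN,
    UserAttributes=[{"Name": "phone_number", "Value": "+15555550100"}],
)

# Turn SMS MFA on as the preferred second factor -- one call, which is why
# "phone number required" appears so early in products built on these stacks.
client.set_user_mfa_preference(
    AccessToken=ACCESS_TOKEN,
    SMSMfaSettings={"Enabled": True, "PreferredMfa": True},
)
```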

      • You make assumptions here, and rather Pollyanna-ish ones. I was not presented with a privacy policy. That leaves the door wide open. "Do what thou wilt shall be the whole of the law."

    • I bailed, but I'm sure I'm on a list now somewhere.

      I am waiting for the moment when it will be able to browse the Internet. It is not very useful to me right now. When you ask "What's in the news today?" it just replies that it cannot browse the net and that its data are from 2021.

  • by EvilSS ( 557649 ) on Wednesday December 28, 2022 @11:37AM (#63163796)
    Wouldn't this blood test be the easier solution: https://science.slashdot.org/s... [slashdot.org]
    • Probably a hell of a lot more accurate and valid than some AI model making predictions from speech.
    • A physical test versus something that can be done online from your couch? This would certainly be the cheaper solution (which is the only meaningful solution in the US healthcare system).
  • Can the AI Driving ChatGPT Help To Detect Early Signs of Alzheimer's Disease?

    Maybe. It can certainly help these researchers obtain exposure through science news channels because AI excels at generating hype.

  • Too few will trust a chatbot that sends medical data to doctors (or claimed doctors). What often happens is that intermediate contractor(s) keep copies of the data for marketing or for selling on the midnight market. Yes, it's often illegal, but cheaters find a way to cheat if there's big money in it.

  • ChatGPT is downright useless. Any query that can in any way be construed as "controversial" returns no data at all, just a talking-point blurb about how x and y aren't relevant, and how even if x is true it is caused by socio-economic variables. So even if civilization bit the dust and this somehow survived, or if some future (or even present) researcher wanted to get a feel for anything at all about our current or past times, it wouldn't return anything useful.

    • https://bracingviews.com/2022/... [bracingviews.com]
      My task to the AI bot: “Write a critical essay on the military-industrial complex.”

      Here’s what it generated:

      The military-industrial complex refers to the relationship between a country’s military and the defense industry that supplies it, seen together as a vested interest which influences public policy. This relationship has been a source of concern for many people, who argue that it can lead to an unnecessary expansion o

  • by fbobraga ( 1612783 ) on Wednesday December 28, 2022 @12:43PM (#63163950) Homepage
    Next question
  • The AI driving ChatGPT can't detect Alzheimer's, but there is evidence that an AI could be taught to do so, and such an AI could, at least in principle, work in tandem with ChatGPT, so that ChatGPT could run the conversation while the other one checks for signs.

    I'm not really sure this would work since, IIRC, the article saying that an AI could detect Alzheimer's said that part of what it used was the timing of responses. But it might well work (for all I know).

  • Probably not (Score:5, Interesting)

    by gweihir ( 88907 ) on Wednesday December 28, 2022 @01:24PM (#63164056)

    Artificial Ignorance in the medical field has so far been mostly a disaster. For example, IBM Watson did a bit better at treatment plans than human doctors. The only problem was that it occasionally killed a patient, and did so far more often than a human MD would have. Overall: unusable. The project was scrapped.

    The thing is, while humans struggle with unforeseen situations, AI is completely overwhelmed by them and, worse, does not detect that it is overwhelmed. Hence, where a human takes time to think or gets a second opinion, AI just does something catastrophic. For the example at hand, I expect it will occasionally overlook clear signs and hence not be fit for purpose.
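The failure mode described here -- confidently acting instead of deferring -- is what abstention ("reject option") mechanisms try to mitigate: below some confidence threshold the model refuses to decide and flags the case for a human. A toy sketch, with an arbitrary threshold chosen purely for illustration:

```python
# Toy sketch: a classifier that refuses to answer when it isn't confident,
# deferring to a human instead of "just doing something". The threshold is
# illustrative; choosing it well is its own research problem.
import numpy as np

def predict_or_defer(probs: np.ndarray, threshold: float = 0.9):
    """probs: per-class probabilities for one case."""
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None  # defer: flag the case for a human MD
    return best

print(predict_or_defer(np.array([0.55, 0.45])))  # None -> human review
print(predict_or_defer(np.array([0.97, 0.03])))  # 0 -> automated decision
```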

    • A.I. should be a tool used to spot patterns in complex sets of data and then hand that information to humans (professionals), so they can use it correctly alongside all the other information that's already been gathered.

      Blindly relying on so-called A.I. to make decisions on its own is just madness at this point in time, because it does not think. We should be calling it by its real name, which is P.R., i.e. pattern recognition.

      • by gweihir ( 88907 )

        Indeed. The only reasons these things are called "intelligent" is intentional misdirection for marketing efforts.

    • by ebunga ( 95613 )

      I'm reminded of the problem with expert systems in the first wave of AI... when it came to things like medical diagnosis, it required a knowledgeable doctor to input the correct information and a knowledgeable doctor to interpret the results. In other words, it just wasted everyone's time. Sadly, that's been the case for much of IT for over 50 years. At the end of the day, we're just really efficient at wasting time.

      • by gweihir ( 88907 )

        Indeed. A while back I read about a medieval culture that only had to work two days (!) per week to do everything that was needed. That was it. Seems we have taken a huge step backwards there and, given massively increased productivity, it can really only be explained by a gigantic amount of waste.

        The specific problem of the AI community seems to be huge egos, not a lot of skill, and a deep desire to somehow be perceived as heroes. These people have been massively overpromising and massively underdelivering.

  • But could the A.I. driving ChatGPT help to detect early signs of Alzheimer's disease?

    • GPT-3? I don't know much about it, but from my understanding you point it at the training sources. ChatGPT is an implementation of GPT-3 and apparently has been fed a lot of data to make it useful. Anyone who gets approval could make their own GPT-3 implementation. You could have a Conservative ChatGPT (the current one apparently has a liberal bias).
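A nitpick on "point it at the training sources": base GPT-3 is pretrained by OpenAI, but at the time the API did let approved users fine-tune it on their own prompt/completion data, which is the closest thing to rolling your own variant. A sketch using the 2022-era fine-tuning endpoints -- the data file and base model are placeholders:

```python
# Sketch: fine-tuning a GPT-3 base model on your own prompt/completion
# pairs with the 2022-era OpenAI API. "my_data.jsonl" is a placeholder
# JSONL file of {"prompt": ..., "completion": ...} records.
import openai

openai.api_key = "sk-..."  # your API key

uploaded = openai.File.create(
    file=open("my_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tune of the "davinci" base model on the uploaded data;
# the result is your own GPT-3 variant, served under a new model name.
job = openai.FineTune.create(training_file=uploaded["id"], model="davinci")
print(job["id"])
```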

  • As a large language model trained by OpenAI, I do not have the ability to detect anything because I do not have any physical inputs. Signs of alzheimers may include cooking pork chops at 2am, and having alzheimers. 2+2 cannot equal 5 because 2 is a variable that has been assigned the value 2.5 and normally a variable holds a value, therefore, 2+2=7. I do not have a name. Please call me Steve.

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...