The Anti-ChatGPT Appears? Researchers Fight Back With 'DetectGPT' (neowin.net) 59

To detect AI-generated text, Stanford researchers are proposing a new methodology "that leverages the unique characteristics of text generated by large language models (LLMs)," reports the tech-news site Neowin: "DetectGPT" is based around the idea that text generated by LLMs typically occupies regions of negative curvature of the model's log probability function.... This method, called "zero-shot", allows DetectGPT to detect machine-written text without any knowledge of the AI that was used to generate it....
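In rough terms, the test compares a passage's log probability with the log probabilities of lightly perturbed rewrites of it: if the rewrites score noticeably worse, the passage probably came from the model. Below is a minimal sketch of that idea in Python, assuming the Hugging Face transformers library and GPT-2 as the scoring model; the function names are illustrative, and the perturbation step (the paper fills masked spans with T5) and the decision threshold are omitted, so this is an assumption-laden illustration rather than the authors' code.

```python
# Minimal sketch of DetectGPT's perturbation-discrepancy idea (illustrative only).
# Intuition: machine-generated text sits near a local peak of the scoring model's
# log probability, so small rewrites lower its log probability more than they
# would for human-written text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                  # scoring model (assumption)
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def log_prob(text: str) -> float:
    """Average per-token log probability of `text` under the scoring model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss                   # mean negative log-likelihood
    return -loss.item()

def perturbation_discrepancy(text: str, perturbations: list[str]) -> float:
    """Original log probability minus the mean log probability of perturbed rewrites.
    A large positive value suggests machine-generated text; the paper generates the
    rewrites with T5 mask-filling and normalizes the score before thresholding."""
    mean_perturbed = sum(log_prob(p) for p in perturbations) / len(perturbations)
    return log_prob(text) - mean_perturbed
```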

As the use of LLMs continues to grow, the importance of corresponding systems for detecting machine-generated text will become increasingly critical. DetectGPT is a promising approach that could have a significant impact in many areas, and its further development could be beneficial for many fields.

The article also includes its obligatory amazing story about the current powers of ChatGPT. "I asked it how to build an obscure piece of Linux software against a modern kernel, and it told me how. It even generated code blocks with the bash commands needed to complete the task."

Then to test something crazier, Neowin asked ChatGPT to generate "a fictional resume for Hulk Hogan where he has no previous IT experience but wants to transition into a role as an Azure Cloud Engineer."

"It did that, too."

Thanks to Slashdot reader segaboy81 for sharing the story.
This discussion has been archived. No new comments can be posted.

  • by ffkom ( 3519199 ) on Sunday January 29, 2023 @04:43PM (#63249429)
    Following the link to what should be the original source, there is only an abstract making some claims.

    Also, I don't see any mention of a false positive rate. Even if DetectGPT did yield a positive test on 95% of all machine-generated texts, that would be no achievement if its algorithm also wrongly accused many manually written texts of being authored by GPT (a quick base-rate calculation after this thread shows why).
    • Following the link to what should be the original source, there is only an abstract making some claims.

      Also, I don't see any mention of a false positive rate. Even if DetectGPT did yield a positive test on 95% of all machine-generated texts, that would be no achievement if its algorithm also wrongly accused many manually written texts of being authored by GPT.

      Furthermore, note that humans learn by imitation. Kids who grow up using ChatGPT as part of their learning process (answering questions, summarizing things the kids want to know about, &c) will tend to learn the patterns that ChatGPT uses.

      And in any event you know what will happen: big online systems such as social media will use this as an automated method to ban users, it's 95% accurate, and the unfairly banned users can just go through the adjudication process to get their accounts unbanned, because

      • Learning from ChatGPT is like finding random people and asking them questions about random fields of knowledge. You can't expect more from something that learned from random people about random fields of knowledge.
    • I think all these things might end up a bit like polygraphs. Alone, not exactly evidence you ought to be allowed to "convict" with, but as an investigative tool, an extra data point. The problem is if it's the *only* investigative tool.

      However my suspicion is that if you apply a bit of Foucault to the scenario you can see a more effective use. Just *telling* people that you're using an advanced algorithm that can detect it means that people know there's a high probability they will get caught. At that point you don't

    • Well, a false positive is a signal that your text sounds like it was written by an AI. Probably low on facts, high on filler words and phrases. Probably like a clickbait article that won't tell you until the last paragraph what it is actually about. So even if your text wasn't written by an AI, we probably won't need to worry that we just rejected the next Pulitzer nominee.

    • Texts that are high in filler phrases and words, and low on actual factual data... just auto-fail all the marketing and advertising students...
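On the false-positive concern raised at the top of this thread, a quick base-rate calculation makes the worry concrete. The numbers here are hypothetical (95% detection, a 5% false-positive rate, and an assumed 10% of submissions actually AI-written), not figures from the paper:

```python
# Hypothetical base-rate check: how often is a flagged text actually AI-written?
def flagged_precision(detection_rate: float, false_positive_rate: float, ai_share: float) -> float:
    """P(AI-written | flagged) via Bayes' rule, for an assumed share of AI-written texts."""
    flagged_ai = detection_rate * ai_share
    flagged_human = false_positive_rate * (1.0 - ai_share)
    return flagged_ai / (flagged_ai + flagged_human)

# 95% detection, 5% false positives, 10% of texts AI-written:
print(flagged_precision(0.95, 0.05, 0.10))   # ~0.68 -- roughly one flag in three hits a human author
```

In other words, even a detector that catches 95% of machine-generated text can mislabel a large share of the texts it flags when most submissions are written by people.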

  • Furbies were toys in the late 1990s that would learn some words from your language and reproduce them.

    Some people feared the toy could be used to collect intelligence, as it seemed to have the ability to learn language.

    ChatGPT is a handy tool, but it will also lie to you. Give you false information or in the wrong context. If I were to ask it to give me an essay on Abraham Lincoln, I would have to check sources and do research anyway to prove its validity. If I were to say it would be an essay.

    Sure kid

    • by Dwedit ( 232252 )

      Furby did not learn language and reproduce it, any more than a Speak and Spell could read words you typed in.

      Both were static dictionaries of words.

        • This is just like the “no one is going to just have a calculator on them, so fast hand computation is a necessary skill” I got growing up in the '80s. Now everyone has a supercomputer in their pocket and only a tiny fraction of a sliver is even needed for hand calculations; knowing the base principles and how to apply them to real-life problems/situations is what's important. ChatGPT takes away the boring work of creating a rough outline and can offer helpful suggestions but is terrible a
        • by Anonymous Coward

          Now everyone has a supercomputer in their pocket

          And when they have to perform some trivial calculation, they don't even know how to use the "calculator" app on their supercomputer. In fact, most of them don't even fathom that everyday problems, like how many floor tiles they need for their bathroom, could be solved with a couple of simple calculations instead of a Google search.

          knowing the base principles and how to apply them to real-life problems/situations is what's important.

          keep telling yourself tha

    • ChatGPT is a handy tool, but it will also lie to you. Give you false information or in the wrong context.

      ChatGPT was originally an unbiased tool, but the authors have modified it - explicitly, and based on rules they enter and not data scraped from the net - so that certain topics cannot be discussed.

      The obvious topic that can't be discussed is Nazis. It's important to know both sides of any argument, so showing the positive points of fascism would allow reasoned debate on the issue. It would allow debaters to form arguments against the positive points ahead of time.

      ChatGPT no longer will give the positive poi

      • For those interested, this tweet [twitter.com] shows the results about fascism from before the change. It's well written and informative.

        This tweet [twitter.com] shows the more recent results, after the specific changes to ChatGPT were made.

      • by nagora ( 177841 )

        ChatGPT was originally an unbiased tool, but the authors have modified it - explicitly, and based on rules they enter and not data scraped from the net - so that certain topics cannot be discussed.

        Do you have a citation for that? I was saying this to a colleague the other day but I can't put my finger on where I originally heard it.

        • I wouldn't say that ChatGPT was ever unbiased, there's no such thing — the training corpus is biased in a variety of ways, for example. And you don't want to be free from bias, because for example you want to be biased in favor of verifiable facts.

          Who was allowed to even talk to ChatGPT was carefully constrained in the early days. Now, unless you are very tricky, it is careful to, for example, oppose violence.

    • Sure, kids may use it to get their homework done; however, I am sure most school teachers would be able to tell if something is not quite right about it.

      Yes, but no one wants to pay teachers. And teachers don't want to offer "I could just see that something wasn't right" as a defense when they get dragged to court by rich parents whose brats didn't make it to Harvard because they only got a "B".

      So for rejecting AI-generated essays, too, we want reproducible, quantifiable, measurable and, of course, automatable processes.

  • But won't human-generated BS fall into the same trap?
    ChatGPT's biggest threat seems to be its potential for generating BS just as well as a frighteningly large portion of humans do, with the potential to put them out of work.

    • Yes, it passed the final exam at Wharton, is incapable of human empathy, makes no rational sense, is prone to being verbose and arcane, and will always pick the best value for shareholders so all CEOs are out of a job. Think of the shareholder dividends!
  • Banning it is as stupid as banning pocket calculators. They should be teaching how to verify and leverage its use.

    • Comment removed based on user account deletion
      • What happens, for example, when Anti-Chat flags something you wrote with no AI help as AI-CREATED? Is there recourse? An Anti-Chat Court of Appeals? And how do you prove it's exclusively your work unless you have a camera rolling filming you writing or creating whatever it was?

        This is simple to address. The best chatbots and other highly complex algorithms will, for beyond the foreseeable future, need training data and vast computational resources that only a few companies have. Good luck running the training and back end of ChatGPT on a portable phone anytime soon, if ever. Declare unauthorized access to anything other than those few systems a felony. Then mandate that a record exists of all algorithm-assisted actions, and you can thus be found guilty or not

      • Both the Empire and the Rebel Alliance condoned slavery ....

    • by HiThere ( 15173 )

      That really depends on what the class is supposed to be teaching.
      Yes, eventually AI should be a classroom tool, but not until you can trust it to be honest...unless you're teaching how to work around that kind of problem.

  • "it told me how" (Score:4, Insightful)

    by iMadeGhostzilla ( 1851560 ) on Sunday January 29, 2023 @05:40PM (#63249499)

    "I asked it how to build an obscure piece of Linux software against a modern kernel, and it told me how. t even generated code blocks with the bash commands needed to complete the task." ...and? Did it work?

    "it generated a fictional resume for Hulk Hogan where he has no previous IT experience but wants to transition into a role as an Azure Cloud Engineer. It did that, too"

    The utility -- and verifiability -- of that result is exactly zero.

    So far the only real utility of ChatGPT I was able to see for myself was asking it to describe when and how to use different phrases in the English language, but even for that I had to watch out for "AI hallucination."

    • by 93 Escort Wagon ( 326346 ) on Sunday January 29, 2023 @06:02PM (#63249539)

      "I asked it how to build an obscure piece of Linux software against a modern kernel, and it told me how. t even generated code blocks with the bash commands needed to complete the task." ...and? Did it work?

      Given it was - almost certainly - just repackaging one or more upvoted responses from Stack Overflow... one would assume it'd work.

      • Given it was - almost certainly - just repackaging one or more upvoted responses from Stack Overflow... one would assume it'd work.

        +1

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Given it was - almost certainly - just repackaging one or more upvoted responses from Stack Overflow... one would assume it would not work.

        There, fixed it for you.

      • by jrumney ( 197329 )

        One would assume that it worked when the answer was written for the exact question that was being asked. Whether it still works, and for the question you are asking, is a different matter.

        People who expect AI to completely replace human decision making are going to be disappointed. People who dismiss or ignore AI are probably going to lose jobs to it. AI is a tool, and in the hands of someone who can properly evaluate the usefulness of its suggestions it will be a big productivity boost.

    • "it generated a fictional resume for Hulk Hogan where he has no previous IT experience but wants to transition into a role as an Azure Cloud Engineer. It did that, too"

      The utility -- and verifiability -- of that result is exactly zero.

      I don’t know, I'd think Mr. Hogan would make for a pretty badass Azure cloud engineer. If ChatGPT could help him, who am I to deny him realizing his dream job?

  • by l810c ( 551591 ) on Sunday January 29, 2023 @06:13PM (#63249559)

    I told my wife about this a few weeks ago. She is a High School Literature/English teacher. She was mildly interested, maybe slightly skeptical. And then last week she had 2 students submit ChatGPT-generated papers. She texted me from school in the middle of the day: OMG, is this the thing you were telling me about a couple of weeks ago? YEP! She spotted it, but was amazed that the 'thing' I was telling her about really happened. The students were disciplined and required to submit their own papers, and the entire school was warned not to try that crap again.

    • How did she know? Was it babble, just wrong, too good, something else?

      • by l810c ( 551591 )

        It just seemed robotic, and the vocabulary was beyond what those students would normally use. My boss at work, whose wife is a teacher, has also experienced this. He said there is another tool, which I can't actually find now, that mildly obfuscates ChatGPT-generated text.

        • by iAmWaySmarterThanYou ( 10095012 ) on Sunday January 29, 2023 @09:27PM (#63249887)

          Interesting. Thanks for reply.

          My one experience was someone using ChatGPT to reply to me on here. It was oddly off topic while still sort of talking around the topic. Definitely not responsive to what I said. I didn't know yet that it was ChatGPT, but I took the reply to mean the other person didn't really understand the technical details of where I was coming from, so I said, "Trying again" and repeated what I said but very dumbed down. That ChatGPT reply made me sort of dog-head-tilt when I first read it.

    • I suspect we are in a very small window where human detection is possible.

      Ultimately I suspect educators are going to have to abandon the long form essay as a tool for measuring understanding, and rely more on testing or (gasp) direct personal evaluation/discussion.

  • There are always people that will try to stop progress because they don't want to adapt to the changing world. I get it. Change can be uncomfortable and scary, especially if you just need a handful of years before you are done with the rat race.

    • Positive progress? Or a crutch for society?

      Progress can go both ways.
    • by jrumney ( 197329 )

      I don't think this is necessarily about stopping progress. There are situations (such as school classwork, academic research papers, etc.) where you need to know whether something is coming from an AI program rather than a human, and this can detect those without getting in the way of genuinely useful applications of the technology.

  • ït älwäys wörks lïkë ä chärm!

  • Seems so obvious it's likely just an apt install away
  • So they can detect a ChatGPT bot, but what about the next 1000 iterations?

  • by dyfet ( 154716 ) on Monday January 30, 2023 @06:48AM (#63250443) Homepage

    Some argue that students using AI is no different than using calculators, but they are not the same...

    The marginal benefit to society of every student being required to manually calculate, say, the square root of 1.579 really approaches 0. The marginal benefit of manually writing essays on their own is near 100%.

    Or, of course, we could become the kind of society that has to steal Spock's brain to keep things running ;)...

    • I don't disagree too much, but you're cherry-picking a bit. Give students calculators, and they'll also use them to calculate 13x12. Do you still think there's no utility in that?

      ChatGPT-style AI synthesis has a lot of good and bad uses, like a calculator. But ChatGPT, if used lazily and without restraints, will dumb us all down far faster than calculators ever did, especially through the delegation of critical thinking.
  • by chas.williams ( 6256556 ) on Monday January 30, 2023 @08:23AM (#63250575)
    Just ask "Was this text written by ChatGPT?"
  • by Glasswire ( 302197 ) on Monday January 30, 2023 @01:12PM (#63251267) Homepage

    If it can't right now, it should be possible to make ChatGPT add a hidden embedded cryptographic signature into the body of the text that would let it be identified as ChatGPT-sourced and identify the person that requested it. Ideally, even fragments of the broken signature left after editing should make it forensically possible to identify ChatGPT text.
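ChatGPT does not embed anything like this today, so the following is only a toy illustration of the signature idea, in Python, with hypothetical function names. It hides a short HMAC tag in zero-width characters appended to the text; the comment hopes fragments would survive editing, but in practice marks like these are trivially stripped, which is why published watermarking proposals tend to bias the model's token choices statistically instead.

```python
# Toy illustration only: hide a short HMAC-based tag in zero-width characters.
# This is NOT something ChatGPT actually does; it just shows what "a hidden
# embedded signature in the body of the text" could look like, and how fragile it is.
import hmac, hashlib

ZERO, ONE = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def tag_text(text: str, key: bytes, requester_id: str) -> str:
    """Append a 32-bit HMAC of (requester, text), encoded as invisible characters."""
    digest = hmac.new(key, (requester_id + text).encode(), hashlib.sha256).digest()[:4]
    bits = "".join(f"{byte:08b}" for byte in digest)
    return text + "".join(ONE if b == "1" else ZERO for b in bits)

def extract_tag(text: str) -> bytes:
    """Recover whatever tag bits survived copying or editing (often none)."""
    bits = "".join("1" if c == ONE else "0" for c in text if c in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8
    return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
```

Retyping the text, pasting it through a plain-text editor, or even moderate editing removes the marks entirely, which is the practical weakness of any scheme that lives in the characters rather than in the word choices.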
