AI Technology

OpenAI Releases Tool To Detect Machine-Written Text (axios.com) 34

An anonymous reader quotes a report from Axios: ChatGPT creator OpenAI today released a free web-based tool designed to help educators and others figure out if a particular chunk of text was written by a human or a machine. OpenAI cautions the tool is imperfect and performance varies based on how similar the text being analyzed is to the types of writing OpenAI's tool was trained on. "It has both false positives and false negatives," OpenAI head of alignment Jan Leike told Axios, cautioning the new tool should not be relied on alone to determine authorship of a document.

Users paste a chunk of text into a box and the system rates how likely the text is to have been generated by an AI system. It offers a five-point scale of results: very unlikely to have been AI-generated, unlikely, unclear, possibly, or likely. It works best on English text samples of more than 1,000 words, with performance significantly worse in other languages, and it does not work for distinguishing computer code written by humans from code written by AI. That said, OpenAI says the new tool is significantly better than a previous one it had released.
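As a rough illustration of how a five-point verdict like this could be produced (the thresholds and function name below are invented for illustration, not OpenAI's actual implementation), a classifier's probability output can simply be bucketed into labels:

# Hypothetical sketch: bucket a classifier's "AI-likelihood" probability
# into five labels similar to the ones the tool reports. Thresholds are invented.
def label_ai_likelihood(p_ai: float) -> str:
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    if p_ai < 0.45:
        return "unlikely AI-generated"
    if p_ai < 0.65:
        return "unclear"
    if p_ai < 0.90:
        return "possibly AI-generated"
    return "likely AI-generated"

print(label_ai_likelihood(0.72))  # -> possibly AI-generated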

This discussion has been archived. No new comments can be posted.

  • "It has both false positives and false negatives," OpenAI head of alignment Jan Leike told Axios, cautioning the new tool should not be relied on alone to determine authorship of a document.

    So it's as useful as a coin flip?

    • by Entrope ( 68843 ) on Tuesday January 31, 2023 @07:20PM (#63255367) Homepage

      No. [wikipedia.org]

      The existence of both type I and type II errors does not equate to a 50-50 chance of being correct. It could be better than a coin flip. It could also be worse in one sense, but then you could simply assume the opposite of whatever it says, and it would again beat the coin flip.
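      To make the point concrete, here is a toy calculation with invented error rates (not OpenAI's published numbers): a classifier can make both kinds of errors and still be far more accurate than a coin flip.

      # Toy numbers, purely illustrative: 10% false positive rate, 20% false
      # negative rate, and half of the tested documents actually AI-written.
      fpr, fnr, base_rate = 0.10, 0.20, 0.5

      # accuracy = P(correct | AI) * P(AI) + P(correct | human) * P(human)
      accuracy = (1 - fnr) * base_rate + (1 - fpr) * (1 - base_rate)
      print(f"accuracy = {accuracy:.0%}")  # 85%, well above a 50% coin flip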

    • It's as useful as the Bayesian classifier in Apache SpamAssassin. It gives a grade from 1 to 5. It's not perfect, but it raises flags that help a human make a decision, together with other tools and common sense. If a particular student's homework consistently gets a high grade from this tool while their writing in formal exams does not, you can start wondering.
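      A minimal sketch of that SpamAssassin-style idea, assuming scikit-learn is available and using a toy two-document corpus (real training data and thresholds would obviously differ): a Bayesian text classifier emits a graded score that a human then weighs.

      # Sketch of a Bayesian classifier that outputs a 1-5 grade, not a verdict.
      # The two training texts are toy stand-ins, not real data.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB

      train_texts = ["generated boilerplate phrasing here", "a student's own rough essay here"]
      train_labels = [1, 0]  # 1 = machine-written, 0 = human-written

      vec = CountVectorizer()
      clf = MultinomialNB().fit(vec.fit_transform(train_texts), train_labels)

      p = clf.predict_proba(vec.transform(["some homework text to check"]))[0, 1]
      grade = int(p * 4) + 1  # map probability to a 1-5 grade
      print(f"probability {p:.2f} -> grade {grade}/5, a flag for a human to weigh")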

      • This way students can pre-check their AI-generated answers before submitting them.
        This is the equivalent of running your malware through a fuzzer and then asking the major anti-virus programs to tell you whether they still recognize it as harmful.

        • It will catch students who use ChatGPT to write an essay without changing the sentences. These are not the brightest students, and they probably will not use a detection tool, because then they would need to edit the answers until the text passes detection, and that takes work. If they were the kind of students interested in work, they would not copy-paste answers from the internet to begin with.
          Also, before these AI tools, some students submitted answers they copied word for word from the internet, while a plagiarism detectio

        • by EvilSS ( 557649 )
          Yep, this is why access to it should be restricted: teachers, law enforcement, HR departments. Only people who need it for verification, not people who could use it to cheat.
          • by unrtst ( 777550 )

            Right, because teachers, law enforcement, and HR don't lie or cheat or have a use for this. SMDH

            Teachers are usually students as well.
            HR will also need to apply for other HR jobs.
            Law enforcement.. do we really need to go into how shady that can be?

            IMO, use of it should be fine if the answer is correct.
            Want to ensure they can compose an essay? Have them write it in class. In the real world, people will be using this and similar tools to assist their writing.

            • by EvilSS ( 557649 )
              Strict auditing and not making it free. Make it $10,000 per verification, chargeable only to a corporate or government account, and that should fix most of that.
    • how can you make such a comment without any numbers...

  • by NobleNobbler ( 9626406 ) on Tuesday January 31, 2023 @07:29PM (#63255393)

    .. to write something that it could not detect was written by itself?

    • by Tablizer ( 95088 )

      That's the problem: any digital fake-detector can be used to train a generative bot to make detection-proof fakes. It's a cat-and-mouse game all the way down to the turtles.
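      A sketch of that cat-and-mouse loop, with every function below a hypothetical stand-in rather than a real API: the generator keeps rewriting its output until the detector stops flagging it.

      # Hypothetical sketch of the detector-vs-generator arms race.
      # detector_score() and paraphrase() are invented placeholders.
      def detector_score(text: str) -> float:
          """Pretend detector: returns P(text is machine-written)."""
          return 0.9 if "delve" in text else 0.2

      def paraphrase(text: str, seed: int) -> str:
          """Pretend generator tweak: rewrites the text slightly."""
          return text.replace("delve", ["dig", "look", "probe"][seed % 3])

      def evade(text: str, threshold: float = 0.5, tries: int = 10) -> str:
          for seed in range(tries):
              if detector_score(text) < threshold:
                  break                      # detector no longer flags it
              text = paraphrase(text, seed)  # generator adapts and tries again
          return text

      print(evade("We delve into the topic."))  # -> "We dig into the topic."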

      • by mark-t ( 151149 )
        The real problem is that most people have no fucking ethics when they perceive that an action would provide an immediate advantage to themselves for little work and they do not happen to perceive anyone else being hurt by it... and sometimes even the latter doesn't matter that much to them.
        • Their absence of ethics reflects the fact that they haven't bought into why they are there. They're there because it's the next thing to do and to have a good time - that was certainly my attitude. In that context, cheating on essays makes sense.

          What this whole fiasco demonstrates is that our education system is in practice a set of hoops to be jumped through, not a means of actually educating people, at least in most cases. Can we improve matters? Can we expect rationality from teenagers? (Now there's a

    • by narcc ( 412956 )

      Nothing special. Remember that there is no actual understanding here.

    • by AmiMoJo ( 196126 ) on Wednesday February 01, 2023 @07:55AM (#63256389) Homepage Journal

      What happens when kids use ChatGPT to draft their homework, then re-write it a little? And after a while start to adopt the mannerisms of ChatGPT, until even their original work becomes indistinguishable from it?

      • And then ChatGPT trains off itself!

        Come to think of it-- is training off your own output a normal thing in AI? It seems like it could be
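        For what it's worth, training a model on its own filtered outputs is a known technique (often called self-training or self-distillation); below is a toy sketch with invented helper functions, not anything OpenAI has described here.

        # Toy self-training loop; generate(), quality_ok() and retrain()
        # are hypothetical placeholders for illustration only.
        def generate(prompt):
            return f"model answer to: {prompt}"

        def quality_ok(text):
            return len(text) > 10  # stand-in for a real quality/consistency filter

        def retrain(dataset):
            print(f"retraining on {len(dataset)} examples")

        dataset = ["human-written seed example"]
        for prompt in ["q1", "q2", "q3"]:
            output = generate(prompt)
            if quality_ok(output):  # keep only outputs that pass the filter
                dataset.append(output)
        retrain(dataset)  # the model now learns partly from its own text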

  • Let the AI arms race begin!

  • Oh come off it, this must be trivial to defeat with a .txt dictionary/thesaurus and 10 lines of Python.
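    Roughly the kind of thing the commenter means, sketched with a hypothetical thesaurus.txt (one word followed by its synonyms per line); whether swapping synonyms actually fools the classifier is untested here.

    # Hypothetical sketch: perturb text by swapping words for thesaurus synonyms.
    # Assumes a thesaurus.txt with lines like "big large huge enormous".
    import random

    thesaurus = {}
    with open("thesaurus.txt") as f:
        for line in f:
            word, *synonyms = line.split()
            if synonyms:
                thesaurus[word] = synonyms

    def rewrite(text: str) -> str:
        words = [random.choice(thesaurus.get(w.lower(), [w])) for w in text.split()]
        return " ".join(words)

    print(rewrite("The big cat sat on the large mat"))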
  • by PPH ( 736903 ) on Tuesday January 31, 2023 @07:54PM (#63255435)

    1. Dos ChatGPT throw in the occasional spelling and gramatical error?

    2. What would this tool do if someone pasted the output of the Slashdot editing staff in? Melt down?

  • Writing text that will be mistaken for AI-generated.
  • Does that mean that you'll always trigger the ChatGPT score? Meaning that the more popular you are... and the more likely your material is used as fodder for the machine, the more likely that you'll be accused of ChatGPT-assisted plagiarism... of yourself?

  • What recourse does a student have who is falsely accused of using AI to write a paper? A flawed tool in the hands of an idiot educator operating under a flawed policy could destroy a student who did nothing wrong. It would be interesting to feed the tool old term papers which have never been scanned and were written in the 60's to see if a false positive could be generated.
  • I tried it with the final paragraph of this story; it returned that it needed at least 1,000 characters. I foresee a lot of people asking ChatGPT to write content shorter than that lower limit.
  • I created a social media post with it to demonstrate its conversational abilities (I had it recommend places to visit on the East Coast of the US that had notable geologic formations.) It did a very nice job, and was quite polite actually, as one of my friends noted -- a nice relief, she thought, from the typical tone of social posts. The OpenAI classifier labeled this post as highly likely to have been generated.

    I then ran another social media post of similar levels of detail and tone through the classifie

"If the code and the comments disagree, then both are probably wrong." -- Norm Schryer

Working...