AI Technology

OpenAI Grapples With Unreleased AI Detection Tool Amid Cheating Concerns (msn.com) 27

OpenAI has developed a sophisticated anticheating tool for detecting AI-generated content, particularly essays and research papers, but has refrained from releasing it due to internal debates and ethical considerations, according to WSJ.

This tool, which has been ready for deployment for approximately a year, utilizes a watermarking technique that subtly alters token selection in ChatGPT's output, creating an imperceptible pattern detectable only by OpenAI's technology. While the tool boasts a 99.9% effectiveness rate for substantial AI-generated text, concerns persist regarding potential workarounds and the challenge of determining appropriate access to the detection tool, as well as its potential impact on non-native English speakers and the broader AI ecosystem.
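
The summary's description matches token-level watermarking schemes from the public research literature, in which sampling is biased toward a pseudorandom "green" subset of the vocabulary at each step. Below is a minimal Python sketch of that published idea, not OpenAI's actual unreleased method; the hash seeding, green-list fraction, and bias value are all assumptions for illustration:

    import hashlib
    import numpy as np

    GREEN_FRACTION = 0.5  # assumed share of the vocabulary favored per step
    BIAS = 2.0            # assumed logit boost for green-list tokens

    def green_list(prev_token_id: int, vocab_size: int) -> np.ndarray:
        # Pseudorandomly partition the vocabulary, seeded by the previous
        # token, so a detector holding the same seed can recompute it.
        digest = hashlib.sha256(prev_token_id.to_bytes(8, "big")).digest()
        rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
        return rng.permutation(vocab_size)[: int(GREEN_FRACTION * vocab_size)]

    def watermarked_sample(logits: np.ndarray, prev_token_id: int) -> int:
        # Nudge sampling toward green tokens: each individual word still
        # looks natural, but the aggregate skew is statistically detectable.
        biased = logits.copy()
        biased[green_list(prev_token_id, len(logits))] += BIAS
        probs = np.exp(biased - biased.max())
        probs /= probs.sum()
        return int(np.random.default_rng().choice(len(logits), p=probs))

Because the bias only shifts probabilities slightly, the watermark is invisible in any short passage, which is consistent with the 99.9% figure applying only to substantial amounts of text.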
  • by luvirini ( 753157 ) on Monday August 05, 2024 @09:18AM (#64681788)

    How could it work on text generated by other AI tools?

  • by classiclantern ( 2737961 ) on Monday August 05, 2024 @09:28AM (#64681836)
    Sell the antidote for more than the poison.
  • I predict that any system that attempts to detect and reject the work of what we're calling AI here will just get effectively subsumed into the target system itself, as part of its "adversarial" dataset.
  • challenge of determining appropriate access to the detection tool, as well as its potential impact on non-native English speakers and the broader AI ecosystem.

Why would access be such an issue? If you are worried about students using the tool to check their work ahead of time, that's sort of the point? If this actually does work, the recourse will be "add in enough of my own original work to pass the tool".

    If people develop workarounds then you have to work around those workarounds in the software. If the worry is about an AI vs AI-Detection arms race, well too bad, we're already in one.

The second one is a total non-issue because I doubt anyone would care if you use an AI tool for translation. If it's not used for translation, see above.

Maybe I am being cynical, but I see the hesitancy in releasing this as being about the impact on its bottom line: it seems to me that a large (overly large, I would say) profit area for AI is basically deception, having people believe they are communicating with a human when they are not.

    Some form of AI registration or detection should be legislated IMO. It should be illegal for people to not be informed when they are working with AI versus a person.

  • ftfy (Score:3, Interesting)

    by Anonymous Coward on Monday August 05, 2024 @09:42AM (#64681882)

    "we have this tool to identify AI content... but we're kinda scared to let you see how much AI content has already taken over..."

  • Honest question!

It's just like other tools, right? Sure, you must not use it in exams where you are not allowed to use any help or, e.g., a calculator, but that's not the problem imho.

What's the general difference from other tools like spell checkers, syntax checkers, or even compilers that check your code?

It's a tool available to everyone, and thus people should learn to use it correctly.

If you simply use AI in its current state and don't check the references, then your work might be wrong and you don't pass.

    • by zlives ( 2009072 )

The same logic would apply if I used another student to write my paper for me. Anything I can buy/pay for is a tool for my use.

      • Ok, you got a point here.

But imho AI is more similar to a tool than to a human being. Exams (mostly) have to be done by yourself only. So the border between tools like compilers and AI on one side and humans on the other is quite clear. I don't understand why the border should be between compilers and AI. I don't see a way to distinguish between all the tools that exist: do you draw the border before or after spell correction? Before or after using Google? Before or after template engines? Before or after ChatGPT 3 or 4?
        The border to

        • The big difference is that LLMs make a superficial impersonation of a knowledgeable person, meaning mistakes are much harder to see than with other tools, while making superficial competence easy to fake. So for example, someone passing their driver's test with the help of a spell-check is fine, someone passing their driver's test with the help of a LLM is a danger. Same for interviews, which are essentially an informal test.

An alternative is redesigning all such tests to take into account available tools.

One method I have seen is using time-limited complex questions, like 0.5-2 minutes tops. You either know the answer or you don't. Most LLM output is just too slow.

            • by flink ( 18449 )

Lots of people are that slow. I'm a pretty slow coder, for example, but I tend to ship fewer bugs than a lot of my peers, so it's pretty much a wash. I'll frequently stay out of verbal debates or discussions because my recall and reasoning take too long to keep up, but I'll be fine in an online forum.

I am pretty fast and it was too fast for me. I had like 63% and I was aghast. The recruiter was super excited because that was a high score for them.
                Stuff like: here is a shell script; given the following inputs, does the script run or is there an error?

  • Watermarking is a must for AI-generated content. Otherwise we will all drown in a sea of misinformation. Especially for images/video/audio.
    • Watermarking is a must for AI-generated content. Otherwise we will all drown in a sea of misinformation. Especially for images/video/audio.

Relying on the bad guy to set the evil bit is a bad idea.

      • I mean, that's how all laws work. We require food producers to accurately label their products, other manufacturers to follow required standards, drug manufacturers to follow an established approval process, citizens to follow laws, etc.
        • I mean, that's how all laws work. We require food producers to accurately label their products, other manufacturers to follow required standards, drug manufacturers to follow an established approval process, citizens to follow laws, etc.

With food and manufacturing there are explicit known chains of trust and liability. These don't necessarily exist for speech. Speakers do not require a license or prior approval to speak and don't even have to disclose their identity.

        • by Sique ( 173459 )
Not exactly. Especially with food, there is much more to the legislation than just declaration of the ingredients. For instance, there is lots of stuff you are not allowed to sell in the first place, even if you were to declare it, like unwashed eggs or non-pasteurized milk. Neither food is inherently dangerous for consumption, and both can happily be sold in other parts of the world without adverse effects. In the E.U., for instance, it is forbidden to sell washed eggs - exactly the opposite of the U.S.
  • Students could simply have one of a bazillion other LLMs rewrite OpenAI's response, or better still, just use a different model.

  • Pipe the OpenAI output round-trip through two independent language translators, e.g. Google Translate English->Spanish then someone else's translator Spanish->English.

The results will look horrible, but a native speaker could rephrase the crappy round-trip translation back into coherent, native-sounding English, then compare it to the original OpenAI output to make sure the meaning stayed the same.

    If the goal is to prevent people from
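
    For what it's worth, the laundering pipeline described above is only a few lines. translate() below is a hypothetical stand-in for whichever two independent translation services are used, not a real library call:

        def translate(text: str, src: str, dst: str) -> str:
            # Hypothetical wrapper; plug in two *different* real translation
            # services (one per direction) so the round trip isn't a no-op.
            raise NotImplementedError

        def launder(ai_text: str) -> str:
            # English -> Spanish -> English scrambles the exact token choices
            # that a token-selection watermark depends on.
            return translate(translate(ai_text, "en", "es"), "es", "en")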

  • Just copy/paste the output to a text editor and I would think their "watermarking" would not paste in. Then just copy/paste that text into whatever you want to plagiarize.
    • watermarking technique that subtly alters token selection in ChatGPT's output, creating an imperceptible pattern detectable only by OpenAI's technology

      Just copy/paste the output to a text editor and I would think their "watermarking" would not paste in.

      If I understand "alters token selection" correctly, the very words of the output will be selected to include subtle patterns, such as a slightly-higher or -lower-than-typical use of 6-letter words or words beginning with a vowel, unusually high or low use of adverbs, or some other pattern or combination of patterns. For output of "scientific data" we may be talking about encoding the watermark in least-significant-digits or in the choice of color or line spacing of a graph or table.

tl;dr: Cutting and pasting won't strip a watermark that is encoded in the words themselves.
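
      If the watermark really does ride on the word choices, detection of pasted text reduces to a statistical test. Here is a hedged sketch, reusing the hypothetical green_list() and GREEN_FRACTION from the sketch under the story summary (an assumption, not OpenAI's method): count how many tokens fall in their context's green list and compute a z-score. This is also why the claimed 99.9% rate is qualified with "substantial" text; short samples don't separate cleanly.

          import math

          def watermark_zscore(token_ids: list[int], vocab_size: int) -> float:
              # Count tokens that land in the green list derived from their
              # preceding token; unwatermarked text should hover near
              # GREEN_FRACTION of the total, watermarked text well above it.
              hits = sum(
                  cur in set(green_list(prev, vocab_size))
                  for prev, cur in zip(token_ids, token_ids[1:])
              )
              n = len(token_ids) - 1
              expected = GREEN_FRACTION * n
              var = GREEN_FRACTION * (1 - GREEN_FRACTION) * n
              return (hits - expected) / math.sqrt(var)  # z >> 4 flags a watermark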

  • The journey of AI detection is a captivating exploration into the intersection of technology, ethics, and education, where innovation meets responsibility. OpenAI's sophisticated anticheating tool, with its groundbreaking watermarking technique, exemplifies the incredible potential and complex challenges of identifying AI-generated content. As we navigate this evolving landscape, the balance between safeguarding academic integrity and ensuring fairness for non-native English speakers becomes paramount. This journey is not just about technology but about understanding its impact on society, promoting transparency, and fostering a world where AI and human creativity coexist harmoniously. The path forward requires thoughtful deliberation, open dialogue, and a commitment to ethical principles as we embrace the possibilities and responsibilities of AI detection. ..--..--..
    • by davidwr ( 791652 )

      The journey of AI detection is a captivating exploration into the intersection of technology, ethics, and education, where innovation meets responsibility. OpenAI's sophisticated anticheating tool, with its groundbreaking watermarking technique, exemplifies the incredible potential and complex challenges of identifying AI-generated content. As we navigate this evolving landscape, the balance between safeguarding academic integrity and ensuring fairness for non-native English speakers becomes paramount. This journey is not just about technology but about understanding its impact on society, promoting transparency, and fostering a world where AI and human creativity coexist harmoniously. The path forward requires thoughtful deliberation, open dialogue, and a commitment to ethical principles as we embrace the possibilities and responsibilities of AI detection. ..--..--..

      Q: How can you tell if something is written by a flesh-and-bone marketing droid or a silicon-based marketing droid?

      A: I don't know either.

  • by trawg ( 308495 ) on Monday August 05, 2024 @05:36PM (#64683644) Homepage

The only reason they'd hesitate on this is because they have customers that don't want their users to know they're speaking to a robot. They already deliberately disabled the "ignore previous instructions" Voight-Kampff test, and they're dangling this to try to pretend they're going to do the right thing.

    But the reality seems to be their paying customers don't want this feature - they want to have the freedom to deceive end users into thinking they're talking to a human, or reading human output.

So, just by complete coincidence, ChatGPT becomes a really useful tool for spammers and social-media disinfo bot creators, while at the same time allowing giant corporations to pretend they have a human customer service department, when in reality it's just another tool to frustrate you into giving up.

  • I get the concerns around AI detection tools. A friend of mine was stressing out because her professor thought her essay was AI-generated. She's not even a native English speaker, so it felt unfair. She ended up using assignment writing service uk [theassignm...viceuk.com] to help her craft something that wouldn’t get flagged. But it’s crazy to think how these tools can impact students. What if they start penalizing genuine work more often?
