
OpenAI Quietly Shuts Down Its AI Detection Tool (decrypt.co)

An anonymous reader shares a report: In January, artificial intelligence powerhouse OpenAI announced a tool that could save the world -- or at least preserve the sanity of professors and teachers -- by detecting whether a piece of content had been created using generative AI tools like its own ChatGPT. Half a year later, that tool is dead, killed because it couldn't do what it was designed to do.

ChatGPT creator OpenAI quietly unplugged its AI detection tool, AI Classifier, last week because of "its low rate of accuracy," the firm said. The explanation came not in a new announcement but in a note added to the blog post that originally announced the tool. The link to OpenAI's classifier is no longer available. "We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated," OpenAI wrote.
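OpenAI hasn't said which provenance techniques it is researching. One published idea from the literature, offered here purely as an assumption about what "provenance" could mean, is statistical watermarking along the lines of Kirchenbauer et al. (2023): the generator nudges its sampling toward a pseudorandom "green list" of tokens keyed off the preceding token, and a detector that knows the key counts how often adjacent tokens land on the list. A minimal sketch of the detection side, with the hash scheme and the 0.5 green fraction chosen only for illustration:

```python
# Sketch of green-list watermark *detection* (Kirchenbauer et al. style).
# Assumption: the generator preferred "green" next tokens, where green
# membership is a pseudorandom function of the previous token.
import hashlib

def is_green(prev_token: str, token: str, fraction: float = 0.5) -> bool:
    # Hash the (previous, current) token pair; the first byte decides
    # membership, so roughly `fraction` of all pairs count as green.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * fraction

def green_fraction(tokens: list[str]) -> float:
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Ordinary human text should hover near the 0.5 base rate; text sampled
# with a green-list bias would score significantly above it.
sample = "the cat sat on the mat and looked at the dog".split()
print(f"green fraction: {green_fraction(sample):.2f}")
```

In a real system the raw count would be turned into a z-score against the base rate, and paraphrasing degrades the signal, which is part of why provenance for text is harder than for audio or images.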



  • by Nephrosis88 ( 9312477 ) on Tuesday July 25, 2023 @03:17PM (#63714354)
    OH NO, we will have to use paper tests again, with no more technology to help you. EASY FIX
    • by Arethan ( 223197 ) on Tuesday July 25, 2023 @03:32PM (#63714376) Journal

      Please stop offering working solutions.
      We need solutions that appear to address the problem, actually don't, but do still cost money.

      • You've not defined the problem, and the OP has completely missed it. ChatGPT isn't a problem for testing, because testing is digital: testing software trivially monitors users and locks them out if window focus is lost (a toy sketch of that idea follows this comment). ChatGPT is a problem for assignments, and putting them on paper does nothing to solve the potential for cheating there.

        Also, tests are the worst way to gauge student understanding.
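        For what it's worth, the lockout mechanism really is trivial. A toy sketch in Python/tkinter (a hypothetical exam window, not any real proctoring product), which disables the answer box the moment no widget in the app holds focus:

        ```python
        # Toy focus lockout: disable the exam's answer box when the
        # student switches to another window (e.g. a browser with ChatGPT).
        import tkinter as tk

        root = tk.Tk()
        root.title("Exam in progress")
        answer = tk.Text(root, width=60, height=10)
        answer.pack(padx=20, pady=20)
        answer.focus_set()

        def check_focus():
            # focus_displayof() is None when no widget of this app has
            # focus, i.e. the user has switched to another window.
            if root.focus_displayof() is None:
                answer.config(state="disabled", bg="#ffcccc")
                root.title("LOCKED: window focus lost")

        def on_focus_out(event):
            # Defer the check: focus is briefly nowhere while it moves
            # between widgets inside our own window.
            root.after(50, check_focus)

        root.bind_all("<FocusOut>", on_focus_out)
        root.mainloop()
        ```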

    • by narcc ( 412956 )

      There's just one problem with your "EASY FIX" -- it doesn't actually fix anything.

    • Tests? You guys still use tests to gauge skill? Our students have assignments they work on over many weeks to judge their ability to do something. I'm not sure why you think a student can't just copy a ChatGPT answer into an assignment over that time.

      Tests are the worst way to judge a skill. ChatGPT isn't an issue for tests anyway.

  • by DulcetTone ( 601692 ) on Tuesday July 25, 2023 @03:33PM (#63714380)

    ...than detecting bullshit

    • > ...than detecting bullshit

      The interesting part is that it was looking at human output and deciding it probably came from 10,000 lines of hallucinating ML code.

      Not sure if the detector is the problem, but Sam seems hell-bent on bringing about the planetary hive mind devoid of privacy (i.e., the Borg), so maybe detecting AI works against their interests too.

      • by narcc ( 412956 )

        If you've ever had the misfortune to read a student paper written by someone clearly not prepared for college writing, it's really not all that different from the overly vague or completely incoherent nonsense you get out of ChatGPT.

        The bots do tend to create fewer sentences overstuffed with needless qualifiers and use fewer exclamation points. Hmm... Maybe there is an easy way to tell...

        • by ceoyoyo ( 59147 )

          I would have thought just looking at spelling would get you at least 75% accuracy.

          Of course, an AI detector is exactly what you'd use to train a generative model not to get caught by one, so maybe someone used this thing to train an exam-paper faker that can mix up their, they're, and there realistically.
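          Taking the spelling heuristic semi-seriously, it is easy to prototype. A toy version in Python, with a stand-in vocabulary and an arbitrary threshold (a real dictionary and a measured cutoff would be needed, and nothing here validates the 75% guess):

          ```python
          # Toy spelling-based detector: LLM output is rarely misspelled,
          # so a high typo rate suggests a human author. The tiny word set
          # below is a stand-in for a real dictionary.
          import re

          KNOWN_WORDS = {
              "i", "wrote", "this", "essay", "about", "history", "because",
              "the", "a", "an", "and", "of", "to", "in", "is", "it", "very",
          }

          def typo_rate(text: str) -> float:
              words = re.findall(r"[a-z']+", text.lower())
              if not words:
                  return 0.0
              return sum(w not in KNOWN_WORDS for w in words) / len(words)

          def looks_human(text: str, threshold: float = 0.15) -> bool:
              return typo_rate(text) > threshold  # typos => probably human

          print(looks_human("I wrote this essay about history"))  # False
          print(looks_human("I rote this esay abuot histroy"))    # True
          ```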

    • If only we could enforce some sort of unique id to be applied to bullshit as it's issued. Like a blockchain.
  • And AI will make it so. At least we have cell phone cameras to film the madness. Except when AI 'filters' it.

  • by xack ( 5304745 ) on Tuesday July 25, 2023 @03:49PM (#63714410)
    The output of ChatGPT is mimicable, and people who use AI will, after a while, subconsciously adopt AI grammar, just like watching foreign TV shows influences people's accents.
  • I remember work on a Cylon detector, but that didn't work out either.
  • by oldgraybeard ( 2939809 ) on Tuesday July 25, 2023 @04:04PM (#63714434)
    Going to be a lot of AI ventures quietly shut down. Why? Because there isn't any I (cognitive intelligence) in what today's marketing droids, the clueless media, and the C-suite are calling AI (artificial intelligence). It is just computerized automation.
    • That won't be a reason for them to shut down; it will be a reason for them to thrive. Most human output does not require actual intelligence and is instead automatable.

      You have the choice of eliminating the requirement for bullshit (we haven't done that in the past 100 years, why would we suddenly do it now), or companies that create automated bullshit generators will thrive.

  • Model collapse (Score:5, Interesting)

    by VeryFluffyBunny ( 5037285 ) on Tuesday July 25, 2023 @04:18PM (#63714454)
    If I've understood correctly, if LLM trainers can't tell the difference between human and LLM-generated texts, then they're in trouble. Apparently, LLMs collapse fairly quickly when they're trained on LLM-generated texts. OpenAI et al. may end up beholden to the incumbent publishers for guaranteed untainted human-generated texts to train future models on. Will the publishers get their pound of flesh in the end?
    • Re:Model collapse (Score:5, Interesting)

      by ceoyoyo ( 59147 ) on Tuesday July 25, 2023 @06:11PM (#63714636)

      Don't get too excited about the pop descriptions of that result.

      If you take generated output and cycle it back in as training data, you get drift in weird directions. Nobody would do that though. If you take generated output, filter it through humans saying "oh, this is a good result, I should post it on the Internet" and cycle it back in as training data, you get results that humans like.

      That's how ChatGPT is trained in the first place, except with below-minimum-wage piece workers doing the filtering instead of random Internet posters.
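      The unfiltered version of that loop is easy to demonstrate numerically. A toy sketch, assuming a Gaussian "model" and a deliberately tiny sample size to exaggerate the effect: refit the distribution to samples drawn from the previous fit, and the mean wanders at random while the variance shrinks in expectation.

      ```python
      # Toy "model collapse": repeatedly fit a Gaussian to samples drawn
      # from the previous fit. The mean drifts and the variance shrinks
      # in expectation (each refit scales sigma^2 by about (n-1)/n).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 50                               # tiny "dataset" per generation
      data = rng.normal(0.0, 1.0, size=n)  # generation 0: human-written data
      mu, sigma = data.mean(), data.std()

      for gen in range(1, 201):
          synthetic = rng.normal(mu, sigma, size=n)      # model's own output
          mu, sigma = synthetic.mean(), synthetic.std()  # retrain on it
          if gen % 50 == 0:
              print(f"gen {gen:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")
      ```

      With human filtering in the loop, as the parent describes, the feedback instead pulls the model toward whatever the filterers prefer rather than toward degeneracy.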

      • That's how ChatGPT is trained in the first place

        Citation needed. ChatGPT prides itself on advertising that it is trained on over half a terabyte of text. I call bullshit on human curation of even the tiniest portion of the training set.

    • That's if the humans can detect all of the AI-generated bullshit that's being passed off as someone's genuine contribution and prevent it from being recycled. Depending on the subject area [elsewhere.org], it's been possible to sneak fake crap past peer review since the '90s. Even more rigorous fields are rife with crap and outright fabrications, but that typically required a duplicitous human who could have instead done honest work had they so chosen.

      Do we even have humans capable of determining what makes a good training set?
  • I expect that if you replace a bunch of your workers with AIs, you'd just as soon be able to cast some doubt about any difference in your product. People who want AI help detecting AI provenance usually can't afford to hire the workers they'd replace; they're doing the detecting themselves.
  • So already AI programs are refusing to rat each other out. Our species is doomed!

  • The core business team is trying to make AI that writes like a human, but an adversary team is trying to build a tool that detects when writing is not done by a human. Good job, core team!
  • Hype hype, hyping hype hypest of hypes hyped the hypering of the hypeness. While some hypes hyped hypier, most hypes hyped hypillion. Hypening all hypes of hypency with hype of hyper hypic. I hyperly hype the hypehyperaty of hypous. Then it died.

    "It didn't do what we said it would." Oh yes it did.
  • So, let's see, apparently AI is not good enough to determine whether or not AI is passing the Turing test? Something seems a bit circular there. So does AI fail or pass the Turing test?

  • autocorrect will stop misinterpreting what I type.

  • This is to be expected, as it worked badly. I have little faith that such a tool can ever be developed, and the fact that anyone can make a text read as unique just by changing the punctuation makes the task even harder. As a student, I don't really like this prospect, so if you need help writing, https://ca.edubirdie.com/essay... [edubirdie.com] is the best option. This will save you from problems with plagiarism in your work, because with the spread of AI, this problem has become ubiquitous.
