AI Technology

College Student Made App That Exposes AI-Written Essays (polygon.com) 48

An anonymous reader shares a report: ChatGPT's AI-generated dialogue has gotten pretty sophisticated -- to the point where it can write convincing-sounding essays. So Edward Tian, a computer science student at Princeton, built an app called GPTZero that can "quickly and efficiently" label whether an essay was written by a person or by ChatGPT. In a series of recent tweets, Tian provided examples of GPTZero in action: the app determined John McPhee's New Yorker essay "Frame of Reference" to be written by a person, and a LinkedIn post to be created by a bot. On Twitter, he said he created the app over the holidays and was motivated by the increasing possibility of AI plagiarism. Further reading:
1. OpenAI is developing a watermark to identify work from its GPT text AI;
2. OpenAI's attempts to watermark AI text hit limits;
3. A metadata 'watermark' could be the solution to ChatGPT plagiarism fears.
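
Tian has reportedly described GPTZero as scoring text on "perplexity" (how predictable the text looks to a language model) and "burstiness" (how much that predictability varies across sentences). The snippet below is not GPTZero's code; it's a minimal sketch of a perplexity-style check, assuming the openly available GPT-2 model from Hugging Face transformers and a made-up threshold:

# Illustrative sketch only -- not GPTZero's actual method.
# Idea: text that a language model finds highly predictable (low perplexity)
# is one weak signal that it may have been machine-generated.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # (Ignores GPT-2's 1024-token context limit for simplicity.)
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

PPL_THRESHOLD = 60.0  # illustrative cutoff, not a calibrated value

def looks_generated(text: str) -> bool:
    return perplexity(text) < PPL_THRESHOLD

A real detector would presumably combine several such signals and calibrate them against known human and AI samples; a single perplexity cutoff like this would misfire on plenty of human writing.
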
This discussion has been archived. No new comments can be posted.

  • The problem with AI is that if it is writing an essay, it has no clue what it is actually doing.

    In my opinion, everything it has spit out is basically reiterating the prompt over and over again, basically saying the same thing two or three times and then making an awkward jump into an adjacent subject and repeating that. Any human should be able to detect a large body of text being generated by AI, if they actually read the text.

    The problem is teachers don't read essays, they could just put it in an auto grader and that's that.

    So if you can detect it is semantically spitting out the same words sentence after sentence with repetitive jumps to other subjects, you've detected AI.

    • The problem with AI is that if it is writing an essay, it has no clue what it is actually doing.

      To be fair, "s/AI/most students/" would probably also be accurate.

    • by mark-t ( 151149 )
      That sounds a lot like the kinds of writing I handed in for assignments in junior high and high school, if I'm going to be perfectly honest.
        That sounds a lot like the kinds of writing I handed in for assignments in junior high and high school, if I'm going to be perfectly honest.

        "Or as the indians called it, 'Maize'".

    • Your post mentions testing for repetitiveness to detect AI, but you touch on the idea of repetitiveness 3x in 4 paragraphs. Obviously there was nothing wrong with your post; it was probably written by hand and not by an AI (unless you were trying to make some sort of point). That being said, it does not pass your own testing criteria when I try to apply them against it. I would argue that a student writing succinctly would actually look odd. Most students are given a subject to report on which they ar
      • People don't write succinctly because we are trained at school not to. If a teacher sees a short essay, they will assume the student didn't put much work into it. The level of knowledge needed for a high school essay isn't that great, so as a student you are forced into describing the subject in excruciating detail, stating the obvious, to make it look like you did some work. Unfortunately this extends into life after school, where you are forced to read way too much to get basic information.

        For example ha

        • Very true, although the issue with recipes is due to the flaws of current search algorithms and the SEO used to leverage them. If Google sees that you go to a site and leave without evidence of engaging, it'll fare worse than sites that make you read and scroll. Boring the reader with your life story of how your grandma made this dish and it's been passed down and yadda yadda will boost you closer to the top results, while all the good recipe sites fall to the abyss of the Nth page.
    • Guruevi, it sounds like you might have a bit of a chip on your shoulder when it comes to AI. While it's true that there have been some instances where AI models have produced text that is less than ideal, it's a little unfair to say that AI has "no clue what it is actually doing" and can only "reiterate on the prompt over and over again." I mean, come on! AI has come a long way in recent years and is capable of some pretty impressive feats. So let's try to keep an open mind, shall we? Besides, it's not like

      • In all colleges and universities, teachers have long learned to use various applications and programs that check students' essays for plagiarism. But I think that if you use quality materials and essay writing services with a good reputation, then no service can help the professor determine whether you wrote your paper or not. Last month I wrote an essay on free speech, I had little time and I used the help of https://samplius.com/free-essay-examples/free-speech/ [samplius.com] because my scholarship depended on the quali
      • by guruevi ( 827432 )

        And did the text say anything substantive? Other than ad hominem, it didn’t appear to make a case for itself.

    • The problem with AI is that if it is writing an essay, it has no clue what it is actually doing.

      In my opinion, everything it has spit out is basically reiterating the prompt over and over again, basically saying the same thing two or three times and then making an awkward jump into an adjacent subject and repeating that. Any human should be able to detect a large body of text being generated by AI, if they actually read the text.

      The problem is teachers don't read essays, they could just put it in an auto grader and that's that.

      So if you can detect it is semantically spitting out the same words sentence after sentence with repetitive jumps to other subjects, you've detected AI.

      Hmm, this description also fits the current US veep.

  • I wrote one too! (Score:5, Insightful)

    by null etc. ( 524767 ) on Friday January 06, 2023 @04:38PM (#63186076)

    I wrote a ChatGPT detector the day that ChatGPT was released to the public! It works really good, too.

    I was about to release it to the world, but then my AI-detecting AI program advised me that if I don't release the source code, and don't show proof of my algorithm's effectiveness by running it against several thousands of samples, I can just claim to the world that I've achieved success without needing to actually prove it at all!

    • Re:I wrote one too! (Score:4, Interesting)

      by fahrbot-bot ( 874524 ) on Friday January 06, 2023 @05:13PM (#63186190)

      I wrote a ChatGPT detector the day that ChatGPT was released to the public! It works really good, too.

      His tweet listed in TFA notes:

      "I spent New Years building GPTZero ..."

      meaning he spent either one day, one weekend, or one holiday week developing this, when most people were drinking and partying, so it *must* be good and accurate. /sarcasm

      Going forward, can't wait for someone who actually wrote their essay to get erroneously flagged by this (or another app) as having had their essay written by AI, then marked down/failed, and to see how that all falls out...

  • by Ungrounded Lightning ( 62228 ) on Friday January 06, 2023 @04:41PM (#63186080) Journal

    ... and how many students will be falsely flagged as AIgiarists by this tool (or others like it).

    • by sinij ( 911942 )
      Putting new meaning into monotone and robotic writing.
    • Just to be safe, they'll want to run the tool against the essay that they totally wrote all by themselves. If it gets flagged, they'll want to tweak it a bit until it is correctly identified as being totally honest-to-goodness written by them.
      • That will be simple. Just put in a few spelling and grammar errors because the AI is programmed not to do such things.
        • Nor are students if they use regular spelling/grammar checking functionality of the application they use to write the essay. If it would be that easy to fool the AI, yu could just use the AI written essay and change some grammar/spelling.
          • If people actually paid attention to spelling checkers, there wouldn't be so many misspelled words in Slashdot posts. Alas, browsers don't yet check grammar.
              • Or it's more to do with typing on a mobile keyboard and not really caring to check a short post, just like I saw I missed the o in you in my post. But then again, an essay is something completely different from a short post on a site like Slashdot. An essay is normally graded for your educational progress, and nobody cares about a short Slashdot post.
              • In all colleges and universities, teachers have long learned to use various applications and programs that check students' essays for plagiarism. But I think that if you use quality materials and essay writing services with a good reputation, then no service can help the professor determine whether you wrote your paper or not. Last month I wrote an essay on free speech, I had little time and I used the help of https://samplius.com/free-essa... [samplius.com] because my scholarship depended on the quality of the assessment
    • ... and how many students will be falsely flagged as AIgiarists by this tool (or others like it).

      What is an Algiarist?

  • by fropenn ( 1116699 ) on Friday January 06, 2023 @04:42PM (#63186084)
    Teachers should assume any take-home writing assignment will be influenced by friends, Wikipedia, paid writing centers, etc. This is how writing happens in the real world. Chat Bots are just another tool.

    If you really want to know what a student can do on their own, then sit them with a pencil and a piece of paper in a silent Faraday cage for an hour and see what they produce. But that would be pointless, because that's not how anyone in the real world writes.
    • by gweihir ( 88907 )

      Far easier: Have them give a talk about the text they wrote and make them answer questions. I never, ever had to just hand in a writing assignment after finishing school. I always had to present and defend it. Of course that was in CS studies (MA and PhD), not some field where writing is a core skill...

  • Student finds himself no longer invited to parties.

  • Could work (Score:4, Insightful)

    by timeOday ( 582209 ) on Friday January 06, 2023 @05:22PM (#63186216)
    The key here is that ChatGPT is closed-source, and that OpenAI has no interest in hiding ChatGPT's authorship.

    That said, even with the above handicaps (i.e. one side isn't even playing the cat-and-mouse game), I suspect the detector could be fooled just by, for example, manually rewriting a phrase within the text and re-submitting it to ChatGPT for continuation from that point. Or something comparably simple. No way to test it at this point.

    Long-term, there will not be a reliable detector. Any such detector would require detecting statistical regularities that could then be automatically avoided. Without even searching I'm sure somebody has already used generative adversarial networks for this.

    • Re:Could work (Score:5, Insightful)

      by kaur ( 1948056 ) on Friday January 06, 2023 @05:49PM (#63186268)

      Long-term, there will not be a reliable detector. Any such detector would require detecting statistical regularities that could then be automatically avoided.

      1) A text generator is created.
      2) A tool to check for generated text is created.
      3) A new version of generator is released. It includes a test from step 2), and only outputs text that passes the test as clean.
      4) ...

      Finally, nothing, not even a body of human experts, can distinguish AI- from human-generated text.
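
      A minimal sketch of step 3, assuming placeholder functions generate() and looks_generated() (the latter could be a perplexity check such as the one sketched above); neither name refers to any real API:

      def generate_until_clean(prompt, generate, looks_generated, max_tries=10):
          # Hypothetical step 3: keep regenerating until the draft no longer
          # trips the detector, then return it.
          draft = generate(prompt)
          for _ in range(max_tries):
              if not looks_generated(draft):
                  return draft  # the detector thinks a human wrote it
              draft = generate(prompt)
          return draft  # give up after max_tries attempts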

      We have had a similar fight between viruses and anti-viruses for a long time. Right now antiviruses are winning because they look at the behaviour of the virus (dynamic analysis) rather than its code (static analysis). But advertisements have blurred the line between "good" and "bad" code so that nobody can really say what is good or bad. The world seems to happily accept such compromise. My browser renders both real Slashdot content and ads all the same, and I don't care.

      I guess the same will happen with text.
      We will just accept that some of it is not real human speech.

  • by presidenteloco ( 659168 ) on Friday January 06, 2023 @05:36PM (#63186244)
    On a bad or unlucky day, a human might write just like chatGPT would, especially for a short thing of a paragraph or two.

    So I'm not sure what the value of this is. Maybe a prof/teacher would use it on a student's alleged work, then confront them in an interrogation and hope that they break and confess. Doesn't seem that reasonable of a process.
    • Maybe a prof/teacher would use it on a student's alleged work, then confront them in an interrogation and hope that they break and confess.

      This is what happens when they fail the Turing test.

    • by mark-t ( 151149 )

      This is exactly what I was thinking as well... or maybe not necessarily so bad or unlucky, to be honest. I actually think it's quite probable.

      What concerns me is the chance that the cheaters are going to take additional precautions to avoid detection by using this tool themselves to ensure that what they hand in has a low chance of being seen as automatically generated, while people who don't cheat in the first place and don't cross check their submissions with such tools may actually be more likely to

    • by gweihir ( 88907 )

      Naa, just use it as an indicator of "low quality, insightless reasoning". That way you can fail them for the quality, not because of plagiarism. ChatGPT produces a lot of bullshit when asked real questions and has a bad tendency to make things up.

      • So it's like your average user on slashdot and tumblr. :P

        That joke from XKCD about forcing insightful responses to comment threads as proof of not-being-a-bot comes to mind.
  • If you use a publicly available AI-paper-detecting tool, then when you have ChatGPT output you only need to run the tool against it to see if it flags it. If it does, regenerate, or change the output enough that it's no longer flagged. Remember, this is a cat/mouse/cat/mouse thing. Make a better forgery detector and you'll just end up with better forgeries.

  • It's already here. For advertising clicks, this type of nonsense is already being generated:

    https://www.systranbox.com/unlock-your-potential-exploring-the-many-ways-to-utilize-your-linux-experience/

    I've seen this template all over the place when searching for technical info on configuring mail servers, and similar. The search engines, as garbage as they are, are going to get much worse. They should filter this crap out, but that would require .... intelligence?

    or "the dangers of eating rocks" yields this gem
  • AI-written essays may be detectable, but when the AI incorporates the results of AI detectors into its cost function, it'll be a mess. And more AI will come up with better measures to distinguish AI writing from genuine human writing.

    But would it matter? The next obvious big thing is to have AI readers. Who has time to read all that stuff? Have a robot summarize it.

    Once we have both ends of the literature pipeline totally AI-ized, no human will ever write anything, or ever read anything. It's all bi

  • But watermarking seems like a good solution?

    https://www.newscientist.com/a... [newscientist.com]
  • Article leaves out the key word "allegedly." Does he have experimental statistics on false negatives vs false positives? Has it been peer reviewed? Right, so just someone making unsubstantiated claims.

  • Nature invents a better idiot.
    • Personally, I have nothing against services that help create content. Creativity is my weak point, so I use them from time to time as well. I recently had to write an essay on Hamlet, and this resource https://supremestudy.com/essay... [supremestudy.com] helped me a lot because there are quite a few examples of good content. Some people have a more technical flair and find it very difficult to write a literature review or something like that.
  • I'm not sure that it can be trusted.

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...