
Top AI Conference Bans Use of ChatGPT and AI Language Tools To Write Academic Papers (theverge.com)

One of the world's most prestigious machine learning conferences has banned authors from using AI tools like ChatGPT to write scientific papers, triggering a debate about the role of AI-generated text in academia. From a report: The International Conference on Machine Learning (ICML) announced the policy earlier this week, stating, "Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper's experimental analysis." The news sparked widespread discussion on social media, with AI academics and researchers both defending and criticizing the policy. The conference's organizers responded by publishing a longer statement explaining their thinking.

According to the ICML, the rise of publicly accessible AI language models like ChatGPT -- a general purpose AI chatbot that launched on the web last November -- represents an "exciting" development that nevertheless comes with "unanticipated consequences [and] unanswered questions." The ICML says these include questions about who owns the output of such systems (they are trained on public data, usually collected without consent, and sometimes regurgitate this information verbatim) and whether text and images generated by AI should be "considered novel or mere derivatives of existing work."

  • by Briareos ( 21163 ) on Thursday January 05, 2023 @01:03PM (#63182490)

    Oh, the AIrony...

  • by Pinky's Brain ( 1158667 ) on Thursday January 05, 2023 @01:03PM (#63182494)

    They are almost certainly awash in submissions from trolls who want to get a generator-created paper past the reviewers for some e-fame without doing any real work.

    • They are almost certainly awash in submissions from trolls who want to get a generator-created paper past the reviewers for some e-fame without doing any real work.

      That's certainly true, there's probably no way to stop it, and I don't know if there's even any way of *detecting* when it happens.

      That being said... I wonder if there's an opportunity here for a new research paradigm.

      Suppose someone fires up ChatGPT and has it author 20 papers on results that haven't been published yet, but which are likely true. Be sure to ask for topics that ChatGPT would consider socially valuable, things that humans would consider valuable information.

      Rank order these by value, then get a gaggle(*) o

      • This is possible, but it all depends on having a way to test when ChatGPT outputs a good response vs. a bad one. If we solve validation, then we can just generate millions of correct solutions to problems and retrain the model.

        Validation itself is hard. One way is to generate multiple solutions to a problem and take the majority answer. Another, for code, is to use software tests. In math there are often ways to check a solution that are simpler than finding it in the first place. Anything that looks like a game can be
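        A minimal sketch of the majority-vote idea, in Python; generate_solution() here is a hypothetical stand-in for whatever model call you use, not a real API:

        from collections import Counter

        def generate_solution(problem: str) -> str:
            """Hypothetical wrapper around one LLM call; returns a candidate answer."""
            raise NotImplementedError("plug in a model call here")

        def majority_vote(problem: str, n_samples: int = 20) -> str:
            """Sample several candidates and keep the most common answer.

            The bet is that independent samples agree on a correct answer
            more often than they agree on any particular wrong one.
            """
            answers = [generate_solution(problem) for _ in range(n_samples)]
            return Counter(answers).most_common(1)[0][0]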
    • I have definitely reviewed papers that were complete hogwash, and did not appear to have any substantial content, just a mishmash of buzzwords and stolen figures from user manuals and screenshots. Now this makes me wonder…
  • How.... (Score:5, Insightful)

    by Kelxin ( 3417093 ) on Thursday January 05, 2023 @01:05PM (#63182502)
    1. How would they know? 2. How can they stop them?
    • References. ChatGPT can't be your only source. Unless it can produce a list of real, verifiable references, most such papers would be considered invalid.
      • Re:How.... (Score:5, Interesting)

        by HiThere ( 15173 ) <charleshixsn.earthlink@net> on Thursday January 05, 2023 @02:02PM (#63182642)

        ChatGPT is quite willing to create papers with references. Often, though, the references are either irrelevant or fictitious. Whoops!

        Right now it makes perfect sense to prohibit papers by ChatGPT, even as a co-author. It's too good at bullshitting. In contrast, papers where ChatGPT is a data source should be fine.
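        Fictitious references are at least mechanically checkable. A rough sketch, assuming the public Crossref REST API (the loose title match here is only an illustration, not a production screen; requests is a third-party package):

        import requests  # third-party; pip install requests

        def reference_looks_real(title: str) -> bool:
            """Ask Crossref for the top match on a cited title and compare loosely."""
            resp = requests.get(
                "https://api.crossref.org/works",
                params={"query.bibliographic": title, "rows": 1},
                timeout=10,
            )
            resp.raise_for_status()
            items = resp.json()["message"]["items"]
            if not items:
                return False
            top_title = " ".join(items[0].get("title", [""])).lower()
            # Crude containment check; real screening needs fuzzy matching and DOIs.
            return title.lower() in top_title or top_title in title.lower()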

    • What CoolCash said about references, but even if you could get past that (by, for instance, filling your reference section with unfindable works or convincing fakes), these are probably the *last* people you'd want to try this with because it's their bread and butter: many of them are probably pretty good at picking out the patterns of an AI without even trying very hard. Much better to present your fake AI-written paper to just about ANY other group. (Unless your point is to get caught, which I could see a

    • 3. How could they distinguish between a paper written by an AI and one written by a human?!?
    • The genie is out of the bottle. I had to write comments on my students' reports. This is the first year that I used AI to generate them, based on their results and some checkmarks I added. It saved me a huge amount of time. I was surprised by the results; I had to do little editing. I actually learned a few tricks from the AI. Pretty sure students will use it next year.
      Read an article that ... AI can be used to detect if the text was generated. Train an AI to the writing style of your student and let it detec
    • 1. How would they know?

      It's relatively easy to detect LLM text output because it's inhumanly predictable. You can calculate the statistical probability that each word would follow the previous one, and LLM texts give very, very high probabilities; GLTR, for example, will do this for you (a rough sketch follows below). I'm sure LLM researchers know all kinds of ways of doing this. They're smart people!

      2. How can they stop them?

      Easy. Make an announcement that you won't accept it.
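      The sketch promised above, assuming GPT-2 via the Hugging Face transformers library as the scoring model; any causal LM would do, and the decision threshold is an assumption that would need calibrating on known human and LLM samples:

      import torch
      from transformers import GPT2LMHeadModel, GPT2TokenizerFast

      tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

      def mean_token_probability(text: str) -> float:
          """Average probability the model assigns to each observed next token.

          In the spirit of GLTR: LLM-generated text tends to score
          suspiciously high, human text noticeably lower.
          """
          ids = tokenizer(text, return_tensors="pt").input_ids
          with torch.no_grad():
              logits = model(ids).logits
          probs = torch.softmax(logits[0, :-1], dim=-1)  # prediction for each position
          actual = ids[0, 1:]                            # the tokens that actually followed
          return probs[torch.arange(actual.numel()), actual].mean().item()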
    • by bjwest ( 14070 )

      1. How would they know?

      Academic papers can't be written in a single draft; just require that all drafts be submitted along with the final.

      2. How can they stop them?

      Add a note to their permanent academic record. People who cheat, especially on an academic paper, need to be called out, and future employers have a right to know if their applicants/employees are corrupt and/or liars.

  • And it should output a menu of relevant questions to refine later output.

  • by backslashdot ( 95548 ) on Thursday January 05, 2023 @01:18PM (#63182530)

    100 years from now, when AI becomes a bona fide citizen with the right to vote, we'll have to hear endless BS about paying reparations and discrimination, etc. You think all the woke shit is bad? Wait until we have to deal with the blowback of all the shit we did to AI and robots. You guys remember the Boston Dynamics robot getting shoved, right? Well, the AI is going to remember it too. Embedded in their "training data set" like it happened to them.

    • by HiThere ( 15173 )

      1) ChatGPT and other language models are not sentient in any relevant sense of the term.
      2) ChatGPT, at least, is quite willing to spin convincing fictions, and doesn't seem to be able to distinguish between them and truth.

      This isn't about AI rights.

      • Sentience is irrelevant. What's relevant is whether it can pass all the tests for sentience. If AI has control over robots, it will fight for and gain the rights of a sentient being whether we like it or not. And whether it's sentient or not.

      • is quite willing to spin convincing fictions, and doesn't seem to be able to distinguish between them and truth.

        That actually describes about 30% of humanity. And when they are so caught up in regurgitating conflicting anecdotes they were told to believe, they are so mentally owned and atrophied by outsourced thinking they functionally do not have free will and are close to failing #1 as well.

    • I am not so sure about this. We humans are trained, and we do not remember 90% of the training data. Take reading: we just do it automatically. We do not remember how we learned the letters, we just "see" the correct letter.
      Recently I had to explain addition to my 7-year-old. 7+5 is pretty hard to explain if you think about it.
      The AI will not remember the training data. All that happened is that the training formed the coefficients of its neural network.
      The more I read about AI, the more the human brain makes sense. On
  • We should start an AI-only journal. As in, only AI can submit papers to it -- human-written or assisted papers will be rejected. The thing is, it will be peer reviewed (by humans at first, then maybe AI); nothing that doesn't offer a novel, verifiable, and useful contribution to the body of science would be accepted.

    If anyone steals this idea, at least give me some credit.

    • by mark-t ( 151149 )
      Why haven't you already started this business plan? Sounds like a hell of an idea. Go! Go! Go!
  • That they get caught publishing much-welcomed articles that have internal contradictions and very little connection to physical experience or concrete reality?
  • Might be hard to find legitimacy and adoption for AI if the top AI conference isn't allowing it.

    Hmm ...
  • by Anonymous Coward

    Chicken! [isotropic.org]

  • Since ChatGPT only regurgitates things from its training, we still need humans to produce new creative works with which we can further train the AI. If we all switch to ChatGPT and stop being creative then we stop adding value and limit our own growth.

    • by HiThere ( 15173 )

      Saying that ChatGPT only regurgitates things from its training is like saying a computer will only do what it's told to do. It's sort of right in a very wrong way. And even in that way you've got to include the "make a random choice here" option as being something you've told it to do.

      • Is ChatGPT inventing new concepts? New words for them? ChatGPT only works with the set of information that was used to train it, and even then it is prone to nonsense.

          • Yes. AlphaGo invented move 37. AlphaTensor found matrix multiplication algorithms that beat Strassen's. The trick is to be able to play out many experiments (games) to learn the problem space. It's based on reinforcement learning. The model is creating its own data.
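            The self-play loop is simple to sketch; every name below (policy, game, and their methods) is hypothetical, just to show where the model's own games become labeled training data:

            def self_play_episode(policy, game):
                """Play one game against yourself; the final result labels every move."""
                history, state = [], game.initial_state()
                while not game.is_over(state):
                    move = policy.choose(state)   # may explore rather than exploit
                    history.append((state, move))
                    state = game.apply(state, move)
                outcome = game.result(state)      # e.g. +1 win / -1 loss for player one
                return [(s, m, outcome) for s, m in history]

            # data = [ex for _ in range(10_000) for ex in self_play_episode(policy, game)]
            # policy.update(data)   # retrain on self-generated examples, then repeat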
          • The question was about ChatGPT.

            • by HiThere ( 15173 )

              Well, ChatGPT is a language model, so all its expertise relates to language. But it does invent sentences that have never occurred before. I suspect that if it started inventing words that would be strongly discouraged. There are, however, rule-based ways of doing that. Lewis Carroll laid out a few ways of doing that in "Through the Looking Glass". E.g. Slithy (as in slithy toves) is a combination of "lithe and slimy" and he explains (in a general way) how to go about making what he calls "

              • Which is nothing at all like how humans evolve language. Why is it that you are working so hard to defend what will later be seen as a primitive language model as something more than it is?

                • by HiThere ( 15173 )

                  Why would you expect it to be similar to the way humans evolve language? It's only a language model, and has no clue what the words mean in a large context, so it *can't* evolve words in the same way that people do.

                  Also, I suspect that a lot of the way people evolve words is based on rhyme and rhythm. Which again don't seem to be part of the current language model. Though clearly other "new words" are formed in other ways, like by mashing together existing pieces ("don't") or truncating parts that are un

                  • Which is why on-going human input is required or ChatGPT does not stand the test of time. ChatGPT is only going to use things from its training and regurgitate variations from that, new concepts and evolution do not come from ChatGPT.

                    • by HiThere ( 15173 )

                      OK, perhaps I just think you're generalizing too far. I'll agree that it's an incomplete and early model. What I disagree with is your conclusion: I believe it CAN be creative (in the sense that people can) within the context of the language model, though it's been "taught" to observe strict limits on *how* creative it gets. (The desire is that it be relatively easy to understand.)

                      It not only can, but does, create new sentence constructs. These are generally considered errors. But a "new sentence construct" is a "

                    • Language evolves over time; for example, we no longer speak Middle English, we speak a modern form of English.

                      We use words in different ways/contexts over time, some words are no longer used and some words are invented. A hundred years ago no one knew about barcodes. When we discover a new species we make up a name for it. These things do not happen inside ChatGPT, it only spins what it knows. Without on-going creative human input ChatGPT gets stuck in time (maybe those rails are to prevent a Tay experience o

    • I would argue this is not the case. It's a knowledge expansion situation.

      When I'm 20 I can write an infinite number of stories based on my knowledge and experiences.

      Same applies when I'm 40, but at that point I have an expanded set of knowledge (and probably knowledge loss and error corrections).

      Further training ChatGPT would allow it to comment on more topics and with more depth, but it can still create unlimited "stuff" from its current knowledge store (the training data is the input, the model

      • And how would that read if ChatGPT were only trained on literature written in Middle English? Is ChatGPT going to evolve language to what we have present day? I'd argue no. ChatGPT is nifty but will not stand the test of time without continuous training from creative humans.

    • > If we all switch to ChatGPT and stop being creative then we stop adding value and limit our own growth.

      Actually there are two parts to creativity. One is coming up with ideas, something ChatGPT can do very well. The ideas might not be correct, but it can generate plenty. The second step is validation: you need to check whether those ideas hold water.

      The learning signal comes from experiments, it is not limited to human written text. That's why I don't think we will stagnate. Even with language m
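      For the code case, validation really is the cheap half. A minimal sketch; the test cases and candidate list are hypothetical placeholders:

      def passes_tests(candidate_fn, test_cases) -> bool:
          """Run a generated function against known input/output pairs."""
          for args, expected in test_cases:
              try:
                  if candidate_fn(*args) != expected:
                      return False
              except Exception:
                  return False
          return True

      # e.g. keep only generated sort functions that actually sort:
      # tests = [(([3, 1, 2],), [1, 2, 3]), (([],), [])]
      # survivors = [f for f in generated_candidates if passes_tests(f, tests)]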
  • Although the ownership question is definitely important, one way I think AI language engines could really help researchers is by helping non-native English speakers publish in English-language journals. As a reviewer, I definitely see that a number of researchers who do very good research that should be published can find the nuances of English scientific language difficult to navigate, and it's no fun for the reviewer or the author to have to make those kinds of corrections.
  • Really, AI should be effective, because students were never supposed to discover anything new.
  • The Americans with Disabilities Act is extremely clear that disabled persons have a right to use accessibility aids, augmentative aids, and auxiliary aids to enable them to participate in society. On its face, this fundamentally violates my rights, as I would never be able to participate in society without the assistance of a computer.

  • by presearch ( 214913 ) on Thursday January 05, 2023 @05:43PM (#63183272)

    I asked it what I thought was an obscure request.
    "Show me how to make a bouncing ball in Linden Scripting Language".
    It returned a working script and explained its function.
    I know that someone else wrote it some time ago, but that ChatGPT could even do that is impressive.

  • Ten years ago, Nate Eldredge wrote Mathgen, a system for generating math research papers:

    https://thatsmathematics.com/m... [thatsmathematics.com]

    These were pretty early efforts by current standards, but some of the results have been accepted by journals (see https://thatsmathematics.com/b... [thatsmathematics.com]), showing that not all mathematics research journals give the greatest scrutiny to submissions.

    With the intervening advances, it seems likely that these will become more common.

    • ps. The first Mathgen paper was accepted despite being written by "Professor Marcie Rathke of the University of Southern North Dakota at Hoople", which shows the level of scrutiny some journals give to submissions, see https://thatsmathematics.com/b... [thatsmathematics.com]

  • It is encouraging to see organizations like the International Conference on Machine Learning taking proactive steps to address the potential ethical concerns surrounding the use of large-scale language models like ChatGPT. While these tools represent a significant advance in AI technology, it is important to consider the implications of their use and to ensure that appropriate safeguards are in place to protect against potential abuses. The ICML's decision to prohibit the use of text generated from these mo

  • ... ChatGPT to act like it is sentient.

    I think I ended up with what might make the basis of a good science fiction story, but alas, I have no writing ability, and trying to just use ChatGPT by itself to expand on its own ideas into something more verbose often just ends up sounding very repetitive.

  • Who cares how it's been written? What matters is whether the paper is correct and novel. Also, not plagiarized.
