Top AI Conference Bans Use of ChatGPT and AI Language Tools To Write Academic Papers (theverge.com) 64
One of the world's most prestigious machine learning conferences has banned authors from using AI tools like ChatGPT to write scientific papers, triggering a debate about the role of AI-generated text in academia. From a report: The International Conference on Machine Learning (ICML) announced the policy earlier this week, stating, "Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper's experimental analysis." The news sparked widespread discussion on social media, with AI academics and researchers both defending and criticizing the policy. The conference's organizers responded by publishing a longer statement explaining their thinking.
According to the ICML, the rise of publicly accessible AI language models like ChatGPT -- a general purpose AI chatbot that launched on the web last November -- represents an "exciting" development that nevertheless comes with "unanticipated consequences [and] unanswered questions." The ICML says these include questions about who owns the output of such systems (they are trained on public data, which is usually collected without consent and sometimes regurgitate this information verbatim) and whether text and images generated by AI should be "considered novel or mere derivatives of existing work."
AI banned from AI conference (Score:5, Funny)
Oh, the AIrony...
Re: (Score:2)
Agreed, but I can't quite decide on whether it should be +1 or -1... decisions, decisions...
Re: (Score:2)
do you need assistance? type yes or no.
Re: (Score:2)
nyoes?
Re: (Score:2)
i was gonna say oh the AImanity, but like yours better
Re: (Score:2)
i just had an aipiphany: this is the prophesied singularity: the machine can produce credible papers faster than we can peer review them. we're very screwed! =:o
Re: (Score:3)
Then we'll just need a paperclip maximizer to hold them all together... uh oh...
Probably just trying to get rid of the trolls (Score:3)
They are almost certainly awash with paper submissions from trolls who want to get a generator-created paper past the reviewers for some e-fame without doing any real work.
A new source for science? (Score:2)
They are almost certainly awash with paper submissions from trolls who want to get a generator-created paper past the reviewers for some e-fame without doing any real work.
That's certainly true, there's probably no way to stop it, and I don't know if there's even any way of *detecting* when it happens.
That being said... I wonder if there's an opportunity here for a new research paradigm.
Suppose someone fires up ChatGPT and has it author 20 papers on results that don't exist yet but are likely true. Be sure to ask for topics that ChatGPT would consider socially valuable, things that humans would consider valuable information.
Rank order these by value, then get a gaggle(*) o
Re: (Score:2)
Validation itself is hard. One way is to generate multiple solutions to a problem and take the majority answer (a toy sketch of this follows below). Another is to use software tests, for code. In math there are ways to check a result that are often simpler than finding the solution in the first place. Anything that looks like a game can be
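Picking up the majority-answer idea above, here is a toy Python sketch. `generate_solution` is a hypothetical stand-in for a call to a language model; it is simulated here with a noisy solver so the snippet runs on its own.

```python
import random
from collections import Counter

def generate_solution(problem: str) -> str:
    # Hypothetical stand-in for a language-model call. In practice this
    # would send `problem` to an LLM; here we simulate a solver that is
    # right most of the time but occasionally wrong.
    return random.choice(["12", "12", "12", "13"])

def majority_vote(problem: str, n_samples: int = 7) -> str:
    # Sample several independent answers and keep the most common one.
    answers = [generate_solution(problem) for _ in range(n_samples)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer

print(majority_vote("What is 7 + 5?"))
```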
How.... (Score:5, Insightful)
Re:How.... (Score:5, Interesting)
ChatGPT is quite willing to create papers with references. Often, though, the references are either irrelevant or fictitious. Whoops!
Right now it makes perfect sense to prohibit papers by ChatGPT, even as a co-author. It's too good at bullshitting. In contrast, papers where ChatGPT is a data source should be fine.
Re: (Score:2)
What CoolCash said about references, but even if you could get past that (by, for instance, filling your reference section with unfindable works or convincing fakes), these are probably the *last* people you'd want to try this with because it's their bread and butter: many of them are probably pretty good at picking out the patterns of an AI without even trying very hard. Much better to present your fake AI-written paper to just about ANY other group. (Unless your point is to get caught, which I could see a
Re: (Score:3)
Read an article that
Re: (Score:2)
It's relatively easy to detect LLM output because it's inhumanly predictable. You can calculate the statistical probability that each word would follow the previous one, and LLM texts give very, very high probabilities; GLTR will do this for you, for example (a rough sketch of the idea follows this comment). I'm sure LLM researchers know all kinds of ways of doing this. They're smart people!
2. How can they stop them?
Easy. Make an announcement that you won't accept it.
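A rough sketch of the per-token probability check from point 1, using the Hugging Face transformers GPT-2 model as the scoring model (GLTR is built on GPT-2 as well); this illustrates the idea, it is not GLTR's actual code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_logprob(text: str) -> float:
    # Score the text under GPT-2: a suspiciously high average token
    # probability (i.e. low surprise) hints that the text may have been
    # generated by a similar model.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean negative log-likelihood per token.
    return -out.loss.item()

print(mean_token_logprob("The quick brown fox jumps over the lazy dog."))
```

A single threshold on this score is crude; GLTR instead visualizes how highly the model ranks each word, but the underlying signal is the same.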
Re: (Score:3)
1. How would they know?
Academic papers can't be written in a single draft; just require that all drafts be submitted with the final.
2. How can they stop them?
Add a note to their permanent academic record. People who cheat, especially on an academic paper, need to be called out, and future employers have a right to know if their applicants/employees are corrupt and/or liars.
Input a question into a language model.... (Score:1)
And it should output a menu of relevant questions to refine later output.
Blowback coming soon. (Score:4, Insightful)
100 years from now, when AI becomes a bona fide citizen with the right to vote, we'll have to hear endless BS about paying back reparations and discrimination etc. You think all the woke shit is bad, wait until we have to deal with the blowback of all the shit we did to AI and robots. You guys remember the Boston Dynamics robot getting shoved right? Well the AI is going to remember it too. Embedded in their "training data set" like it happened to them.
Re: (Score:2)
1) ChatGPT and other language models are not sentient in any relevant sense of the term.
2) ChatGPT, at least, is quite willing to spin convincing fictions, and doesn't seem to be able to distinguish between them and truth.
This isn't about AI rights.
Re: (Score:2)
Sentience is irrelevant. What's relevant is whether it can pass all the tests for sentience. If AI has control over robots, it will fight for and gain the rights of a sentient being whether we like it or not. And whether it's sentient or not.
Re: (Score:2)
Sentience is irrelevant. What's relevant is whether it can pass all the tests for sentience. If AI has control over robots, it will fight for and gain the rights of a sentient being whether we like it or not. And whether it's sentient or not.
I've seen some of the crap produced by ChatGPT; I guess it is as dumb as some people I've met...
Re: (Score:2)
Only because it will be convincing to stupid people.
Re: Blowback coming soon. (Score:1)
So, just like pornography. Close enough for stupid people? That makes me afraid.
Re: (Score:2)
https://www.npr.org/2022/06/16... [npr.org]
Re: (Score:2)
is quite willing to spin convincing fictions, and doesn't seem to be able to distinguish between them and truth.
That actually describes about 30% of humanity. And when they are so caught up in regurgitating the conflicting anecdotes they were told to believe, they are so mentally owned and atrophied by outsourced thinking that they functionally do not have free will and are close to failing #1 as well.
Re: (Score:2)
Recently I had to explain addition to my 7-year-old. 7+5 is pretty hard to explain if you think about it.
The AI will not remember the training data. All that happened is that it formed the coefficients of its neural network.
The more I read about AI, the more the human brain makes sense. On
AI only journal (Score:2)
We should start an AI-only journal. As in, only AI can submit papers to it; human-written or assisted papers will be rejected. The thing is, it will be peer reviewed (by humans at first, then maybe AI); nothing that doesn't offer a novel, verifiable, and useful contribution to the body of science would be accepted.
If anyone steals this idea, at least give me some credit.
What are they afraid of? (Score:1)
effects of this (Score:2)
Hmm
Could an AI genrate a "Chicken" paper? (Score:1)
Chicken! [isotropic.org]
Re: (Score:3)
This is brilliant, and should win an award for outstanding scientific research.
Re: (Score:2)
The flow charts have me dying.
human value and growth (Score:2)
Since ChatGPT only regurgitates things from its training, we still need humans to produce new creative works with which we can further train the AI. If we all switch to ChatGPT and stop being creative then we stop adding value and limit our own growth.
Re: (Score:3)
Saying that ChatGPT only regurgitates things from its training is like saying a computer will only do what it's told to do. It's sort of right in a very wrong way. And even in that way you've got to include the "make a random choice here" option as being something you've told it to do.
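That "make a random choice here" step is just sampling the next token from a probability distribution, usually with a temperature knob. A minimal sketch follows, with a made-up vocabulary and made-up logits purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, vocab, temperature=0.8):
    # Lower temperature sharpens the distribution (more predictable text);
    # higher temperature flattens it (more surprising word choices).
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

# Made-up logits for a tiny vocabulary, purely for illustration.
vocab = ["cat", "dog", "quasar", "teapot"]
logits = [2.0, 1.5, 0.1, -1.0]
print(sample_next_token(logits, vocab))
```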
Re: (Score:2)
Is ChatGPT inventing new concepts? New words for them? ChatGPT only works with the set of information that was used to train it, and even then it is prone to nonsense.
Re: (Score:2)
The question was about ChatGPT.
Re: (Score:2)
Well, ChatGPT is a language model, so all its expertise relates to language. But it does invent sentences that have never occurred before. I suspect that if it started inventing words that would be strongly discouraged. There are, however, rule-based ways of doing that. Lewis Carroll laid out a few ways of doing that in "Through the Looking Glass". E.g. "slithy" (as in slithy toves) is a combination of "lithe and slimy", and he explains (in a general way) how to go about making what he calls "
Re: (Score:2)
Which is nothing at all like how humans evolve language. Why is it that you are working so hard to defend what will later be seen as a primitive language model as something more than it is?
Re: (Score:2)
Why would you expect it to be similar to the way humans evolve language? It's only a language model, and has no clue what the words mean in a large context, so it *can't* evolve words in the same way that people do.
Also, I suspect that a lot of the way people evolve words is based on rhyme and rhythm. Which again don't seem to be part of the current language model. Though clearly other "new words" are formed in other ways, like by mashing together existing pieces ("don't") or truncating parts that are un
Re: (Score:2)
Which is why ongoing human input is required, or ChatGPT will not stand the test of time. ChatGPT is only going to use things from its training and regurgitate variations on them; new concepts and evolution do not come from ChatGPT.
Re: (Score:2)
OK, perhaps I just think you're generalizing too far. I'll agree that it's an incomplete and early model. What I disagree with is the idea that it can't be creative: I believe it CAN be creative (in the sense that people can) within the context of the language model, though it's been "taught" to observe strict limits on *how* creative it gets. (The desire is that it be relatively easy to understand.)
It not only can, but does, create new sentence constructs. These are generally considered errors. But a "new sentence construct" is a "
Re: (Score:2)
Language evolves over time; for example, we no longer speak Middle English, we speak a modern form of English.
We use words in different ways/contexts over time; some words are no longer used and some words are invented. A hundred years ago no one knew about barcodes. When we discover a new species we make up a name for it. These things do not happen inside ChatGPT; it only spins what it knows. Without on-going creative human input ChatGPT gets stuck in time (maybe those rails are to prevent a Tay experience o
Re: (Score:3)
I would argue this is not the case. It's a knowledge expansion situation.
When I'm 20 I can write an infinite number of stories based on my knowledge and experiences.
Same applies when I'm 40, but at that point I have an expanded set of knowledge (and probably knowledge loss and error corrections).
Further training ChatGPT would allow it to comment on more topics and with more depth, but it can still create unlimited "stuff" from its current knowledge store (the training data is the input, the model
Re: (Score:2)
And how would that read if ChatGPT were only trained on literature written in Middle English? Is ChatGPT going to evolve language to what we have present day? I'd argue no. ChatGPT is nifty, but it will not stand the test of time without continuous training from creative humans.
Re: (Score:2)
Actually there are two parts to creativity. One is coming up with ideas, something ChatGPT can do very well. They might not be correct, but it can generate plenty. And the second step is validation: you need to check whether those ideas hold water.
The learning signal comes from experiments; it is not limited to human-written text. That's why I don't think we will stagnate. Even with language m
Good for non-native speakers (Score:1)
Goal: Validate Professor's Existing Knowledge (Score:2)
The law says they can't (Score:2)
The Americans with Disabilities Act is extremely clear that disabled persons have a right to use accessibility aids, augmentative aids, and auxiliary aids to enable them to participate in society. On its face, this fundamentally violates my rights, as I would never be able to participate in society without the assistance of a computer.
It's pretty good (Score:3)
I asked it what I thought was an obscure request.
"Show me how to make a bouncing ball in Linden Scripting Language".
It returned a working script and explained its function.
I know that someone else wrote it some time ago, but that ChatGPT could even do that is impressive.
Mathgen automatically generates mathematics papers (Score:2)
Ten years ago, Nate Eldredge wrote Mathgen, a system for generating math research papers:
https://thatsmathematics.com/m... [thatsmathematics.com]
These were pretty early efforts by current standards, but some of the results have been accepted by journals (see https://thatsmathematics.com/b... [thatsmathematics.com]), which shows that not all mathematics research journals give their submissions the greatest scrutiny.
With the intervening advances, it seems likely that these will become more common.
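For context, Mathgen (like the SCIgen project it builds on) is essentially a large hand-written context-free grammar that gets recursively expanded into LaTeX. The toy Python sketch below illustrates that general mechanism with an invented grammar; it is nothing like Mathgen's actual rules.

```python
import random

# Toy context-free grammar, invented for illustration. Each non-terminal
# expands to one of several templates until only plain words remain.
GRAMMAR = {
    "SENTENCE": [
        "We prove that every ADJ NOUN is ADJ.",
        "It is a classical result that the NOUN of a ADJ NOUN is ADJ.",
    ],
    "ADJ": ["compact", "left-invariant", "almost surely prime"],
    "NOUN": ["functor", "manifold", "subgroup"],
}

def expand(token: str, rng: random.Random) -> str:
    symbol = token.rstrip(".,")          # keep trailing punctuation intact
    suffix = token[len(symbol):]
    if symbol not in GRAMMAR:
        return token
    template = rng.choice(GRAMMAR[symbol])
    return " ".join(expand(t, rng) for t in template.split()) + suffix

print(expand("SENTENCE", random.Random(42)))
```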
Re: (Score:2)
ps. The first Mathgen paper was accepted despite being written by "Professor Marcie Rathke of the University of Southern North Dakota at Hoople", which shows the level of scrutiny some journals give to submissions, see https://thatsmathematics.com/b... [thatsmathematics.com]
This comment was generated by ChatGPT (Score:2)
You know what is a lot of fun, is getting.... (Score:2)
I think I ended up with what might make the basis of a good science fiction story, but alas, I have no writing ability, and trying to just use ChatGPT by itself to expand on its own ideas into something more verbose often just ends up sounding very repetitive
Why? (Score:2)
Who cares how it's been written? What matters is whether the paper is correct and novel. Also, not plagiarized.