Researchers Warned Against Using AI To Peer Review Academic Papers (semafor.com)
Researchers should not be using tools like ChatGPT to automatically peer review papers, warned organizers of top AI conferences and academic publishers worried about maintaining intellectual integrity. From a report: With recent advances in large language models, researchers have been increasingly using them to write peer reviews -- a time-honored academic tradition that examines new research and assesses its merits, showing a person's work has been vetted by other experts in the field. Asking ChatGPT to analyze manuscripts and critique the research without having read the papers undermines that process. To tackle the problem, AI and machine learning conferences are now considering updating their policies, as some guidelines don't explicitly ban the use of AI to process manuscripts, and the language can be fuzzy.
The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is considering setting up a committee to determine whether it should update its policies around using LLMs for peer review, a spokesperson told Semafor. At NeurIPS, researchers should not "share submissions with anyone without prior approval" for example, while the ethics code at the International Conference on Learning Representations (ICLR), whose annual confab kicked off Tuesday, states that "LLMs are not eligible for authorship." Representatives from NeurIPS and ICLR said "anyone" includes AI, and that authorship covers both papers and peer review comments. A spokesperson for Springer Nature, an academic publishing company best known for its top research journal Nature, said that experts are required to evaluate research and leaving it to AI is risky.
Re:Fraud (Score:5, Informative)
Re: (Score:3)
Here's how I look at it: generative AI is designed to create plausible-looking output in response to a prompt. That's how the lawyer who submitted a ChatGPT-generated brief got caught. The references the AI generated for the brief looked plausible enough to pass a cursory inspection, even by an expert, but if you looked them up, they didn't exist.
This wasn't a failure of the AI; the AI did exactly what it was designed to do. It was the fault of people who relied on it to do something it wasn't designed to do.
Re: (Score:2)
Re: (Score:1)
Exactly. Researchers should be teaming with AI to find fraud, bias and plagiarism. But AI shouldn't have the final word since it is just an investigative tool.
It sounds like some academics are fearful of losing their jobs or status because AI might be good at this job.
If they have nothing to hide then they have nothing to worry about.
Re: (Score:3)
Peer reviewers are volunteers who don't get paid; at least, payment isn't the norm in science. Arguably they should be, given the importance of the task. If you've ever seen peer review comments, some of them are obviously phoned in. Occasionally institutions will offer honoraria for reviewing proposals -- typically $200 or so. This is not a lot of money considering how much work it is.
What if your "peers" are just a$$holes? (Score:2)
People suck as a general rule, and you can't tell me that some "peers" haven't deliberately tried to bork someone else's career in the process of reviewing a paper.
Seems to me that an AI, assuming that it was trained honestly, which is another matter entirely, would review the paper without bias.
Much more likely your peers are in a circle jerk. (Score:2)
If peer review actually works, the asshole tanking another's work will be called out on it by other peers. See how that works?
Unbiased by Reality (Score:3)
Seems to me that an AI, assuming that it was trained honestly...would review the paper without bias.
That's certainly true, but unfortunately current AIs are often unbiased by reality, which, when reviewing a science paper, is generally not a good thing. That's the problem with predictive text engines: all they do is predict what word looks "best" next in the sentence.
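To make the "predictive text engine" point concrete, here is a minimal, purely illustrative Python sketch; the bigram table and probabilities are invented for this example, not taken from any real model. At each step the loop just picks whichever next token scores highest, with no check against reality.

    # Toy sketch of greedy next-token prediction (all probabilities made up).
    toy_model = {
        ("the", "results"): {"show": 0.6, "suggest": 0.3, "contradict": 0.1},
        ("results", "show"): {"a": 0.5, "significant": 0.4, "nothing": 0.1},
    }

    def next_token(context):
        """Pick whichever token 'looks best' (highest probability) next."""
        candidates = toy_model.get(context, {})
        return max(candidates, key=candidates.get) if candidates else None

    tokens = ["the", "results"]
    for _ in range(2):
        tok = next_token(tuple(tokens[-2:]))
        if tok is None:
            break
        tokens.append(tok)

    print(" ".join(tokens))  # "the results show a" -- plausible-sounding, never verified

The output is fluent, but nothing in the loop ever asks whether the claim is true, which is exactly the problem when the task is judging a science paper.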
Re: (Score:2)
It's really doubtful that the AI could spot any of the mistakes that would cause someone to reject a paper (even if the reviewer did agree
Re: (Score:2)
People suck as a general rule, and you can't tell me that some "peers" haven't deliberately tried to bork someone else's career in the process of reviewing a paper.
Definitely that. But also there's a broader phenomenon. In any given field some sub areas are incredibly protective, combative and outright nasty and others are full of fluffy bunnies and unicorns. The latter tend to be much more pleasant and give each other nice, well written, thoughtful reviews rather than performatively piling on heaps of invective.
Contradiction (Score:3)
There is an interesting logical contradiction in the whole concept of using AI to review papers.
First - disclaimer - I think the idea is ethically and intellectually wrong and corrupt. If an editor pays you the compliment of asking for your review, it implies you have achieved some credibility and recognition for your own prior work. One would think you are intellectually stalwart, robust, and curious enough to do the work yourself. The review process itself should be a joy and a satisfaction for the reviewer. If you are a cheater who uses AI to write a review, perhaps you also used AI to earn the credentials that won you the invitation. Some cheaters and liars don't get caught, but many do, and if you are caught, it might jeopardize your entire body of prior work, your credibility, and your career. If you cannot see the fun in using your own brain to write a thoughtful review, decline the invitation. And if it catches up with you, good.
Now, the contradiction:
Research papers are supposed to present new information, new data, new analysis. Getting down to the finest details and points of a particular manuscript, there is nothing it can be compared to, because it is unique and novel. So how can an AI have been trained on what is being reviewed? It cannot. It might estimate or approximate based on prior art and evidence. But it risks concluding that the new work is invalid simply because it is inconsistent with, or contradicts, old information it has already "learned".
Maybe it can catch plagiarism, but I cannot see any validity to it pretending to review bona fide new or novel research in a bona fide legitimate manuscript.
(And then, what if the work and the paper were BS to begin with, worthy of negative reviews and rejection, but nonetheless the public AI is allowed to learn or retrain on what it just read? This kind of update retraining seems to be a trend among AI dudes talking about how their company is "improving" their models. If that happens, then in essence that bad paper has been published anyway, bypassing controls and degrading the collective body of knowledge.)
The whole idea of AI to do a review is just plain stupid - ethics, morals, integrity, and intellectualism aside.
Re: (Score:1)
It's a bad idea for processing new information. It may be a good idea for quickly catching the following:
- Plagiarism
- Regurgitation (unintended plagiarism).
- Folks using AI (simply a restatement of the above points)
If you tasked AI with the above, you'd have more mental bandwidth to do a quality review.
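As a concrete illustration of the kind of mechanical check that could be delegated, here is a minimal Python sketch of n-gram overlap screening for regurgitation. The sample texts, threshold, and helper names are invented for this example, and real plagiarism detectors are far more sophisticated; the flag is only a pointer for the human reviewer, who still does the quality review.

    # Toy screening heuristic (an assumption, not any publisher's actual tooling):
    # flag a submission whose word 5-grams heavily overlap a prior text.
    def ngrams(text, n=5):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(candidate, prior, n=5):
        """Jaccard similarity of word n-grams; 1.0 means identical phrasing."""
        a, b = ngrams(candidate, n), ngrams(prior, n)
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    prior_paper = "we trained the model on the benchmark and report state of the art accuracy"
    submission  = "we trained the model on the benchmark and report state of the art results"

    score = overlap_score(submission, prior_paper)
    if score > 0.3:  # threshold is arbitrary; a human still makes the call
        print(f"possible regurgitation, overlap {score:.2f}")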
Wait until the students graded by AI enter... (Score:2)
Two issues (Score:2)
"Sharing" papers with "AI" has two separate issues.
The first issue is privacy. Some cloud-based AI engines incorporate queries into their model. Just as most companies forbid leaking confidential data to these cloud-based AI engines, conferences likewise forbid leaking submitted papers. It's not so much that the papers cannot be shared at all, because secondary reviewers are usually allowed, but the primary reviewer has the responsibility of ensuring the confidentiality of the paper while it is in the hands of the secondary reviewer.