Researchers Warned Against Using AI To Peer Review Academic Papers (semafor.com)

Researchers should not be using tools like ChatGPT to automatically peer review papers, warned organizers of top AI conferences and academic publishers worried about maintaining intellectual integrity. From a report: With recent advances in large language models, researchers have been increasingly using them to write peer reviews -- a time-honored academic tradition that examines new research and assesses its merits, showing a person's work has been vetted by other experts in the field. That's why asking ChatGPT to analyze manuscripts and critique the research, without having read the papers, would undermine the peer review process. To tackle the problem, AI and machine learning conferences are now thinking about updating their policies, as some guidelines don't explicitly ban the use of AI to process manuscripts, and the language can be fuzzy.

The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is considering setting up a committee to determine whether it should update its policies around using LLMs for peer review, a spokesperson told Semafor. At NeurIPS, researchers should not "share submissions with anyone without prior approval" for example, while the ethics code at the International Conference on Learning Representations (ICLR), whose annual confab kicked off Tuesday, states that "LLMs are not eligible for authorship." Representatives from NeurIPS and ICLR said "anyone" includes AI, and that authorship covers both papers and peer review comments. A spokesperson for Springer Nature, an academic publishing company best known for its top research journal Nature, said that experts are required to evaluate research and leaving it to AI is risky.

This discussion has been archived. No new comments can be posted.


  • People suck as a general rule and you can't tell me that some "peers" haven't deliberately tried to bork someone else's career in the process of reviewing a paper.
    Seems to me that an AI, assuming that it was trained honestly which is another matter entirely, would review the paper without bias.

    • And rubber stamp each others questionable work so that they can more easily increase their publish count. Publish or perish/find another 'job'.

      If peer review actually works, the asshole tanking another's work will be called out on it by other peers. See how that works?
    • That is why journal papers always have multiple reviewers. If all your reviewers are against your paper, it is unlikely to be for non-paper-related reasons and, if it is, it at least raises some doubt about exactly who the "a$$hole" is in that situation.

      Seems to me that an AI, assuming that it was trained honestly...would review the paper without bias.

      That's certainly true, but unfortunately current AIs are often unbiased by reality which, when reviewing a science paper, is generally not a good thing. That's the problem with predictive text engines: all they do is predict what word looks "best" next in the sentence.
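
      To make "predict the next word" concrete, here is a toy, hand-rolled bigram sketch in Python (purely illustrative; nothing like a real LLM, and the tiny corpus is made up):

      # Toy bigram "language model": count which word most often followed the
      # previous word, then greedily pick that follower -- no notion of truth,
      # only of what looks "best" next.
      from collections import Counter, defaultdict

      corpus = "the results were significant . the results were not reproducible .".split()

      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def predict_next(word):
          candidates = follows.get(word)
          # Greedy choice: the most frequently seen follower (ties go to first seen).
          return candidates.most_common(1)[0][0] if candidates else None

      print(predict_next("results"))  # -> "were"
      print(predict_next("were"))     # -> "significant" (tie broken by first occurrence)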

    • The best-case scenario is that the AI still rejects papers that have new or novel results that don't adhere to existing published results. There are some contentious fields where humans will reject something because it doesn't fit with their hypothesis, but it seems like that usually leads to splinter factions for any theories that have at least some amount of popularity.

      It's really doubtful that the AI could spot any of the mistakes that would cause someone to reject a paper (even if the reviewer did agree
    • People suck as a general rule and you can't tell me that some "peers" haven't deliberately tried to bork someone else's career in the process of reviewing a paper.

      Definitely that. But also there's a broader phenomenon. In any given field some sub-areas are incredibly protective, combative, and outright nasty, and others are full of fluffy bunnies and unicorns. The latter tend to be much more pleasant and give each other nice, well written, thoughtful reviews rather than performatively piling on heaps of invective.

  • by az-saguaro ( 1231754 ) on Wednesday May 08, 2024 @02:47PM (#64457653)

    There is an interesting logical contradiction in the whole concept of using AI to review papers.

    First - disclaimer - I think the idea is ethically and intellectually wrong and corrupt. If you receive the complimentary attention of an editor wanting your review, it implies you have achieved some credibility and recognition for your own prior work. One would think you are intellectually stalwart, robust, and curious enough to do the work yourself. The review process itself should be a joy and satisfaction for the reviewer. If you are a cheater who uses AI to write a review, perhaps you used AI to earn the credentials that won you the invitation. Some cheaters and liars don't get caught, but many do, and if you do, it might jeopardize your entire prior work, credibility, and career. If you cannot see the fun in using your own brain to write a thoughtful review, decline the invitation. And, if it catches up with you, good.

    Now, the contradiction:
    Research papers are supposed to present new information, new data, new analysis. Getting down to the finest details and points made in such-and-such a particular manuscript, there is nothing it can be compared to, because it is unique and novel. So, how can an AI have been trained on what is being reviewed? It cannot. It might estimate or approximate based on prior art and evidence. But it risks coming to the conclusion that the new work is invalid because it is inconsistent or contradictory to old information that it has already "learned".

    Maybe it can catch plagiarism, but I cannot see any validity to it pretending to review bona fide new or novel research in a bona fide legitimate manuscript.

    (And then, what if the work and the paper were BS to begin with, worthy of negative reviews and rejection, but nonetheless the public AI is allowed to learn or retrain on what it just read? This kind of update retraining seems to be a trend among AI dudes talking about how their company is "improving" their models. If that happens, then in essence that bad paper was thereby published, bypassing controls, and having a negative impact on the collective body of knowledge.)

    The whole idea of AI to do a review is just plain stupid - ethics, morals, integrity, and intellectualism aside.

    • >The whole idea of AI to do a review is just plain stupid - ethics, morals, integrity, and intellectualism aside.

      It's a bad idea for processing new information. It may be a good idea for quickly catching the following:
      - Plagiarism
      - Regurgitation (unintended plagiarism).
      - Folks using AI (simply a restatement of the above points)

      If you tasked AI with the above, you'd have more mental bandwidth to do a quality review.
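
      As a rough sketch of that kind of screening, here's a naive TF-IDF cosine-similarity check in Python (the file names are made up, and real plagiarism detectors are far more sophisticated):

      # Flag a submission whose text overlaps heavily with previously published
      # papers. Whole-document TF-IDF cosine similarity is a crude triage signal,
      # not a verdict.
      from pathlib import Path
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      submission = Path("submission.txt").read_text()   # hypothetical input files
      prior = {p.name: p.read_text() for p in Path("prior_papers").glob("*.txt")}

      matrix = TfidfVectorizer(stop_words="english").fit_transform([submission, *prior.values()])
      scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()   # submission vs. each prior paper

      for name, score in sorted(zip(prior, scores), key=lambda pair: -pair[1]):
          flag = "  <-- check by hand" if score > 0.8 else ""
          print(f"{score:.2f}  {name}{flag}")

      Anything it flags still has to be read and judged by a human; the point is only to take the tedious matching off your plate.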
  • ... the science. Suuure, they will totally understand that the money saved on their grades [texastribune.org] was worth trusting "AI", but they themselves are asked not to use AI. That train has left the station. Those growing up learning every day how other people's time is too precious to be wasted on them will certainly not waste their own time in return.
  • "Sharing" papers with "AI" has two separate issues.

    The first issue is privacy. Some cloud-based AI engines incorporate queries into their model. Just like most companies forbid leaking confidential data to these cloud-based AI engines, conferences likewise forbid leaking submitted papers. It's not so much that the papers cannot be shared, because secondary reviewers are usually allowed, but the primary reviewer has the responsibility of ensuring the confidentiality of the paper when it is in the hands of the secondary reviewer.

"...a most excellent barbarian ... Genghis Kahn!" -- _Bill And Ted's Excellent Adventure_

Working...