
Retraction-Prone Editors Identified at Megajournal PLoS ONE (nature.com)
Nearly one-third of all retracted papers at PLoS ONE can be traced back to just 45 researchers who served as editors at the journal, an analysis of its publication records has found. Nature: The study, published in Proceedings of the National Academy of Sciences (PNAS), found that 45 editors handled only 1.3% of all articles published by PLoS ONE from 2006 to 2023, but that the papers they accepted accounted for more than 30% of the 702 retractions that the journal issued by early 2024.
Twenty-five of these editors also authored papers in PLoS ONE that were later retracted. The PNAS authors did not disclose the names of any of the 45 editors. But, by independently analysing publicly available data from PLoS ONE and the Retraction Watch database, Nature's news team has identified five of the editors who handled the highest number of papers that were subsequently retracted by the journal. Together, those editors accepted about 15% of PLoS ONE's retracted papers up to 14 July.
We MUST name and shame publicly! (Score:1)
The scientific community really needs to band together and publicly name and shame every time this happens, because a retracted paper is an admission that they lied and actively tried to harm humanity's advancement to the stars by misrepresenting their own contribution to that future.
Re: (Score:1)
Is it me or are these models starting to actually display some rigor in proving that it really is just a few bad apples and a barrel made of apathy that spoils the bunch?
Re: (Score:1)
There was clearly malicious intent to harm science here, as proven by the fact that the researchers went back to China after they were found out to be malicious spies intentionally trying to damage science for the rest of the world, to make China more competitive on the world stage by sabotaging others.
This is actually extremely common in science right now, in that generally if you get a paper published by somebody with a Chinese name it's probably not correct or accurate. It's an epidemic.
Re: (Score:1)
That or it's a handful of courageous individuals who've gone against the grain and published findings which were unpopular in the community and were ultimately pressured to retract over nonsense in the wake of the response.
Re: (Score:3)
I wish I was 20 and this naive again.
Re:We MUST name and shame publicly! (Score:5, Informative)
a retracted paper is an admission that they lied
This is frequently the case, but not always.
Sometimes an author will retract his own work if he later realizes it was based on faulty data or a faulty process.
Sometimes a publisher will tighten its standards and retract something that was considered acceptable under an earlier, more lenient standard.
Re: (Score:3)
Sometimes the content is unpopular in the scientific community, and a thin excuse is used when the publisher or author, pressured by entities they are associated with, retracts in response to political backlash.
Re:We MUST name and shame publicly! (Score:5, Informative)
https://retractionwatch.com/ [retractionwatch.com]
They did.
Papers are retracted for all kinds of reasons, some of which are fraud and misconduct, but others are admirable.
Re: (Score:2)
https://retractionwatch.com/ [retractionwatch.com]
Also see the PNAS article referenced, "The entities enabling scientific fraud at scale are large, resilient, and growing rapidly": https://www.pnas.org/doi/full/... [pnas.org]
Re: (Score:3)
It's as if it was a bad idea to base hiring and promotion on the number of papers published, and then also pay publishers by the number of papers published.
Someone posted this graph:
https://retractionwatch.com/20... [retractionwatch.com]
WTF happened in 2008/2009? That's when the idea of paying publishers per paper, instead of making them convince a bunch of librarians they weren't frauds, started to take off (PLOS One launched in 2007).
Are the subjects comparable? (Score:4, Insightful)
To point out the obvious, this isn't necessarily evidence of malfeasance. If you look at code contributions at a company, you'll find that a small number of code reviewers miss a disproportionate number of bugs, too, but it is often because they're reviewing code that is hairier than the stuff that other folks are reviewing, making the review process harder.
Are these papers similar to the average paper that the journal(s) normally publish? Are these papers that most people would have refused to review because they seemed questionable even at a glance? Are these papers in areas that are so specialized that nobody can adequately review them, and only a few people were even willing to try?
Do certain groups of authors tend to request the same reviewers because they've worked with them in the past, and is the higher rate of retraction correlated with higher rates of retraction by those specific groups of authors? Or are reviewers assigned randomly as they should be?
Are those reviewers' acceptance rates similar to the acceptance rates of other reviewers? The summary says they handled 1.3% of the papers the journal published and accounted for 30% of the retractions, but that tells us nothing about whether they had a higher acceptance rate than other reviewers. They could easily have published a smaller percentage of papers because they rejected *more* papers, while reviewing papers in areas with a higher rate of mistakes or disagreement about methodology (e.g. maybe they reviewed a disproportionate percentage of meta-analytical papers).
Are these papers being retracted because of things that should have been obvious from reviewing the paper, or were the reasons obvious only after getting more information?
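To put a rough number on the acceptance-rate question above, here is a back-of-the-envelope sketch in Python. It uses only the figures from the summary (1.3% of published papers, ~30% of 702 retractions); the total publication count and the submission volumes are made-up placeholders, since neither appears in the article. The point is that those headline figures pin down the relative retraction rate of the papers these editors accepted, but are compatible with almost any acceptance rate.

```python
# Hypothetical back-of-the-envelope only: TOTAL_PUBLISHED and the submission
# volumes below are placeholders, not real PLoS ONE figures. The retraction
# count and the two shares come from the summary above.

TOTAL_PUBLISHED = 200_000          # placeholder total of published papers
TOTAL_RETRACTIONS = 702            # from the PNAS study, per the summary
EDITOR_SHARE_PUBLISHED = 0.013     # share of published papers the 45 editors handled
EDITOR_SHARE_RETRACTED = 0.30      # share of retractions among papers they handled

their_published = EDITOR_SHARE_PUBLISHED * TOTAL_PUBLISHED
their_retracted = EDITOR_SHARE_RETRACTED * TOTAL_RETRACTIONS
other_published = TOTAL_PUBLISHED - their_published
other_retracted = TOTAL_RETRACTIONS - their_retracted

# Retraction rate among papers they accepted vs. everyone else's papers.
# TOTAL_PUBLISHED cancels out of this ratio, so the placeholder doesn't matter.
ratio = (their_retracted / their_published) / (other_retracted / other_published)
print(f"retraction-rate ratio: {ratio:.1f}x")   # roughly 33x

# Acceptance rate is a separate, unobserved quantity: the same published and
# retracted counts are consistent with wildly different submission volumes.
for submissions_handled in (5_000, 20_000):     # hypothetical volumes
    acceptance_rate = their_published / submissions_handled
    print(f"{submissions_handled:>6} submissions -> acceptance rate {acceptance_rate:.0%}")
```

The ratio comes out the same no matter what placeholder total you pick, because it cancels; the acceptance rate, by contrast, depends entirely on submission volumes the article doesn't report.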
From the portion of the (paywalled) article that I could read, at least some of these look like situations where authors and reviewers were not adequately independent, which is problematic. That is a strong argument for requiring that every paper have at least one peer reviewer picked randomly, by algorithm, by the journal itself.
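For what it's worth, the "randomly picked reviewer" idea is easy to sketch. Below is a minimal, hypothetical illustration in Python, not anything PLoS ONE actually runs: the Reviewer type, the conflict rules (shared affiliation, past co-authorship), and the names are all assumptions made up for the example.

```python
# Minimal sketch of conflict-aware random reviewer assignment: draw one
# reviewer uniformly at random from a pool, excluding anyone with an obvious
# conflict of interest. Conflict rules and data structures are illustrative.
import random
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Reviewer:
    name: str
    affiliation: str
    past_coauthors: frozenset = field(default_factory=frozenset)


def pick_independent_reviewer(authors, author_affiliations, pool, rng=random):
    """Return a randomly chosen reviewer with no declared conflict, or None."""
    eligible = [
        r for r in pool
        if r.name not in authors                      # not an author themselves
        and r.affiliation not in author_affiliations  # no shared affiliation
        and not (r.past_coauthors & set(authors))     # no past co-authorship
    ]
    return rng.choice(eligible) if eligible else None


if __name__ == "__main__":
    pool = [
        Reviewer("R. One", "Univ A"),
        Reviewer("R. Two", "Univ B", frozenset({"P. Author"})),
        Reviewer("R. Three", "Univ C"),
    ]
    # "P. Author" from Univ A submits: R. One (same affiliation) and
    # R. Two (past co-author) are excluded, so only R. Three can be drawn.
    print(pick_independent_reviewer({"P. Author"}, {"Univ A"}, pool))
```

The key design point is that the journal, not the authors, controls both the pool and the random draw, so familiar-reviewer clustering can't quietly build up.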
Witch hunting no. Accountability yes. (Score:5, Insightful)
We don't need to dox, name, and shame these people. That would not only be abhorrent, witch-hunting behavior, but could also significantly disincentivize certain kinds of desirable, legitimate research publication.
However, I am entirely for a measured, rational, effective, and systematic approach to holding people accountable for proven, bad-faith research publication. The bad apples, few though they may be, do enough harm to justify the effort needed to properly disincentivize them.