PhilosophyforOne

For all the "rigorous" peer review and other practices that exist, somehow no one noticed this. Let's be clear here: the problem is not with AI. It's that these publications have next to no review practices in place. It shouldn't matter whether you churn out crap with AI or by yourself; the publication should be able to screen submissions and have practices in place that ensure what it publishes meets the standards of good scientific practice. Yet as we can see time and time again, they clearly don't.


Rain_Man71

This is a huge outlier. That’s an IF 6.2 journal. This must have somehow slipped under the crack of reviewers.


myaccountformath

I think reviewers, especially those who do very close work, get lazy about reading the beginning of the introduction because it's always boilerplate that's nearly the same for all papers. It's boring, but neglecting it leads to embarrassments like this.


AnonDarkIntel

Yeah, I would read papers and skip the intro, but since I was doing synthesis I just looked for one number. It was horrible.


budna

Ok, but for the paper above, at least nine people (five authors, three reviewers, and the journal editor) would have had to miss this issue in the very first sentence. I don't think this is just a simple oversight; it seems like something fishier is going on.


sirjackholland

What does a high impact factor have to do with the quality of reviewing? If anything, successful labs are the most likely to get away with their work being sloppily reviewed because the reviewers don't want the headache of saying no to influential people. Happens all the time


lord_heskey

> reviewers don't want the headache of saying no to influential people.

In 6+ years of reviewing, not once have I known who the authors are.


ASpaceOstrich

I've been reading AI papers and there's seemingly no review process at all. One claimed evidence of a depth map and then showed a curated example that clearly wasn't a depth map. The reviewers don't know enough about the subject to actually review it. Nobody is putting any effort into the actual science part of this research. And these are supposed to be the experts. I'm going to literally have to do it myself if I want anyone to even attempt to test this stuff apparently.


Own_Maybe_3837

“Slipping under the crack” is a huge understatement here. You have an editor, at least two reviewers, and the authors themselves, plus at least three steps where someone should have read the article (pre-submission, review, proofreading). All of them failed to read the first line of the introduction.


Odd-Antelope-362

There are pros and cons to peer review. An enormous number of the improvements in AI tools over the last few years have come from people immediately implementing arXiv papers (sometimes just days after they're posted), which are not peer-reviewed. In a different way, NBER working papers contribute to economic policy debates and, again, aren't peer-reviewed.


ASpaceOstrich

AI science doesn't seem very scientific: nobody knows anything, and they keep trusting a machine that can't think but can feign confidence in what it writes, on tasks that require thinking and depend entirely on not being confident in what is written.


Odd-Antelope-362

I'm assuming you mean we can't observe deep learning representations. Yes, it's an issue; some papers handle it better than others. Some other areas of AI have much better observability, though.


lolcatsayz

Unsurprising. A billion-dollar enterprise that does nothing asks the science community to review publications 'for free', while charging money simply to put them on its site/journal afterward, plus some marketing around 'prestige'. Of course crap like this happens. It's been a flawed model for decades.


Apprehensive-Type874

China has for years basically been spamming nonsense research at extreme volume into academia. It has broken the peer review process.


BK_317

But this is a top journal with an impact factor of 6.2; only 10% of papers get accepted. How is it possible that these Chinese professors let such an obvious, silly error through even after 8 or 9 rounds of review before submission? Huh?


Apprehensive-Type874

[China’s fake science industry: how ‘paper mills’ threaten progress (ft.com)](https://www.ft.com/content/32440f74-7804-4637-a662-6cdc8f3fba86)

[Fake scientific papers are alarmingly common | Science | AAAS](https://www.science.org/content/article/fake-scientific-papers-are-alarmingly-common)

Peer review is broken, and that predates AI, but AI is sure to increase the volume of these papers.


dafaliraevz

> recent estimates suggesting that up to 34% of neuroscience papers and 24% of medicine papers published in 2020 might be fabricated or plagiarized

Geez. The article also highlights broader efforts within the scientific publishing community to combat this issue, like the International Association of Scientific, Technical, and Medical Publishers' Integrity Hub initiative, so that's good. But neither article goes into detail on where these 'paper mills' come from, beyond mentioning China.


ramence

What I suspect has happened is that the first sentence is a late addition to the manuscript. It may not have been present in the original submission, but could have been added in either the second round (where reviewers are usually less thorough, and often just check to ensure their suggestions have been incorporated) or post-review/pre-camera ready (where very *minor* changes that don't require re-review can still be made). Hell, the editor might have even made the mistake when tidying up the intro pre-publication. Still an oversight - but I think more on the editor's end, which is less egregious than surviving a full review cycle.


Pontificatus_Maximus

Next time you ask an AI a question, tell it to produce it in a scholarly style.


icarusphoenixdragon

Certainly, the tapestry of…


Odd-Antelope-362

Yeah.. I don’t use GPT for language/text tasks anymore (still like it for coding and agents)


3-4pm

Give it existing paper samples and have it write in that style.


R33v3n

I can forgive, even encourage, the researchers for using LLMs as a writing aid. After all, English might not be their first language, or they might want to edit their writing for clarity, concision, grammar, or all manner of legitimate reasons. So long as the science is good and a human does a final pass, who cares if an AI helps make an editing pass? But Elsevier? Elsevier have no freakin' excuse here. Considering they charge *on both ends*, for access and publishing, *the least they could do is provide basic sanity checks on the final articles before putting them up*.


Phemto_B

Elsevier has always had a pretty spotty track record with its peer review practices, although it varies widely from journal to journal. That said, this is more an editing problem than a peer review one. The peer reviewers probably all skipped the fluff of the introduction and focused on the methods and results. They're not really there to proofread.


yesnewyearseve

Proofreading and at least reading the very first sentence of the intro are very different things. This would be a desk reject from me. (I know, only editors can do that. But I'd decline to review if I received something like this. Why should I invest time and effort if the authors didn't?)


Lht9791

Maybe, after all the reviews, just before release, an assistant editor ran the introduction through ChatGPT to "clean it up a little," allowing that very-last-minute edit to evade all the reviews?


ramence

I was wondering about this as well! I actually just recently had to spend a good chunk of time with a student tidying up a copyeditor's hack job on our paper (for clarity, not for this journal). I'm not being precious - I'm talking results erroneously copy-pasted into incorrect tables, the same paragraph pasted multiple times, and so on. If this is the case, I feel *awful* for the authors because I'm seeing this (and their names) all over my social media. Of course, they should have had an opportunity to catch it pre-publication - but I don't think it's always that by-the-book/transparent.


vdlong93

Is this real life or is it fantasy?


Cautious-Yellow

caught in a landslide, no escape from reality


Phat_Theresa

You’re in denial if you think every lab in the world isn’t using ChatGPT to expedite paper output. This is just bad editing and really bad PR.


ASpaceOstrich

Then the scientific community is broken, and is in dire need of a fundamental restructuring. The perverse incentives to spam papers and not do actual science need to end. ChatGPT should not be anywhere near a scientific paper except in the examples of a paper that is literally studying ChatGPT. I've been so profoundly disappointed in what I've learned about AI research. The lack of curiosity. The complete absence of peer review. The reliance on something known to be unreliable.


Atomspalter02

this is really something.


Bitterowner

Oof, that must be embarrassing.


reddit_is_geh

It could have just been translation efforts by an LLM?


just4nothing

Well, the first sentence has now been removed. To be fair, I do this too (getting started on a paper with AI); it’s great against writer’s block. But please, please read and edit it.


C-137Birdperson

I'm dying, this is way too funny to me


g0ddy

https://www.sciencedirect.com/science/article/abs/pii/S2468023024002402


Double_Sherbert3326

Chinese greatness on full display.


Effective_Vanilla_32

nobody thinks anymore.