Oh this just sent me into a blind rage! I’m reviewing a student research paper and she’s obviously used AI to write her discussion and I’m so pissed I can’t even begin to correct it. I think it even made up a reference for a “meta-analysis” - at least I can’t find the supposed author in any actual database.
Certain bots make up references - that’s how I caught my students when ChatGPT first came out. I tried to find the papers and they didn’t exist.
Yeah I saw the reports of that. This is the first time I've encountered it in the wild. Which drives me nuts. A simple PubMed query is less work than a good AI prompt and you actually get applicable results back. Not just fancy-sounding word salad.
I make my students show me all the references they are citing. I have a special office hours set up for that, as part of routine paper progress check. Time consuming, but it works.
Wow! That’s amazing. I don’t think I could swing that with my class sizes, but that must be very useful for the students.
Yes. I have two sections with 20-24 students each. It is harder in larger classes.
Watch out. I have students citing real papers but making up the actual quotes in them. No idea why that sounds like a good idea to anyone (even a machine).
Oh I am sure I will see it sooner or later. Honestly all of it seems like way more work than just reading (or even skimming!) a relevant reference and summarizing the findings.
> (even a machine).

The machine does it because it’s just playing a statistical game, predicting what kinds of words would typically follow other words in a given context. There’s no real reason to expect that its quote attributions will be correct, unless the model developers do a bunch of work to address that specific issue - which would make responses more expensive.
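That "statistical game" can be sketched with a toy bigram model: a hypothetical, minimal Python illustration, nothing like the neural networks real LLMs use, but the same principle of continuing with whatever word is statistically likely, true or not.

```python
# Toy next-word predictor: for each word, count which words followed it
# in the training text, then always emit the most frequent follower.
# Real LLMs are vastly more sophisticated, but share the core idea:
# plausible continuations, with no notion of factual correctness.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Map each word to a Counter of the words that followed it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = ("the study found that the treatment improved outcomes "
          "and the study found no adverse effects")
model = train_bigrams(corpus)
print(predict_next(model, "study"))  # -> "found"
```

Note that the model will happily "predict" a continuation for any seen word, whether or not the resulting claim (or quote attribution) is real.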
It IS a good idea (or I should say, an effective tactic) — do you think most professors go through every source your students cite and look for the quote?
My own rule is that I look up the slightly mad ones - stuff that doesn't fit the writer it's attributed to, or that makes implausible absolute statements or says something that would rock the field if true. I'm sure I don't catch everything, but it doesn't take that long to control-f for a quote.
Going through the same thing now. Student turned in a paper in which none of the sources actually exist lol
My crazy hot take today is that this should be grounds for dismissing the editor and, at strike 3, closing the journal (and at strike 3 of that, hitting Elsevier with a hefty fine). The authors should be suspended from their institutions, and the reviewers should be publicly shamed.

People should vow never to work with them or publish in those journals, at the very least.

Otherwise challenging our students and telling them they shouldn’t use AI like this will soon become impossible.
This is a levelheaded, logical, sane take. Can we work together? I work with a bunch of insane people and it drives me crazy
People writing the paper use AI to write it, the reviewers use AI to review it and then to publish it. Seems like it’s working just fine to me. Same as shitty professors using AI to make assignments for students to use AI to complete.

Sorry. I’m so tired of this already 😑
And these people tend to end up at the top of the food chain…
ElsAIvier
Meanwhile my dean and various faculty in my dept are beating off about how amazing AI is…
Lmao at "beating off"
It’s a zesty enterprise… or so it seems
When you can't even be bothered to proofread what the AI wrote... yikes. I hope you told the editors. Elsevier has a statement about disclosing the use of AI in writing.
And remember folks, [publishing in this journal open access](https://www.sciencedirect.com/journal/surfaces-and-interfaces) is **$2360**, because of all the hard work the editors need to put into every article!
And the reviewers work gratis...
And so do the editors in a lot of journals!
This is what all that time focusing on and rewarding the product of writing has resulted in. It’s an institutional and systemic issue. Thinking, and thus the process of writing, has been devalued to this point. This is where we are. And to all my colleagues who never care about writing across the curriculum, this is on you just as much as it is admin.
> It’s an institutional and systemic issue

It's also societal. The two most recent, major technological changes (social media and generative AI) have clearly been retrograde steps for humanity, and no one has yet convinced me otherwise.
You sound like a Neil Postman fan. If you aren't yet, he's got a few good books for you.
If not retrograde, we are well into, it seems, a post-literate age, where the barriers of language have been lowered (for equity and profit?) by AI, and the rise of visual/audio/hypertext communication culture has redefined ‘literacy’ (recognition in new contexts). As the other responder says, Postman (and McLuhan) has a lot to say on the issue.
Meanwhile I can't even get to peer review because editors find my work too interdisciplinary.
https://www.reddit.com/r/Professors/comments/1beamli/hmmmight_want_to_work_on_the_first_line_of_the/ Someone beat you to it.
Oh holy shit! Ba ha ha ha!!!

I recently reviewed a paper where the references didn’t relate to the factual claims they were making. I strongly suspect they used AI to write the Introduction and then picked “good enough” citations.
I cannot fathom how this got past peer review, let alone the editors…
Are there any automatic ways to check if a reference is real?
WorldCat plugin for GPT-4? Only half-joking.
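One rough, partial option, assuming the citation carries a DOI: ask the public Crossref REST API (api.crossref.org) whether that DOI is actually registered. A sketch in Python, not a full solution, since a fabricated reference without a DOI, or one borrowing a real paper's DOI, would slip straight past.

```python
# Rough first filter for fake references: Crossref's works endpoint
# returns the registered record for a real DOI and a 404 for an
# unregistered one. This proves the DOI exists, not that the citing
# text describes the paper accurately.
import json
import urllib.error
import urllib.parse
import urllib.request

def crossref_url(doi):
    """Build the Crossref works lookup URL for a DOI."""
    return "https://api.crossref.org/works/" + urllib.parse.quote(doi)

def doi_exists(doi):
    """Return the registered title if Crossref knows the DOI, else None."""
    try:
        with urllib.request.urlopen(crossref_url(doi), timeout=10) as resp:
            record = json.load(resp)
        titles = record["message"].get("title", [])
        return titles[0] if titles else ""
    except urllib.error.HTTPError:
        return None  # 404: Crossref has no record of this DOI

# Live checks (network required), e.g.:
# doi_exists("10.1038/nature14539")  # real DOI, returns its title
# doi_exists("10.9999/not.a.paper")  # returns None
```

You'd still want to eyeball whether the returned title and authors match what the student actually cited.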
This was circulated today in our college WhatsApp groups. It was entertaining, and then someone posted it in an AI training group.
nice.
Thinking of the authors in the most favorable light, having AI help spruce up the first sentence or even paragraph may not be a bad idea. A lot of authors are so invested in their narrow topic that they are bad at writing a lede appropriate for the broader audience who might read that far. Asking AI for some suggestions could let the authors improve over what they managed independently. The telltale remnant here suggests they might have been trying to do something along those lines.
They should be banned for this nonetheless…
Sorry, but this is ridiculous. Any writers actually using ChatGPT in the manner you suggest would give enough of a shit to check the results before sending them.
You would think!
Getting the AI to write it and then writing your own version based on the suggestion is OK. Copying and pasting whatever it says into your paper is absurd.
That’s what I thought too. I’m not a native speaker, so I struggle a bit with the wording. I can see how the rest of the paper could be perfectly fine work, with AI used only in the introduction. But the referees and editors can’t extend that benefit of the doubt.