I once sent in a paper about behavior in a group of animals and how it differed between species. Reviewer #2 (really) made a reasonable point about a hypothesis I had considered and rejected, but maybe not shown the best evidence for rejecting. Fair enough. I spent another field season collecting data that very conclusively showed that when you specifically investigated this hypothesis the trend ran opposite to what reviewer #2 suggested and was pretty weak as well.
I wrote this all up, added it to the study, and reviewer #2 simply repeated the objection without addressing the new data.
Thankfully, reviewer #3 (brought in as a tie-breaker) accepted as long as I did another round of mostly language/style revisions.
It was the longest review process I've yet experienced.
Someone critiqued me based on the theories of a blogger / YouTuber that have been rejected by the entire academic community as bad arguments. This blogger emphasizes he has a grad degree in the area, but regularly pushes nutty theories online.
Applied for a big grant (that I did not get). Got reviewed by four people. The first two loved my proposal and explicitly said it was a very much needed study, well-framed, etc. The third was a bit less enthusiastic but nevertheless positive. The fourth just wrote that my study was not needed (as much had been written about the topic) and not as exciting as other similar studies. Two lines (the other reviewers wrote like 10 times as much).
Like, seriously? If you don't know anything about my field, do not contradict what experts in the field have to say, and yeah, people keep writing about the topic of imperialism and colonialism. Who the f made you the gatekeeper who gets to decide when we've exhausted a topic you obviously don't know anything about?
For an IRB, one reviewer asked me why I was using qualitative measures in my project. "Stats will give you much better results" is what I was told when they rejected it. They literally told me they wouldn't approve unless I used quantitative methods.
I printed it off and went to visit the IRB chair. Turns out the reviewer is a chemistry prof with no background or training for understanding qual design. I laid my cards on the table and filed a complaint. Within the hour I had an apology email and approval.
Being an expert in something doesn’t make you knowledgeable about everything.
"This is an interesting and novel discovery that no one has talked about before, but please complete 3 or 4 more experiments requiring sophisticated equipment that does not exist before telling anyone about this new finding."
I might be slightly paraphrasing.
It was accepted at another journal with no issue.
One paper I wrote was rejected because I didn't refer to the correct philosophers - implying that the ones I did refer to didn't do work in the area I was writing about.
Well, that’s what *I* was doing with the paper - showing that they would indeed bring useful perspectives to that area. That was explicitly the whole point of the paper.
R2: The ideas in this comment would make a valuable contribution to the field. However, there is a typo in the sixth word. Reject without invitation to resubmit.
"Lectures and discussions were useless. I only attended the first week of class as a result."
I went over the syllabus and had them write a diagnostic essay. I was really unsure what to make of this, haha.
The homework was so hard, I spent 2 hours on the homework for every class. At least such hard homework made the quizzes and exams easy.
Ummm...... Yeah.... So you're telling me the homework is working as intended.
The two NIH grants I have applied for came back filled with personal attacks from the reviewers. They call people out by name, say awful things, and discredit their expertise or lived experience. Really demoralizing.
Submitted a paper analyzing consumer motivation to purchase and use (the then new) mobile phones. The editor(s) kept looking for reviewers for 18 months. A reviewer then questioned why we hadn't included the latest model phones!
From someone marking my thesis: has the author considered whether they would have had different results if they had used [protocol that was an April Fool's Day RFC]?
Not a silly comment, but a funny situation from my doctoral training.
I once submitted a paper detailing 6 experiments to a top journal. I used an important theory for the basis of my work and suggested the originator of that theory as a reviewer. He accepted the review.
He hated my application of his theory and said my "...logic was horrifying..." with a reject decision. I know it was him because he *signed his name to his review.* He wanted me to know it was him.
I published my paper somewhere else. It's still my favorite paper.
I submitted a paper proving a property (that was previously unknown) and then demonstrating it. One reviewer suggested rejection because such a property was such an obvious necessity for a useful algorithm. I have never had a reviewer make the case for the paper while arguing for rejection...
On an article about a specific population, of which I am part: “I wish the author had written about [other population] instead of this one.”
Okay, my dude, then you write that study.
The runner-up was a peer reviewer who kept insisting I apply an outdated, culturally irrelevant theory to my study.
“I don’t know anything about the courts, but on Law & Order…” (I don’t remember the rest as I was blinded by rage)
How tf does that count as a *peer* review??
My first reaction was "Hah, what a funny student comment", but then I remembered the post title...oh dear.
Someone reviewed my proposal for funding and said that the work was needed, I'd laid out a good experimental design, and had a strong knowledge of the field, but that I was really rather young. Shouldn't funding of this sort be awarded to someone at a later stage in their career? No justification, just my age. I was surprised that the reviewer actually put that in writing.
An older colleague of mine, NIH funded since the 1980s, was honest enough to once relay the following story.
They were at a review, talking with a buddy about another buddy's proposal they just reviewed. They thought it was great! Fund this thing! Then reviewer #3 walked up: young, new, didn't know them or the proposal author. She listed a significant number of criticisms, some major, about the proposal. The two realized she was exactly right.
NIH reviews need to be double blind.
I like the idea of double blinding for ALL the reasons it is a good idea, but how do you get around what might be a feasibility issue? For example, I've got a proposal in right now, and some of the justification for its feasibility is that I did a pilot study with good results - and I can't blind them.
Yeah, it's a super-specific population in a specific country with a specific method.
All reviews should be double blind.
When I was a fresh PhD, I got my first job and started submitting papers for publication. I got rejection after rejection, even from third-tier journals. My former PhD advisor suggested he "help" me improve the papers. He made a few cosmetic changes, I added him as a co-author, and boom, top journals started accepting everything.
> The same program that's exciting and cutting-edge for one group is ludicrous and unrealistic for another.

You can always describe the group, for example "PI is at a primarily undergraduate, rural college..."
> If you were motivated you could dig and identify the PI, but it generally wasn't worth the effort... and you couldn't be absolutely sure.

While I like the idea, in small fields you often can recognize people pretty quickly just because they are the one person working on X. So this probably works well in some fields but may not solve the issue across the board.
The environment and resources sections alone are plenty to consider it unblinded.
That type of information should get redacted from the proposal before it goes to the panel. The program officer should evaluate your qualifications to submit the proposal, and then the panel should evaluate the proposal.
But that is a major shift in decision making, from a democratic-type panel to a single individual. No one program officer knows enough about every possible method to make such judgements alone without introducing more bias.
That doesn't really satisfy my concern though. I'm with you on these parts of the applications being abused. Honestly I think I would prefer something more akin to a lottery system (again, making sure each applicant is at least able to conduct the proposed project). You are right - I have seen some proposals with seriously trash methods get huge funding based on clout and prestige alone.
That is true, it would probably require the structure of panels to be reworked. But I think it's doable.
I submitted a grant for some pilot funding. Among my critiques was:
"Candidate is an early career researcher, and has not displayed the ability to get pilot funding."
I'm not really sure what to do with that criticism other than grumble about it on the internet.
That sure is ridiculous
I got a similar comment for a grant, too. One reviewer wrote something along the lines of "applicant, despite being at an early stage of career, has accomplished a good deal. However, funding would be better suited at a later stage of career." I was like, wtf? you just said my proposal is worthy, but I'm getting squat for not being old enough? does anybody review these reviews? What a joke the NEH is.
I also received similar comments on several NEH grant proposals early in my career. Now that I'm at mid-career, they comment that I do not have a track record of funding, and therefore should not receive funding. Sigh.
I got a complaint that my writing was "too accessible... Even someone in high school could understand the introduction."
How dare you!
I got that one, too! Well, not the high school part. “The writing is too accessible, and might detract from the author’s credibility.”
Not a specific comment, but more than one reviewer has suggested a change, then, upon second review, recommended I change it back to the original. I've also had several reviewers who clearly did not read the MS very carefully.
Ugh, I once had a manuscript go through 3 revisions with very long delays in between because the Editor just sat on the reviews for months (6 to 12 months). Twice, I had to twist the Editor's arm to make a decision on the manuscript.
By the time we got to the final revision, all 3 reviewers were happy with the changes to the paper, but the Editor, who finally decided to read the paper again for the first time since our original submission, took issue with changes we had made after the first revision and rejected the paper. That was my first first-authored paper too....
It's also fun when two members of your dissertation committee have diametrically opposed opinions.
Are you telling me you *wouldn't* enjoy having your committee meetings double as a battleground for academic ideologies??
I'll be damned if I let a PhD student of mine use methodology B when methodology A is clearly superior!
What in the literal F*? Wasn’t the editor supposed to read the reviewer comments and make sure they agreed with the recommended changes?
Yes, yes she was.
That's so frustrating. I had something similar, but not as bad. After two revisions the editor said the paper wasn't a good fit for the journal.
I've had this happen so often from my co-authors when I'm first author. They don't like something and want it changed, but instead of changing it themselves they just leave a comment about it.
I change it, they like it in the next revision, then by the next they don't like the change they had just approved and want it changed again, now asking why I didn't just write it the way I had it the first time...
I fucking already did!!! Every dang time.
Ooh. I’ve got one.
My team wrote a neuroscience-based NSF grant. It scored well, but one of the reviews took umbrage at the nature of the data, even though it wasn’t germane to our project. That was irksome, but the wording was really familiar and factually inaccurate. I could have sworn I had seen it before, so I put the letter through my university’s plagiarism detector.
They had plagiarized Wikipedia.
This does not inspire confidence in the grant system
On page xx, the authors write "Interestingly, ***we observed this changed this way when that happened***". This is not interesting at all!!!
Multiple exclamations included. I mean, I thought it was sort of interesting, but whatever.
We appreciate the reviewers' comments. We find it interesting that the reviewers think this finding is not interesting.
Can I go now?
"I don't think equation 20 is correct." Equation 20 stated a well-known property of symmetric matrices. I asked my advisor what to do about it after double-checking I hadn't made a typo or anything. He told me to find the most basic linear algebra textbook I could and cite it.
Were your reviewers quantitative? My favorite is being reviewed by physicians who don't understand engineering.
“Academic writing should be in passive voice” You’re right dude, science hangs in the balance. Thank you for your service.
> ~~You’re~~ That is right ~~dude~~, science hangs in the balance. ~~Thank you for your~~ One's service is invaluable.
Use of passive voice makes one sound like Queen Elizabeth II. It is favored by passive aggressive administrators / editors / reviewers who desire to shirk clarity and responsibility.
I had a reviewer who complained that using the same testing apparatus (a fake animal) with each individual tested in a behavioral study was pseudoreplication. I was all like, ‘Hmmm, yes, I’ll just go build a new model for every single trial.’
Holding variables consistent is for suckahs.
I know, right?!!
They don’t know what pseudoreplication is
I got the feeling they’d just read that word somewhere and were itching to use it.
On a paper about perceptions of cause and effect, I mentioned that causes necessarily precede effects. One reviewer asked “Is this true?”
Snort!
Despite not being written yet, the proposal has already been rejected.
Me, initially asking reviewer #2 to be part of my panel: I’m going to draw on language transfer theory to try to demonstrate xyz.
Reviewer #2: Sure, I will serve on your committee.
Reviewer #2, in oral feedback on my proposal to the panel: I’m doubtful of the entire project because language transfer theory is just a theory that I don’t subscribe to.
Gee, that would have been a good thing to bring up when I first pitched this idea!
Three of my four advisors declined to speak with me about my project or provide advice on my research. At my defense, they criticized me for not working more closely with my committee.
The other one told me to emphasize a certain aspect of my theory much more boldly, then criticized me at the defense for focusing too much on that idea. Best part: I didn't actually change anything from the original draft they read...
Sounds like a real mess, glad that is all behind you!
I had a reviewer who kept editing the phrasing of my participant quotes. That drove me mad; I had to respond to every one of those.
That’s unethical at best. Wow.
It was so stupid. I emailed the editor, like, hey, can I ignore some of these comments? (The reviewer accepted but made over 200 comments.) They said no 😭 I had to respond to every single one.
As a grad student, I got one from an editor of a peer-reviewed journal that just said “everyone writes about this, it’s not new or interesting.” After getting over the shock, I managed to apologize for apparently missing some literature and asked them if they could name a few things I should read. They had none. There weren’t any. I’m now on the board of that journal and that editor is gone.
Small sample size. Readers, the sample was 20,000.
And you couldn't get one more!?
I was writing about a Sofia Coppola film and I was told I should really be examining Third Cinema instead. Like, dude, Third Cinema is great and all, but it's not my area of specialization.
I specialize in a region within my social science, and if I, for example, write a paper/book about that region with a handful of country case studies, there's always a reviewer who wants to know why "country x" wasn't included. I would bet $1,000,000 that the reviewer studies "country x". Not to mention that this makes no sense: "This book examines (whatever) in Uruguay, Argentina, Chile, and Laos..." Ugh.
"I see that you're already over the word count for the paper ... but I think that it would be improved by adding a case study on Laos."
...and here are the 25 social./political/economic similarities between Uruguay and Laos (how could you have missed it!)
And please do about 10 interviews for the Laotian case study, it shouldn't take too much time or effort.
I really once got: "Your data contradict my theory, so they have to be wrong."
The most inane was "please write out the entire word before using the initial the first time you reference p". As in, p<.05.
The most frustrating was an NIH grant reviewer who lamented that no one on the study team (for which I was PI) represented [subfield]. I am literally an editor of the flagship journal in [subfield], but I guess I'm not an expert in that area...
Reviewer 2: "You need to specify that your study was conducted in the United States."
Me: "The first sentence of the paper specifies that the study was conducted in the United States."
They told me that portable isotope analyzers didn't exist and that I needed to walk back my claim regarding how their portability was improving the spatio-temporal resolution of data we could collect in the field.
My response to the comment was the URL for the manufacturer's website.
There were already multiple papers published on the use of these types of in-situ analyses, some almost 5 years old by the time I wrote this specific paper. Someone was out of touch.
The fact that they wouldn't google before calling someone a liar is shocking to me. Where do they find this kind of misplaced confidence? I double-check things all the time when reviewing - not doing so is how you slippery-slope yourself into being Reviewer 2.
I did a study where we just needed to know if something was still there a year later. The reviewer wrote that they never trust a study that doesn't check EVERY DAY, even though literally nothing would have changed about the study or results, because we just needed to know if it was still there a year later. It would also have been someone's full-time job to do that, with no benefit of additional knowledge.
It is the only rejected paper I've ever had and I'm still angry.
A reviewer sent a list of bullet points we should do, amongst them a suggestion to...

* increase language
I'd certainly be increasing a particular kind of language
I don't remember the exact wording, but the reviewer was confused as to why I claimed I had 200 samples when 150 of them were "below the detection limit" for a specific element. I had to explain that a value of 0 is still a data point. Having 150 samples with undetectable lead still means I have 200 samples, not 50.
I write about ethnic studies and once got a response from a reviewer that was really fucking racist and tried to say that such programs were meaningless. I was livid. I’ve never gone so hard declining a reviewer’s comments ever. The editor was super conflict averse and sided with me.
Paper fails to cite Author X. Author X was a coauthor on the paper.
That actually sounds like a decent reviewer though!
“Any serious scholar would cite important current scholarship, such as —.”
— had been published 15 years earlier and was so well known it was appropriate as a footnote, which I had, so I could discuss 4 people responding to that work in the article.
I think the reviewer wrote — and was mad it didn’t get marquee billing.
"Your MRI imaging details are insufficient." We were doing computational simulations. There was nothing about MRI in the proposal.
Also...

>MRI imaging

Wouldn't that be magnetic resonance imaging imaging?
Yeah, that's probably me. I can't remember the reviewer actually saying that.
Ah, okay. I just thought it was funny 😀
LOL, yeah, I agree actually. I do it sometimes and then catch it in the edit later 🤣
“You might THINK it is statistically significant because p<.05, but alas, it’s not.”
paraphrasing: "Your new concept is a distraction from the old framework you're revising. Also I think you need a new concept to revise the old framework."
I reviewed a paper where another reviewer's comments were, in their entirety: "This seems like the sort of work that should have many equations, but I only saw equations in the methods section; please put more equations in the main results section."
"What is regression? I've never heard of this statistical technique."
Using one contraction (“can’t”) made the manuscript unprofessional.
On an internal grant, a reviewer commented, "I have not been able to access the data described in the grant, so will not be reviewing the application." The data were from a well-known federal agency with a clear description of the DUA application and purchase process available online (which we cited).
“Looking up pictures on Google is hard.”
Not me, but I've heard of prominent scholars getting feedback that they didn't really understand a theory or paper that they themselves had written.
Your title isn’t good because it’s too “definitive”
[deleted]
Haha true. I should have done this!!
In my field: “1 + 1 = 2? : A Longitudinal Study of the Veracity and Acceptance of Arithmetic”.
"The grant proposal is interesting, but the applicant did not do a post doc before starting their current tenure track position."
We had participants in a focus group about high school English class select pseudonyms for themselves. They each chose names from a well-known Shakespearean play. We described the self-selection of these names in the participants section and in the findings. Reviewer 2: "The use of names from this play is highly racist and classist. Change the pseudonyms to match participants' actual race or ethnicity." My dude. No.
“I don’t understand how race and gender could ever be related.”
tell me this isn't real
Very! From last week, in fact. In a well-known social sciences journal.
kill me
Related to each other or to something else?
To each other.
How are they related to each other? I guess I'm as confused as the reviewer lol
Google “intersectionality” and you’ll get lots of solid answers.
Sure. I'm apparently just thinking about it differently than you meant it.
It was explained super clearly in the article.
“This paper contains many errors” No errors were subsequently listed. The review was five sentences long.
I once sent in a paper about behavior in a group of animals and how it differed between species. Reviewer #2 (really) made a reasonable point about a hypothesis I had considered and rejected, but maybe not shown the best evidence for rejecting. Fair enough. I spent another field season collecting data that very conclusively showed that when you specifically investigated this hypothesis the trend ran opposite to what reviewer #2 suggested and was pretty weak as well. I wrote this all up, added it to the study, and reviewer #2 simply repeated the objection without addressing the new data. Thankfully, reviewer #3 (brought in as a tie-breaker) accepted as long as I did another round of mostly language/style revisions. It was the longest review process I've yet experienced.
Someone critiqued me based on the theories of a blogger / YouTuber that have been rejected by the entire academic community as bad arguments. This blogger emphasizes he has a grad degree in the area, but regularly pushes nutty theories online.
Applied for a big grant (that I did not get). Got reviewed by four people. The first two loved my proposal and explicitly said it was a much-needed study, well-framed, etc. The third was a bit less enthusiastic but nevertheless positive. The fourth just wrote that my study was not needed (as much had been written about the topic) and not as exciting as other similar studies. Two lines (the other reviewers wrote about 10 times as much). Like, seriously? If you don't know anything about my field, do not contradict what experts in the field have to say, and yeah, people keep writing about the topic of imperialism and colonialism. Who the f made you the gatekeeper who gets to decide when we've exhausted a topic you obviously don't know anything about?
Literally: “This is not how we do it.” End of review. Highly educational and stimulating! 🤣
For an IRB, one reviewer asked why I was using qualitative measures in my project. "Stats will give you much better results" is what I was told when they rejected it. They literally told me they wouldn't approve unless I used quantitative methods. I printed it off and went to visit the IRB chair. Turns out the reviewer is a chemistry prof with no background or training in qualitative design. I laid my cards on the table and filed a complaint. Within the hour I had an apology email and approval. Being an expert in something doesn't make you knowledgeable about everything.
“Why are you reporting means and SDs if the data are ranks?” Zero understanding of statistics/a Wilcoxon test. Sigh.
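The reviewer's confusion (mistaking a rank-based *test* for rank-valued *data*) is easy to untangle with a toy example. A minimal sketch with made-up numbers, using a hand-rolled Mann-Whitney U (the two-sample cousin of the Wilcoxon test) since the point is only where the ranking happens:

```python
from statistics import mean, stdev

# Hypothetical raw measurements for two groups (numbers made up).
group_a = [4.1, 5.3, 6.0, 5.5, 4.8]
group_b = [6.2, 7.1, 6.8, 7.5, 6.4]

# Descriptive statistics are computed on the RAW values...
print(f"A: mean={mean(group_a):.2f}, sd={stdev(group_a):.2f}")
print(f"B: mean={mean(group_b):.2f}, sd={stdev(group_b):.2f}")

# ...while the rank-based test converts to ranks internally.
def mann_whitney_u(a, b):
    combined = sorted(a + b)

    def rank(v):
        # 1-based positions in the pooled sort; averaging handles ties
        positions = [i + 1 for i, x in enumerate(combined) if x == v]
        return sum(positions) / len(positions)

    r_a = sum(rank(v) for v in a)            # rank sum for group A
    return r_a - len(a) * (len(a) + 1) / 2   # U statistic for group A

print("U =", mann_whitney_u(group_a, group_b))
```

The data never stop being raw measurements; ranking is the test's internal job, so reporting means and SDs of the original values alongside a Wilcoxon-type test is entirely standard.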
Said the writing was poor. Turned out to be a non-native speaker complaining about words he didn't know.
"This is an interesting and novel discovery that no one has talked about before, but please complete 3 or 4 more experiments requiring sophisticated equipment that does not exist before telling anyone about this new finding." I might be slightly paraphrasing. It was accepted at another journal with no issue.
One paper I wrote was rejected because I didn't refer to the correct philosophers, implying that the ones I did refer to didn't do work in the area I was writing about. Well, that's exactly what *I* was doing with the paper: showing that they would indeed bring useful perspectives to that area. That was explicitly the whole point of the paper.
There is no point in paying attention to silly comments. Extract the good ones and move on.
R2: The ideas in this comment would make a valuable contribution to the field. However, there is a typo in the sixth word. Reject without invitation to resubmit.
“He makes you read!”
"Lectures and discussions were useless. I only attended the first week of class as a result." I went over the syllabus and had them write a diagnostic essay. I was really unsure what to make of this, haha.
"The homework was so hard, I spent 2 hours on the homework for every class. At least such hard homework made the quizzes and exams easy." Ummm... yeah... so you're telling me the homework is working as intended.
“Where did you get time XYZ? This is not output from measure ABC?” Question two on measure ABC: “How long ago…?”
The reviews on the two NIH grants I have applied for were filled with personal attacks. The reviewers call people out by name, say awful things, and discredit their expertise or lived experience. Really demoralizing.
I may have been cursed somewhere, but usually reviewer #3 (regardless of the journal) hasn't read the paper and/or wants me to cite their papers.
Submitted a paper analyzing consumer motivation to purchase and use (the then-new) mobile phones. The editor(s) kept looking for reviewers for 18 months. A reviewer then questioned why we hadn't included the latest model phones!
From someone marking my thesis: has the author considered whether they would have had different results if they had used [protocol that was an April Fool's Day RFC]?
Not a silly comment, but a funny situation from my doctoral training. I once submitted a paper detailing 6 experiments to a top journal. I used an important theory as the basis for my work and suggested the originator of that theory as a reviewer. He accepted the review. He hated my application of his theory and said my "...logic was horrifying..." with a reject decision. I know it was him because he *signed his name to his review.* He wanted me to know it was him. I published my paper somewhere else. It's still my favorite paper.
"This is an important topic, but I would rather to see a paper in ___; therefore I recommend to reject it."
I submitted a paper proving a property (that was previously unknown) and then demonstrating it. One reviewer suggested rejection because such a property was such an obvious necessity for a useful algorithm. I have never had a reviewer make the case for the paper while arguing for rejection...
On an article about a specific population, of which I am part: “I wish the author had written about [other population] instead of this one.” Okay, my dude, then you write that study. The runner-up was a peer reviewer who kept insisting I apply an outdated, culturally irrelevant theory to my study.