Based on this from the about page: "Submissions should be real sentences from reviews you received." they're intended to be real. But I think it's highly unlikely for some entries.
Anonymity in the review process has a lot of the same effects as anonymity on a site like this. It does tend to bring out the nastier side of people. These reviews are at the extreme end of the spectrum but many of them look entirely believable to me.
Yes, http://shitmyreviewerssay.tumblr.com/post/138673489984/no-ne... provides a source. It's non-public-access, like entirely too much of science, but I can confirm that my alma mater's subscription gives me access to an article that has the quote in question, and starts like this:
Our referees, the Editorial Board Members and ad hoc reviewers, are busy, serious individuals who give selflessly of their precious time to improve manuscripts submitted to Environmental Microbiology. But, once in a while, their humour (or admiration) gets the better of them. Here are some quotes from reviews made over the past year, just in time for the Season of Goodwill and Merriment.
• And here we go with the first 2009 comment: happy new environmental year dear editor! Regarding the manuscript: it is OK, I hope the flu is not infecting my review brain.
• WOW! You did ‘read it with interest’ in SEVEN MINUTES??!! [Ed.: this is an author contribution in response to an editorial decision (rejection) made within 7 min of submission]
[...]
• The writing style is flowery and has an air of Oscar Wilde about it.
"I found the use of the evolutionary theory problematic. This is a highly contested theory and the authors did not strongly justify their decision to use it, nor attend to some of the major flaws of the theory."
Oh FFS
But nothing outside what's "common" in academia (and I mean the other reviews as well)
When you read a review like that, you can only imagine the journal editor having an almost unconscious facepalm reaction before removing the name from the pool of potential reviewers.
I don't get why this is a joke. My guess is the paper probably reported the mean as having some significance in the context of how participants scored, and the professor is pointing out that this is misleading. In other words, they should have reported the median rather than the mean.
Yes, arithmetic means are inherently misleading and everyone knows that. They summarize a quantity of information and conceal its important attributes like variance and skew.
The 7.7 figure was probably not being used to deliberately mislead.
The complainer hadn't in fact been misled; they had access to other data from the paper on which to base the complaint.
Basically it boils down to "you shouldn't have summarized such and such data using an arithmetic mean because I don't like means; whenever someone uses a mean, they are trying to deceive".
If "close to half" scored below the mean, then the mean and median are probably very close together.
I think the charitable interpretation would be that some conclusion in the paper hinges on the value of the mean, but that conclusion is not valid when you consider that values are more widely distributed, and the reviewer just stated it poorly.
This would be expected in a normal distribution, but there are some skewed distributions where you would not expect that to be the case. For example, if you're doing an analysis of the following data:
1, 2, 3, 4, ..., 98, 1,000,000, 1,000,000, then the mean would be 2,004,851 / 100 = 20,048.51. In that case, 98% of your sample points fall below the mean.
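A quick sketch (purely illustrative, using Python's standard library) makes the gap between mean and median in that skewed sample concrete:

```python
from statistics import mean, median

# A heavily right-skewed sample: 1..98 plus two extreme outliers.
data = list(range(1, 99)) + [1_000_000, 1_000_000]

m = mean(data)      # 20048.51
med = median(data)  # 50.5 -- far below the mean
below_mean = sum(1 for x in data if x < m) / len(data)

print(m, med, below_mean)  # 98% of the points fall below the mean
```

With two outliers the mean lands three orders of magnitude above the median, which is why "half the values are below the mean" is only a safe expectation for roughly symmetric distributions.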
Right, but apparently this distribution is not particularly skewed, since close to half of the values are below the mean. And since distributions where the mean and median line up are common and expected, it's really weird to say that the mean is misleading because of that. I suspect/hope the reviewer meant to say that the use of the mean is misleading because the lower values somehow don't support whatever conclusion was reached.
That's most likely, and I agree that it's likely in the case mentioned that the data is not skewed. However, the main reason I posted is because the post I was responding to appeared to make the assumption that nearly half of the data points will always be below the mean, and I was pointing out that there are some distributions where you should not expect that to be the case because they are skewed.
I made no such assumption. Why would you even assume that I expect this to always be the case? As already pointed out, it's pretty easy to make an inference about this specific example since we're told that close to half of the values are below the mean.
The spirit of charity led me to notice that if the reviewer is referring to some very specific element that might be clear in the context of the paper, it could make sense. If, for instance, "the evolutionary theory" in question is "group selection" [1], the statement actually makes perfect sense. Stops being funny, of course.
A number of the reviews might actually make a lot more sense in context. But that still leaves a number of them for which no context could make them make sense.
For those who like to read these, be sure to read "We are sorry to inform you" by Simone Santini [0]. It's quite hilarious, and illustrates the most common tropes of bad reviewers.
"It reads like papers often do when they are written in LaTeX. Reject."
Would be hilarious if it weren't so true.
I had a student once who was 'encouraged to resubmit after having the manuscript checked for language by a native speaker'. Except that he was British and just happened to be working at a Spanish university for a local project. LOL. He's a very clear writer too, certainly top 10% in clarity of scientific writing. It just shows that it's a crapshoot to get your papers accepted.
I'd guess that isn't uncommon if the critic isn't aware of the author's nationality. I have had the same issue, where having a vocabulary that exceeds that of the average English-as-a-second-language student makes your writing appear difficult to another non-native speaker.
University lecturers seem to have little patience for students who are better at something than they are, even if they could easily turn it on its head and say, "but I'm better at my native Spanish!"
Interestingly enough, native speakers of Romance languages such as Spanish, French, and Italian tend to have far fewer issues with the sort of educated vocabulary an average native English speaker would find hard to read and pompous.
A friend of mine (from the US, native speaker) spent a year in Japan as a schoolteacher under the JET program. We kept in touch via message-boards and such. His written English notably deteriorated over the course of the year. It's possible your friend picked up some non-native language habits during the course of his stay.
Most people don't use an editor with spell-check or grammar checking when writing a document in LaTeX. So it's much, much easier for oddities to slip in unless you proofread carefully.
I'm not a native speaker, but I have some US colleagues who write really poorly. I'm not saying this is the rule, nor that your friend was a bad writer, but the argument that a native speaker is de facto superior in the use of her own language is bogus. One of the very best pieces of English literature was written by a Russian (Lolita, by Nabokov), and there are many other examples. If a paper is written poorly I don't care if you're British: I'll point it out anyway.
> Enclosed is our latest version of Ms #85-02-22-RRRRR, that is, the re-re-re-revised revision of our paper. Choke on it. We have again rewritten the entire manuscript from start to finish. We even changed the goddamn running head! Hopefully we have suffered enough by now to satisfy even you and your bloodthirsty reviewers
...
> We hope that you will be pleased with this revision and will finally recognize how urgently deserving of publication this work is. If not, then you are an unscrupulous, depraved monster with no shred of human decency. You ought to be in a cage. May whatever heritage you come from be the butt of the next round of ethnic jokes. If you do accept it, however, we wish to thank you for your patience and wisdom throughout this process and to express our appreciation of your scholarly insights. To repay you, we would be happy to review some manuscripts for you; please send us the next manuscript that any of these reviewers submits to your journal.
The saddest part is that this kind of 'feedback' isn't out of the ordinary, but is actually pretty common. I've found in my own reviewer feedback a striking tendency towards pedantry that would put Internet grammar nazis to shame, not to mention completely asinine comments that give no clue as to any objections the reviewer had ("You aimed for the bare minimum, and missed!").
This is, to me, just one example of the perverse incentives in academia. You are constantly pushed to publish, yet the gatekeepers who ultimately decide whether your research sees the light of day rarely give it more than a cursory glance, and many have no interest in giving helpful feedback to actually, you know, make the work better.
Bikeshedding isn't just for programmers. We just have a jargon term for it.
(And remember, "bikeshedding" isn't just "arguing forever": https://en.wiktionary.org/wiki/bikeshedding It's about arguing about the things that are easy to understand, to the exclusion of the probably-more-important, but more difficult, issues.)
There is a lot of terminology flung around here that is insufficiently explained. What exactly is meant, for example, by “false negative” and “false positive” rates? “median, first quartile, and third quartile”?
I really hope that the reviewer was referring to the context that the terms were being used in. There are some times when exactly what events constitute a false negative or false positive may be somewhat unclear.
Is this more common in the social sciences? I've never had a reviewer quote nearly as bad as these (computational biology).
There are plenty of bad / lazy reviewers, but I would think the editor should be filtering stuff like this if they want to keep the journal reputable. Or maybe they'd written these articles off at the outset?
I have reviewed more CS papers than I want to remember, and 80% of them are abysmal in quality. Most of them are papers that undergrads wrote for a class, that they then hope to publish on the off chance that it will be accepted. "Off chance" is the important part of that phrase.
Obviously some of the reviewer comments are out of line or show incompetence, but others are simply reflecting the effort that the authors put into the paper. It really is that bad at times!
Yeah, it is rather unfair to point your finger against a reviewer's comment without having a look at the paper too. I've come across papers for which some of these comments would be even too kind.
This blog is funnier if you're able to mentally classify the posts in four bins:
- Clearly clueless reviewer (e.g. "The reported mean of 7.7 is misleading because it appears that close to half of your participants are scoring below that mean")
- Reviewer just trying to keep the tone light in a funny way (e.g. "Unless the authors performed some clever pagan ritual before euthanizing the animals I would use ‘killed’ (or ‘euthanized’) instead of ‘sacrificed’").
- Merciless and sarcastic destruction of what was very likely just garbage anyway (e.g. "This looks like a very early draft").
- Just a butthurt author's reaction (e.g. "I have read this paper several times through, and I have nothing to say in its defense.").
I somehow had assumed this was about Apple App store reviewers. At least ours always had hilarious comments and requirements, so maybe it's not too late to hope for some place to share these stories :)
I've never seen reviews this ugly in my engineering field, nor written one. Although I've certainly felt tempted to tell people that their work "fills a valuable hole in the literature," I try to keep it constructive. For all that, I've certainly seen reviews that were needlessly pedantic or refused to accept some entirely reasonable premises. Not to mention the timeless classic, "Your review of the literature has overlooked Jones and Smith, 2008, Smith, 2010, and Jones, Smith, Ramakrishnan and Zhang, 2014. Please incorporate citations to these relevant studies in your revision." Gee. Thanks, Professor Smith.
Maybe I shouldn't admit it but I read quite a few of them thinking these were reviews of one of his papers. I couldn't believe he was showing these. I really wanted to read that paper. The thing that threw me off was the "My Reviewers Said".