There are many possible ways for a reviewer to write negative, undermining comments without appearing to be too vicious, and therefore retaining a semblance of professorial dignity. One species of undermining comment is along the lines of
This paper is not significant because [insert major results of research] is already well known.
In some cases it is easy to demonstrate that this is certainly not the case by citing recent papers, by highly regarded people in highly regarded journals, that clearly show that most people do not in fact know this, and that the prevailing view may be quite different from what the new research shows. The new research may be wrong, in which case the argument for rejecting the hypothesis/paper should be based on the science, not on vague statements about what people may or may not already know. And if everyone really does know, then cite the papers that demonstrate this.
The fact that in some cases the everyone-already-knows-that criticism can be so easily demonstrated as false makes it all the more amazing that someone would attempt this, without sufficient justification, when recommending rejection of a paper, and all the more disappointing that an editor wasn't able to detect the misinformation. A colleague recently sent me some reviews and a negative editorial decision involving exactly this situation. He wrote a strong and well-documented letter back to the editor, but I don't think he has heard yet whether the editor is willing to seek additional reviews. I have great, albeit possibly misplaced and delusional, faith in the peer-review system, and think that in some cases these situations may be resolved through calm, reasoned communication and determination.
I encountered something similar with one of my own manuscripts in the past year. A reviewer said that another scientist (by chance, a good friend of his) had studied this research topic already and had made all the significant contributions there were to be made, and, if my manuscript were to be revised, that the reviewer's friend, whose work was cited in my manuscript, should be cited "more prominently". In fact, the old buddy of the reviewer missed some rather essential things in his work from a decade or so ago, and my manuscript cited his work respectfully and appropriately (I have no idea how to cite work "more prominently" -- find a way to work a citation into the title of my paper?).
I showed that review to a colleague, and he thinks that, despite my advancing age, I might still be encountering a phenomenon I used to experience much more often and more severely: being patronized by a particular group of older scientists in my field. They have had no trouble over the years accepting a younger generation of male scientists into their coterie of respected colleagues, but they have, over the years, occasionally swatted me down with the "We already knew that" remark, even in cases where this is demonstrably false.
In this recent case, the reviewer who said that his old grad school chum had already solved the major issues has put a PhD student on the topic on which I was attempting to publish.
It is very frustrating, but these guys are not likely to change. They are not sexist to the core -- in fact, the aforementioned PhD student is a female student who has been having a good experience with her advisor and who is enjoying her research. That's great. In fact, she wrote to me and said that since our interests overlap, she would like to discuss her research with me and might need to ask me for advice. That's great too, and I will help her if she asks. Maybe I'm a doormat, but I have absolutely no interest in perpetuating the insidious I-already-knew-thatism, even though in this case there is a good chance that I actually already did know that.
30 comments:
In my field a "is already well known" is often the reviewer shorthand for:
"I can't believe I never submitted this paper myself! I sorta had the result myself and kinda noticed the idea, even perhaps used it indirectly, but it never occurred to me to put pen to paper and formally test and/or prove it."
What a timely post! I reviewed a paper recently; it was sound and clearly publishable, yet it was rejected. The authors appealed and the editor asked me to arbitrate. It was rejected on the basis of the comments from the other reviewer, who said, "everyone knows that already". I don't know whether there was a hidden agenda in that case.
What is most irritating is that sometimes it might feel that we "know" something but it hasn't actually been demonstrated. Papers that show this stuff need to be published in order to push science forward.
So a (female) graduate student has been put on a topic for which all the significant contributions that can be made have already been made. That's lovely.
Sometimes one wonders if our colleagues are actually scientists. Doesn't this involve actually TESTING the hypotheses? Being told that a result is not "surprising" and thus is not valuable is my favorite complaint.
This really resonated as I am currently writing the response to reviewers to go with a revised manuscript. All three reviewers praised the quality of the data. Reviewer 3 pointed out specific experiments that would strengthen the conclusions--bless them! Reviewer 1 commented that the results did not advance our knowledge in a field different than that covered by the journal in question--perhaps true but irrelevant in my mind. In essence, they were stating that because this was the result predicted by our current model (one that has never been tested in this context!), it was not "surprising". Reviewer 2 said the work was "not original". This was the prize winner. None of the hypotheses described in the paper had ever been tested, even remotely. Once again, the person was referring to whether the results fit our current model, not whether that model had ever been tested.
In this case, the Editor was clearly troubled by the first two reviews, and sent it out for a third review--this was the review that actually suggested specific experiments.
We'll see about the outcome (the suggested experiments were good ones, and we learned additional cool things)
Mark P
This is an interesting topic since I'm currently on the other side of the story. I'm reviewing a manuscript where the authors say, "We've developed fancy new data analysis method X and uncovered a new finding using the method." The fancy new method is interesting enough to avoid flat rejection, but the new finding is literally one of the oldest and frequently repeated observations using this technology.
In your case this might not be what happened, but sometimes "everyone knows that" is a legitimate criticism.
If you can back up the criticism with references, that's fine.
I just had this happen to me, exact wording, exact disinformation. If I had been publishing alone I probably would have fought it, but my coauthors wanted to just move on to the next journal. I was instantly accepted at the lower-tier journal. This makes me wish I had fought harder at the higher-impact one...
If you feel you can make a case, it's worth a try because (1) you might succeed, and (2) it's good to let editors know about these things, even if they refuse to reconsider the initial decision.
Got this type of criticism on a recent grant review: the claim that my technical aim had "already been done". No citation. Dear reviewers: if it has been done, then it should be easy to cite!
The problem is, it's often difficult to actually demonstrate that something has NOT been done or found before. What do you do? Include your Pubmed search? Quote the intro or discussion of a bigwig who says something hasn't been done?
I've often wondered why authors are not anonymous too, the way reviewers are. In my specialty it would be pointless a lot of the time, but there are a lot of cases, particularly for young researchers who are coming up with new approaches to old problems, where anonymity would ensure that rejection was not based on who was doing (or not doing) the research.
First submission: The work is novel and interesting but wrong. Rejected.
Second submission: The work is novel and correct, but not interesting. Rejected.
Third submission: The work is interesting and correct, but no longer new. Rejected.
I've often wondered why authors are not anonymous too, the way reviewers are.
In my field, theoretical computer science, there has been some push for this, and some heavy pushback by (thankfully, only some of) the top names in the field. Witness the answer to your question played out in this discussion thread:
http://weblog.fortnow.com/2009/03/you-can-separate-art-from-artist.html
That discussion is about conference reviewing (computer science conferences are refereed and are as important as journal publications if not more), but the pro/con arguments largely apply to journal reviewing as well (to the extent that they apply to conferences).
"Journals--such as most scientific society journals--where the editors are actual working scientists know enough about a field to not allow this kind of pernicious bullshit to occur."
I emphatically disagree with you that this occurs less frequently in journals where the editors are actual working scientists than in the journals where they are professional editors. In fact, I find that in my field it is more likely to occur in journals where one got the *wrong* working scientists. In FSP's scenario, there's a reviewer who belongs to the old boys club, but these reviewers are just as likely to be editors at a society journal, in which case they have even more power to augment the power of their club.
Professional editors have their own problems, but as non-working scientists, they don't have a turf to protect.
I, like everyone, have been on the receiving end of this criticism, and it is extremely frustrating.
That said, I've been on the giving end, as a referee. I always back it up with citations, but it is often not completely clearcut. There are some papers that are clearly a direct repeat of other work (sometimes citing it, sometimes not), and it is easy to say these are not new.
There is another, more difficult class of papers, where the authors do something very similar to work already done. How do you decide if something is dissimilar enough to be considered new? For instance, I'm often asked to referee computer simulations of a material, where the basic design is similar to another publication (or several others), with some parameters changed. One could easily (and in an automated fashion) generate an infinite set of papers by tweaking parameters, re-running and re-writing with minimal changes.
In general, I don't consider these papers new, unless they give a justification for why the new parameters are interesting, better, etc. I imagine sometimes my critique is in error-- there WAS something new in the paper that wasn't clear to me. This perhaps then is a failing of the author-- they have done something new, but similar to previous work; if the author does not clearly state how their paper extends previous work, but assumes that the reader can deduce this, often the point gets lost. So, my advice to those who are rejected in this way: before getting angry, re-read your work and make sure that you not only did something new, but made clear how it was distinct from previous work. If you did state this clearly, then proceed to frustrated indignation-- but check first.
I review more papers than I write (I'm a slow writer), and I have certainly seen papers that rehash methods that were obsolete years ago. These are often from new grad students and postdocs whose advisers haven't kept up with the field.
I do try to provide one or two citations to show that the results and methods are not novel enough to be worth publishing, so that the students can get some education, even if their adviser is incapable. I can't give them a year's course in basic methods of the field in referee comments, though.
Since I referee for several journals, I have seen the same bad paper appear multiple times with no attempt to correct the mistakes in method and conclusions that have been pointed out in previous reviews. I've stopped refereeing for third-tier author-pays journals, as the editors seem perfectly willing to accept bullshit papers that I would fail an undergrad for submitting.
I wish I had seen this post a month ago, when a paper I submitted to a fancy journal was rejected because my experiment was "a rather obvious thing to do," in the words of the editor. Perhaps, but it wasn't so obvious that anyone else had thought to try it in the four years since the phenomenon I was investigating was discovered. My co-author and I thought "rather obvious" was editor-speak for "I could have thought of this, but somehow I didn't." Sort of like what Anonymous 01:48 points out.
It was my first time submitting to an interdisciplinary journal with a one-word title, and I was so intimidated that I just gave up on that journal. I resubmitted to a well-respected specialist journal in my field, and the paper has been accepted. Now I wish I had appealed the editor's decision. I could have documented in great detail why my discovery was important, original, and required some ingenuity to make. I won't fold so easily next time.
In this recent case, the reviewer who said that his old grad school chum had already solved the major issues has put a PhD student on the topic on which I was attempting to publish.
If you have proof, name and shame. Maggots like that reviewer shrivel up in the sunlight.
Seems like 90% of the comments, and the original post, claim to be victims of misjudgement.
High-profile journals, nearly by definition, try to reject all but the most eye-catching papers. Originality, like strength of evidence, importance of the subject, and clarity of presentation (and other attributes) defies objective measurement.
While no doubt injustice exists, I'm positive from extensive first-hand knowledge that far more authors overestimate the originality and importance of their own work than are victims of bad reviewers and editors.
People should take reviewers comments more seriously, and recognize authors generally lack perspective on their own work. I know, I know, each person knows THEY are the exception, and can back it up with a long story of particulars.
When I'm the reviewer I try to phrase it as: "The authors have not made it clear how their finding is an advance beyond the findings of X, Y, and Z." (I'm in a field where careful lit reviews are expected, so I can usually just point to the authors' own lit review!) I find this leaves the door open for me being wrong without embarrassment, and puts the onus on the authors to make their case.
On another note, I do think there's a difference between saying "This isn't new" and saying "This isn't sufficiently high-impact for this journal."
I totally agree with PP's comment about failed scientists acting as editors in some journals! If the "gatekeepers" are not working scientists, how can they judge the worth of a paper (i.e. whether to send it to reviewers or not)?
physioprof said: The business model where journal editors--the gatekeepers of the scientific literature--are failed scientists has to end.
Do you have any evidence at all that this "business model" exists anywhere but in your own mind? I have seen more than a few jobs ads for editorial positions at NPG journals over the years. Not once have I seen the phrase "failed scientist" anywhere in the list of qualifications.
Maybe you think they are failed scientists. But maybe they think you are a failed editor.
Your comment seems to perpetuate the much-maligned, unsustainable exponential-growth model of producing Ph.D.s only so they can become PIs, too often embraced in academia.
Posts like this are the reason that your blog is awesome. Just saying.
Andrew and other commenters point out that judging originality is an important part of a reviewer's job. Nor is it a simple and straightforward business. For example, different fields have very different levels of tolerance for similarity of work, i.e. some fields are more specialized than others. There are lots of other complicating factors here.
But, it isn't hard to tell the BS "originality" critique from the real one. For one thing, a real originality concern is backed up by references, discussion or other indications that the referee has reflected on the issue. Often, the referee demurs and points to the quality of the work as a deciding factor. But the BS critique almost always comes in as a brief, sweeping statement. This one sentence critique is offered up as reason enough to reject a piece of work. The reviewer is essentially saying, "take my word for it." And in a number of cases, this crap actually works. The editor does take their word for it.
What CurtF said. PhysioProf, you really should get some help with that PI-centric superiority complex of yours. That kind of bullshit is one of the major reasons why academia is such a shitty place to work. That someone isn't a "working scientist", whatever the fuck that even means, doesn't make them a failed anydamnthing.
By the way, I just lost my job because the grant ran out and I didn't have enough data for a decent resubmission. So I guess I'm not an actual working scientist, and I'm considering (among other options) becoming an editor, in large part because research is full of type-A jackasses who think that anyone who doesn't want to be exactly like them is a waste of protoplasm.
Does that make me some kind of loser in your eyes? If so, fuck you very much.
i know a professor at my school who sits around and likes to say everything is a solved problem (except what he and his students do). if he can possibly conceive of the problem in his mind (it must be a glorious place) then it is solved.
he is however a jackass without much funding and who will not have any more grad students in the foreseeable future (i've been badmouthing him left and right, for this and many other reasons).
Wow, really interesting discussion, especially since the main criticism I've gotten is that my work is "too novel".
In fact, I actually had a senior PI tell me the other day that my work is "just too far ahead of the curve".
What do you do with that kind of feedback? Nod and smile?
In my field we don't see a lot of "we already knew that" papers, because there is so much we don't know. In fact, it makes me sad to think that either
a) nobody knows the literature or pays attention to the difference between
"model accepted as if proven"
and
"actual result demonstrated by X experiment in Y paper by Z authors,"
or
b) too many people aren't questioning assumptions at all, and are only doing incremental work.
On the other hand, I've noticed that some otherwise talented people are bad at distinguishing- let alone emphasizing- what is novel about their work. They're afraid to lay it out as simply as:
"Joe Blow did X.1 in 19xx and got Y.
We did X.2 in 2009 and got Z."
It's the whole lack of scientific discourse thing, I think. Sometimes an incremental adjustment in approach leads to a HUGE breakthrough in results. But everyone has gotten clobbered too many times, so now no one wants to stick their neck out.
And I tend to agree with CPP on this one- I've run into a few too many former scientists as editors.
They are SO out of touch with the internal politics of a field, it's hard to imagine how they could possibly choose reviewers who don't have a conflict of interest, or to arbitrate arguments that require a deep knowledge of the literature (and in some cases, the unpublished results) of a field...
I've gotten reviews that were #$%^!?, but the editor was oblivious to the hallmarks of Conflict of Interest. Is it because they don't understand the nuances of the field well enough? What can we do but go to other journals?
Bill,
That's a rough position to be in, and I wish you the best of luck in your future career endeavors, whatever they may be.
I should point out that there may be many good arguments why scientists who are currently practicing and engaged researchers may make better editors. I suspect physioprof has the intellectual ability to make them; we will have to wait and see, I guess.
To briefly take the thread in a different direction, how do you tell if someone is a "working scientist"?
Two quick examples from a field that is not my own, but that I read a lot in as a hobby:
1. Gert Korthof says he "worked [...] on software development for test procedures of commercial pharmaceuticals" and is "now retired and studying evolution literature fulltime." But his web site is an amazing comprehensive resource for anyone interested in the current big-picture state of evolutionary science.
2. Günter Wächtershäuser. This guy is a *patent lawyer* who has published more than a few papers on the origin of life. Sure, now that he is indisputably successful he has a few adjunct and honorary professorships. But back in 1991, was he a "working scientist" or a patent lawyer?
Margaret L. has a great response for the problem of potential lack of novelty - clear and constructive. As reviewers, we should always support major criticisms with explicit facts and citations. I have great respect for many editors I've interacted with, but do agree in general that they ought to sack up and ask jackass reviewers to support comments like "we already know this" with citations. Similarly, reviewers should be expected to provide a rationale for additional experiments proposed other than "the authors should do this". EMBO J is adopting an interesting new policy for publishing the (anonymous) reviews online for any paper they accept - way to go, EMBO! I'm thinking people will be more careful about what they write when everyone eventually gets to see their review, even if "anonymous".
Ok, I wrote back to the editor. I was inspired. Email pasted below, edited for pseudonymity.
Dear Prof. Editor,
I apologize for not writing sooner, but I was considering whether to respond to this rejection. I have decided that, while I am not resubmitting this paper for your consideration as it was accepted within two weeks at another publication (without revision), I should voice one concern about my review. Both of the reviews suggested that my work was not original, and I think it is important to note that neither review cited a single article to support this claim. The reality is that papers of this kind that examine [stuff] are incredibly rare -- there have been fewer than ten that I have found in the last twenty-five years. Absent from the literature is a single comparative paper on [stuff]. So I wonder what "not original" is really shorthand for.
I respect your decision, but I do think it is important that in the review process submissions not be waved off because the results are preliminary, particularly if the evidence is of a kind that is [stuff]. I am guessing that this is the reason for the rejection based on the rest of the reviews, but again, as the focus seemed to be on this uncited lack of originality (and importance, which is also difficult to quantify in any productive, constructive way for the manuscript author), I am not sure.
I enjoy your journal and do hope I get the chance to submit another manuscript for publication in the future. I have been a reviewer for your journal for a few years.
All the best,
Prof. Kate
Nice letter! You may get back a generic "unfortunately we cannot publish every paper and must turn down many of high quality etc." response, but at least you have called to the editor's attention the issue of this type of insidious unsubstantiated review comment.