There are many different ways that one can be involved in the editorial aspects of scholarly journals, and one's experience with the editorial universe can also vary considerably depending on whether a journal is published by a professional society or by a for-profit company.
Some variations on the wonderful world of editing a journal include:
1. being on an editorial board that essentially serves as a pool of reviewers (i.e., if you agree to be on the editorial board, you agree to review most manuscripts that are sent to you);
2. being on an editorial board that exists only as a list of names on a journal to show... actually, I don't know what it shows. When I agreed to be on the EB for one particular journal, I assumed that I would actually do more than just have my name listed, but so far I haven't done anything. Does my Distinguished Name add luster to this journal, inspiring more people to read it? No way. The only people who would read this journal already have an intense interest in this very specific aspect of Science. Perhaps the in-name-only EB is like having a list of "fans" or "friends" of the journal.
3. being an associate editor for journals that have an extra editorial layer between the reviewers and the editor(s). I have been an AE at various times in the past. This type of editorial organization can be a bit inefficient, but it makes sense for some journals with a somewhat broad scope: the editors need a pool of experts who are better qualified to select reviewers and provide their own comments on the manuscript and the reviews, though the editor or editors make the final decisions.
4. being the editor of a journal. Variations on this include: (a) being the one and only chief editor of a journal, and (b) being one of two or more editors who are autonomous in their decision-making. And of course these editors can either be part of a system that has associate editors or an active editorial board, or may themselves be directly involved in selecting reviewers.
I have been or currently am involved in one or more of each of the above except 4a.
Somewhere in the blogosphere recently -- I am sorry I don't remember the blog, as I was roaming somewhat indiscriminately at the time -- I read someone's opinion that editors should be eliminated so that reviewers and authors could communicate more directly. I don't remember if the person holding this opinion has ever been an editor, but I'm inclined to think not.
As anyone who has received reviews for their own manuscripts likely knows, reviews vary a lot in terms of constructiveness, thoroughness, usefulness, and politeness. Part of an editor's job is to even out some of the roughness of the reviews, to provide more substance if the reviews lack it (in some cases by finding additional reviewers), to guide authors regarding revisions, and to reconcile disparate reviews or choose one review over another as being more instructive. In rare cases, editors rescind reviews or reviewer comments that are offensive. We keep the system running despite the vagaries of the reviewers (and authors).
Being an editor of a journal that covers a broad range of topics is difficult because you have to rely so much on the expertise of others, but it is also difficult editing a more specialized journal in which you know many of the authors and reviewers. In that situation, your own professional interactions with specific individuals are at stake and may be influenced by your editorial work.
In my work as an editor, I have found it essential to have a good online editing system that keeps track of reviewer data such as: time since last review for this journal (so I don't overload anyone with too many reviews), average time it takes a reviewer to submit a review, and information about the reviewing habits of an individual (e.g., do they routinely return thorough and useful reviews or are their reviews shallow and useless?). If someone is known to me in advance to be a not-so-great reviewer but is nevertheless someone who can provide an important point of view, I might try to get an additional reviewer for the manuscript in question.
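The reviewer-tracking data described above can be pictured as a small record per reviewer plus a simple decision rule. This is purely an illustrative sketch in Python -- the field names, the 90-day gap, and the `needs_backup_reviewer` heuristic are my own assumptions, not features of any real manuscript-tracking system:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical reviewer record; the fields mirror the kinds of data
# mentioned above (time since last review, average turnaround,
# reviewing habits), but names and types are invented for this sketch.
@dataclass
class ReviewerRecord:
    name: str
    last_review: date            # date of this person's last review for the journal
    avg_turnaround_days: float   # average time to submit a review
    thorough: bool               # crude flag for whether reviews are usually useful

def needs_backup_reviewer(rec: ReviewerRecord, today: date,
                          min_gap_days: int = 90) -> bool:
    """Return True if an additional reviewer should be lined up:
    either this person reviewed too recently (risk of overloading them)
    or their reviews tend to be shallow."""
    recently_used = (today - rec.last_review).days < min_gap_days
    return recently_used or not rec.thorough

# Example: a reviewer with an important point of view but shallow reviews
# triggers the "get an extra reviewer" rule described in the post.
rec = ReviewerRecord("Dr. A", date(2024, 1, 1), 21.0, thorough=False)
print(needs_backup_reviewer(rec, date(2024, 6, 1)))  # True
```

The point of the sketch is only that the editorial decision ("do I need another reviewer?") can be driven mechanically by a couple of stored facts per reviewer, which is what a good online editing system makes possible.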
I try to keep the system moving as rapidly as possible, but of course I am somewhat at the mercy of the reviewers for this. Even so, when reviewing time becomes too prolonged, I have several contingency plans: (1) I have a small group of willing "emergency reviewers" who might be able to provide a rapid review; I try not to use this group if at all possible, but they have saved the day on a number of occasions; (2) I act as a reviewer myself, if I feel qualified to comment on the manuscript's topic; or (3) if I have one thorough review in hand, I will just use that and add my own comments (not as a reviewer, but just to note anything I think the one reviewer missed). In some cases a delinquent review is submitted in time to be of use during the revision process. So far I have not had any situations in which a late review contained information that would have changed my editorial decision on a manuscript if I had had the review earlier, but perhaps I have just been lucky.
It has been important for me to find the right balance in terms of time I spend on editing tasks and the types of editing activities that I do. At the moment, I am quite content with my editing experiences both in terms of types and the time involved. Perhaps in the future I will want to branch out to other editorial experiences, for the challenge or for variety, but right now, things are good the way they are.
I have also served on a search committee that evaluated candidates to be editor of a journal, and this was kind of interesting because each candidate had a different combination of skills. But which was the right combination?
Some of the candidates were excellent scientists and would have added prestige to the journal, but it was clear (in some cases by their own admission) that they did not have the organizational skills or the time to do a good job. Other candidates seemed to have excellent organizational skills and editorial experience, but they were not well respected as scientists. We didn't want someone whose major qualifications were clerical skills. And other candidates had other liabilities (e.g., a candidate who had lots of editorial experience but who had been sanctioned for plagiarism). And what if someone is an excellent scientist and quite organized, but is a polarizing figure, well known for being involved in disputes that at times involved unprofessional behavior? Is that relevant to an individual's qualifications to be an editor? In fact, it probably is quite relevant.
But why be so picky? These people were interested in spending vast amounts of time for no/low pay as a professional service. Shouldn't a journal be happy just to find someone willing to do the "job"?
Fortunately, amazingly, we eventually found a person who was excellent in all respects for this particular position. I guess as long as there are people like that who have the energy, skills, and personality to succeed in the job, the system will continue to function.
I don't know what it's like in other fields, but as an editor of a journal in my field of the physical sciences, I have been impressed with the dedication and care that many reviewers take with their reviews. Therefore, despite the peer reviewing system's flaws -- and colleagues and I recently encountered a rather shocking example of one of these flaws (perhaps more on that some other time) -- my experience has shown that it is mostly a good system that involves a lot of conscientious reviewers and editors who give their time and share their expertise to make it work.
[I haven't decided yet about Monday's topic, but I have a few more things to say on the topic of being an Editor. Also, perhaps we can all share our most disturbing experiences with reviewers and editors next week, to balance out today's mostly-positive view of the peer review system. So save your stories of that sort for next week!]
14 comments:
I have noticed that those who complain about the peer review system (justifiably, at times) can hardly ever recommend a better system that would actually work.
Might you want to tell us on Monday a bit about what makes a good reviewer? I never got any training in how to review papers, and there is very little feedback in the process, so it is hard to know if the editor thinks I am doing a good job. For example, can a review be too long? When a paper is really dismal, do I really have to itemise every flaw?
As always, FSP, thanks for a timely blog! I have a few questions:
1. I realize that this will probably vary greatly from field to field and journal to journal, but how are editors generally compensated (if at all)?
2. It seems, at least in my field, journal editors maintain active research groups (so Editor becomes one of many hats an individual might wear). Is this true in a broad sense, or is it more common that editors are 'full time' editors (so wearing ONLY the Editor hat)? Or maybe I should ask, what is the spread (for example, 75:25, 50:50, 25:75)?
3. What is the function of a 'Managing Editor?'
The only alternative to the standard peer review system that I have heard of that is appealing is still peer review, but completely anonymous. The authors are anonymous as well as the reviewers. I don't know if it ever happens in the sciences.
Of course, in some small fields, everyone will know what everyone is doing. But in larger fields, I think this might be more fair, especially for anyone not well established in the US scientific mainstream: students, postdocs, researchers abroad, researchers at smaller schools, etc.
"Might you want to tell us on Monday a bit about what makes a good reviewer? I never got any training in how to review papers, and there is very little feedback in the process, so it is hard to know if the editor thinks I am doing a good job. For example, can a review be too long? When a paper is really dismal, do I really have to itemise every flaw?"
Our grad program tries to train grad students to be good paper reviewers. When we do journal club presentations (weekly for most lab groups), the point is as much to train reviewers as it is to learn the content of the paper. This goal is helped by the fact that most of the papers we end up reading turn out to have some major flaws that *should* have been caught by the reviewers.
Yes, bad papers take longer to review than good ones. Marginal papers (ones with a good idea but a couple of major flaws) take even longer, as the goal then is to try to rescue the paper and make it publishable. I have been a reviewer for papers where the reviews and the authors' answers to the reviewers' comments were 3-5x the length of the paper. I have often written reviews myself that were half the length of the paper---providing pointers to tutorial information that the authors should have known, but apparently did not, or explaining why their analysis was flawed and how they might fix it.
The point of peer reviewing is not just to provide a yes/no publication decision (that is the editor's job), but to try to make the published papers the best they can be.
Off topic, but, btw:
Sexism exists not just in academia, as you have exposed very well in this blog. It is rampant everywhere else too.
In fact it exists in nature also, as you can see from this news article.
"The only alternative to the standard peer review system that I have heard of that is appealing is still peer review, but completely anonymous. The authors are anonymous as well as the reviewers. I don't know if it ever happens in the sciences."
This is the norm in statistics. The main US statistics journal, J. Am. Stat. Assoc., does reviews double-blind, for instance.
I've only been reviewing for a couple of years, and I've often worried about whether I'm doing a good job. Are my reviews too superficial, too short, too long, off-track? Recently I found out that one of the journal websites where I submit reviews will allow me to view the other reviews written on that paper (only after I've submitted my own review). It was such a relief - I found out that I usually picked up on exactly the same problems the other reviewers did, but also that I spotted some things they didn't and they spotted some things I didn't, so it restored my faith in the system.
In my experience, editors usually don't do anything useful beyond the clerical. They don't filter out offensive reviews, they just forward them along (something any email program could do).
Even in the best-case scenario, where they agree with the authors but not the reviewers, they aren't allowed to make decisions independently without editorial board approval, even assuming they knew enough about the science to do so.
But my field is just interdisciplinary enough to be a major PITA that way. If you have 3 reviewers and none of them can evaluate the entire paper, what are the chances that the editor knows all about it and can make an informed independent decision? Virtually zero.
Personally, I like the ideas of open publishing and open reviews where everyone signs their names, or double-blind review. I don't think these models have been tested enough to rule them out. "peer" review is a joke anyway when you know (as in my field) that your work is being reviewed by random people, your competitors, or their grad students.
FSP, have you considered that you're listed as an editorial board member as a token woman? I noticed recently that many journals in my field have ZERO women on their editorial boards, and wondered why there isn't any pressure on them to be more "diverse"?
I wish there was a way for me to read lots of good and bad reviews in my field, so that I can get an idea of the extremes that exist and hopefully aim for the good end of the spectrum.
Unfortunately I usually only see reviews for my own papers, and of course I think anyone who dislikes my papers is a hopeless jerk. :P
I've been thinking about the quality of my reviews, too. What I usually do is specifically ask the editor to forward me (anonymously) the other reviewer's comments. The reactions I've got to this were always positive and encouraging.
As always, an excellent blog post. In response to the comment from MsPhD - in addition to being a FSP (full), I am an editor at an upper tier society journal. Along with my fellow editors, I do far more than clerical work. I think carefully about appropriate reviewers after reading the paper, read and think about the reviews, and offer my own comments in the final letter when appropriate. I also talk via email or phone with other editors if - as happens in my multi-disciplinary field - there is an aspect of a given paper for which I need additional expertise. My fellow editors follow similar procedures. Although my experience with peer review is not uniformly positive (as I doubt anyone's is), it is certainly difficult for me to envision a better system. I do not plan on holding this position for longer than two terms, as it is a significant time commitment, but I feel that it is an important contribution to my scientific community. In addition, I have significantly broadened my general scientific knowledge by being an editor, and that has no doubt benefited my own paper and grant writing endeavors.
Thanks so much for this post, it was the one that filled an insight gap more than almost any other post I can remember.
MsPhD - In my experience, editors at the major interdisciplinary journals (think "Journal of Neuroscience" as an example) gain a remarkable sense of what the issues are across a very large swath of the field. I don't know whether they are chosen as editors because they can do this or whether they gain this over the years of reading lots of reviews, but it's pretty clear that scientists who are editors (the dual-hat people we've been talking about) actually know the issues.
Second, editors are definitely doing much more than just sending reviews back. Editors have to judge the validity and the importance of reviews. Reviews are rarely black and white. Even when they are, I know of several cases where an editor has overridden a review because the reviewer was wrong.
Third, on the double-blind issue. I suspect this might work for a mathematical or theoretical field where truly everything is in the paper. But for most data-rich fields, it is literally impossible to include all of the raw data in the paper. Therefore, it is important to know how much you trust a lab to be reliable. For example, IMO a lab that has never done the technique before should show enough detail to demonstrate that they know what they are doing, while a lab that has long established that they know how to do a technique need only say "we did technique X". In most of these fields, reviewers generally know who is reliable about data collection and who needs extra scrutiny. Yes, it's an in-group network, but that doesn't change the validity of the peer review as long as it is possible for new members to break in and as long as old hats don't get free rides.