Friday, June 18, 2010

Avalanche of Useless Science

In a post last winter, I discussed whether papers that receive few or no citations are worthwhile anyway. I came up with a few reasons why they might be worthwhile, and noted that the correlation between number of citations and the "importance" of a paper may not be so great.

In an essay in The Chronicle of Higher Education this week, several researchers argue that "We Must Stop the Avalanche of Low-Quality Research". In this case, "research" means specifically "scientific research".

How do they assess what is low-quality ("redundant, inconsequential, and outright poor") research? They use the number of citations.

Uncited papers are a problem because "the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed."

There you go! A great reason to turn down a review request from an editor:

Dear Editor,

I am sorry, but I am going to have to decline your request to review this manuscript, which I happen to know in advance will never be cited, ever.


Sincerely,


CitedSciProf

What if a paper is read, but just doesn't happen to be cited? Is that OK? No, it would seem that that is not OK:

"Even if read, many articles that are not cited by anyone would seem to contain little useful information."

Ah, it would seem so, but what if the research that went into that uncited paper involved a graduate student or postdoc who learned things (e.g., facts, concepts, techniques, writing skills) that were valuable to them in predictable or unexpected ways? Is it OK then or is that not considered possible because uncited papers must be useless, by definition? This is not discussed, perhaps because it is impossible to quantify.

The essay authors take a swipe at professors who pass along reviewing responsibilities: "We all know busy professors who ask Ph.D. students to do their reviewing for them."

Actually, I all don't know them. I am sure it happens, but is it necessarily a problem? I know some professors who involve students in reviewing as part of mentoring, but the professor in those cases was closely involved in the review; the student did not do the professor's "reviewing for them". In fact, I've invited students to participate in reviews, not to pass off my responsibility, but to show the student what is involved in doing a review and to get their insights on topics that may be close to their research. It is easy to indicate in comments to an editor that Doctoral Candidate X was involved in a review.

Even so, the authors of the essay blame these professors, and by extension the Ph.D. students who do the reviews, for some of the low-quality research that gets published. The graduate students are not expert reviewers and therefore "Questionable work finds its way more easily through the review process and enters into the domain of knowledge." In fact, in many cases the graduate students, although inexperienced at reviewing, will likely do a very thorough job at the review. I don't think grad student reviewers contribute to the avalanche of low-quality published research.

So I thought the first part of this article was a bit short-sighted and over-dramatic ("The impact strikes at the heart of academe"), but what about the practical suggestions the authors propose for improving the overall culture of academe? These "fixes" include:

1. "..limit the number of papers to the best three, four, or five that a job or promotion candidate can submit. That would encourage more comprehensive and focused publishing."

I like the kernel of the idea -- that candidates who have published 3-5 excellent papers should not be at a disadvantage relative to those who have published buckets of less significant papers -- but I'm not exactly sure how that would work in real life. What do they mean by "submit"? The CV lists all of a candidate's publications, and the hiring or promotion committees with which I am familiar pick a few of these to read in depth. The application may or may not contain some or all of the candidate's reprints, but it's easy enough to get access to whatever papers we want to read.

I agree that the push to publish a lot is a very real and stressful phenomenon and appreciate the need to discuss solutions to this. Even so, in the searches with which I have been involved, candidates with a few great papers had a distinct advantage over those with many papers that were deemed to be least-publishable units (LPU).

I think the problem of publication quantity vs. quality might be more severe for tenure and promotion than for hiring, but even here I have seen that candidates with fewer total papers but more excellent ones are not at a disadvantage relative to those with 47 LPU.

2. "..make more use of citation and journal "impact factors," from Thomson ISI. The scores measure the citation visibility of established journals and of researchers who publish in them. By that index, Nature and Science score about 30. Most major disciplinary journals, though, score 1 to 2, the vast majority score below 1, and some are hardly visible at all. If we add those scores to a researcher's publication record, the publications on a CV might look considerably different than a mere list does."

Oh no.. not that again. The only Science worth doing will be published in Science? That places a lot of faith in the editors and reviewers of these journals and constrains the type of research that is published.

I have absolutely no problem publishing in a disciplinary journal with impact factor of 2-4. These are excellent journals, read by all active researchers in my field. It is bizarre to compare them unfavorably with Nature and Science, as if papers in a journal with an impact factor of 3 are hardly worth reading, much less writing.
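(For anyone who hasn't looked closely at what these impact factor numbers actually mean: the standard two-year impact factor is, roughly, the average number of times a journal's recent articles are cited in a given year. With made-up numbers for illustration: a journal that published 200 citable items in 2008-2009, and whose items drew 600 citations during 2010, has a 2010 impact factor of 600/200 = 3. In other words, an "IF 3" journal is one whose typical paper is cited about three times in its first couple of years -- hardly a mark of uselessness.)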

3. ".. change the length of papers published in print: Limit manuscripts to five to six journal-length pages, as Nature and Science do, and put a longer version up on a journal's Web site."

I'm fine with that. It wouldn't have any major practical effect on people like me who do all journal reading online anyway, but for those individuals and institutions who still pay for print journals, this could help with costs, library resources, etc.


Let's assume that these "fixes" really do "fix" some of the problems in academe -- e.g., the pressure to publish early and often -- so what then?

"..our suggested changes would allow academe to revert to its proper focus on quality research and rededicate itself to the sober pursuit of knowledge."

Maybe that's my problem: I enjoy my research too much and forgot what an entirely sober pursuit it should be. I guess the essay authors and I are just not on the same page.

40 comments:

  1. Part of the argument is predicated on the estimate that 60% of papers go 4 years without being cited, which cannot be right. For example, only 1 of my 98 papers went 4 years without a citation.

    I'd argue that the striking "fact" rests on some misinterpretation -- a count of abstracts rather than papers -- and that the situation is nowhere near as dire as described.

    However, I still think the flood of literature is very difficult to keep up with, and it renders us narrower than we could be if papers were better written and reported only the most definitive results. Journals should be harsher in demanding revisions and in rejecting papers.

    Writing uncited papers for practice does not seem like a good use of researchers' time, reviewers' time, or science resources. I don't buy the argument that uncited papers are often widely read and appreciated. Writing bad papers is poor practice for students. I generally decline to review papers that have little useful content, i.e., are likely to be poorly cited.

  2. Huh. In my field there is almost no correlation between citation rates and scientific value. The bandwagon people get more citations for garbage than the independent people, but then some of the bandwagon stuff is OK.

  3. In my field there are few journals over IF1, none over IF2, and we never publish in either Science or Nature.
    OTOH, our papers get cited for 10+ years (the really good ones 40+). How about if we were to make a fuss about the useless, cluttering research with a half-life of only 3 years, or even much less?

  4. I found this essay to be totally behind the curve on what is really happening. Mostly I hear complaints about over-reliance on citations as a score of quality. I would say that a paper with no citations is likely not to be very useful. There will be some hidden gems out there that no one noticed, but it is far more likely that a highly cited paper, or even a moderately cited one, is useful. People need to think about citations in a statistical way.

    As far as getting grad students to review: I have suggested to editors that they send the paper to a student instead if I think the student knows more about the topic. I don't pass papers to students to review without getting the editor's approval.

    And who reads print journals any more?

    OTOH, the authors' desire to have less research published could result in more publication bias.

    I wrote a blog post detailing these points a bit more:

  5. There is at least one UK university science dept which has a policy of ONLY looking at a candidate's top 4 papers in hiring and promotion decisions. The committee are not allowed to look at the rest of the publication record. The idea is that this should make things fairer for women, because it assesses people on the best they can do rather than rewarding long hours of work and lots of little papers. I think it could be interesting if this kind of thing were more common, but it would definitely add to the pressure to get sci/nat papers only.

  6. The graduate students are not expert reviewers and therefore "Questionable work finds its way more easily through the review process and enters into the domain of knowledge." In fact, in many cases the graduate students, although inexperienced at reviewing, will likely do a very thorough job at the review. I don't think grad student reviewers contribute to the avalanche of low-quality published research.

    In my extensive experience, it's the opposite. I involve graduate and post-doctoral trainees in all of the manuscript reviewing I do, and they are invariably much harsher on authors than I would be. I temper their harshness by reminding them to consider whether they might be holding authors to a standard that they would never hold themselves.

  7. I find the idea of papers with no citations being useless somewhat absurd. I can think of papers that were written many, many years ago that sat there and languished. Once computers started being used widely for scientific research, people started digging into some of these papers, and they are now some of the most widely cited in the field.

    Yeah, I imagine there is a lot of fluff out there. But sometimes research can also seem trivial until the right confluence of factors highlights its importance.

  8. I read this article, and I also found that some of the points they discussed were worth talking about, but that the "solutions" they were proposing were almost total bull...
    Thanks for putting it in a more detailed and clear way.

  9. Hear, hear. I especially agree with your comments on area-specific journals. People don't realise that an IF of >3 is generally a very good journal in lots of fields. If I publish a paper that attracts four citations I'm doing, on average, quite well. In my experience most people are surprised by this. It seems that unless I'm being read as much as Dan Brown I'm not doing much worthwhile.

    Can I admit to a sin here? Most of my papers and grant applications contain references to Science and Nature papers even if those papers have been useless to me. Partly the reason is that they are almost always seminal papers that deserve credit. But also, I must admit that it helps sell my sub-field as being interesting. This is self-perpetuating and I hate it, but what can I do? Why would Very Good Journal publish good work in an area that Extremely Good Journal couldn't care less about?

    As to your comments on post-docs and students, I couldn't agree more. Perhaps I should find a way to cite all the papers that have helped me over the years technique-wise so they get their deserved props.

  10. Well, as just one example:

    In graduate school I was involved in developing a new technology that has revolutionized many aspects of my field. Now, before you think I have a big head about this, the role I played was as a tiny cog in a giant sprawling chaotic machine.

    But, I made at least one specific contribution, which was to get fascinated by and intensively go after one particular problem that would have prevented general use of the technology beyond a few niche areas. It was a huge amount of work to tease out some of the subtle physical effects and assemble a full explanation of the problem. I wrote and published a paper about it. It's received a few citations, not many.

    But, every single one of the systems around the world using this technology now incorporates the "fix" I figured out. The end scientific user doesn't need to know about it, but most of them would not be able to use the technology without this "fix". (Without it the application of the technology would have been limited to a very few niche cases.)

    So, I consider this paper reasonably influential within my field. I've had numerous people talk and email with me about the paper and its impacts and I've consulted on several implementations of this "fix" at large well-known institutions.

    But, the paper didn't receive many citations... Which, in the end is just fine by me. I'd rather have it be influential without many citations than vice versa.

  11. This week's Nature has a piece on metrics, and states that 89% of Nature's impact factor comes from 25% of the papers.

    Interesting, no?

  12. I do like the idea of submitting copies of 3-5 papers with job applications. I applied for a postdoc fellowship earlier this year that had that policy in place. It actually worked out really well for me as a graduate student, as I could then send in a great manuscript that we had just submitted, which might have otherwise been overlooked on the CV. It also put me at less of a disadvantage compared to more senior applicants, since the review panel got to see the quality of my research rather than just the short list of papers on my CV. At least, it must have helped, since I got the fellowship!

  13. I agree with your points. Also, the view held by the CHE article authors might be seen as a subtle discrimination. The authors show disdain for the new "international" journals. So research of a non-Western origin is sub-par?

  14. a graduate student or postdoc who learned things (e.g., facts, concepts, techniques, writing skills) that were valuable to them in predictable or unexpected ways?

    I like this thought -- and what about the papers that come next? Doesn't research build on itself, so that a researcher writes one paper, which leads to the next, more cited paper, and so on? I have no clue, other than what I see through my husband's struggle through academia and reading blogs like yours. Thanks!

  15. I think there are many examples of citations not really correlating with usefulness. A lot of very highly cited papers are used to justify the importance of a certain field - often in the introductory paragraph - and didn't actually contribute useful information for the work in question.

    At the other extreme are the negative-results papers -- ones that show a particular method doesn't work (or works very poorly), a promising theory is definitely incorrect, or a potentially interesting effect is too small to be important. I think it is important to publish such things, since it could save a lot of time (and money!) if we know what is not worth pursuing further. Yet such papers might get very few citations, precisely because they direct readers to move on to another research area.

    I was recently in this situation, where we obtained very convincing data that a potential physical effect that was generating a lot of buzz was much too small to be useful for the proposed applications. There was some discussion about whether we should really publish such a thing, which would in some ways kill an entire (albeit narrow) research area. But don't we as scientists have a responsibility to report our findings truthfully -- even if it will not increase our citation count?

  16. As someone whose Ph.D. research hinged on a 10-year-old finding that no one paid much attention to at the time, I have no problem with people contributing their cog to the literature, however little the impact. We should be constantly revisiting little facts and past observations and applying our new knowledge to them. It doesn't hurt any grad student or postdoc to follow ideas on PubMed to see where they lead, however insignificant, since we're (supposedly) TRAINED to evaluate the data. That's why we can look at a poorer paper and determine what information we can get out of it. Studies are seldom perfect, even in the big 3 journals. And we are supposed to be doing innovative research, not all chasing the "hot idea" and publishing on the same 3 proteins.

    On a second note.. isn't anyone else frustrated by papers with 7 figures and then 15 pages of supplementary data? When does it become absurd to keep asking for more supplementary material? Writing shorter papers with vast amounts of online data isn't going to work well with the current promotion standards.

  17. Agree with many of the points you have made in your post. Unfortunately, it appears that almost any paper can get published these days because people know the right people, especially from groups with brand name recognition.
    I would also like to ponder the extraordinary proliferation of journals over the last couple of years. It seems like there is a new journal every month in some *hot* areas. Perhaps if publishers focused on quality rather than quantity, it could also help.

  18. Bah.

    The lack of subtlety and reflection in that essay is really galling. Almost as galling as the first author in the comments defending his rights as an English professor to identify and fix the problems in a vast field that he evidently knows nothing about. As if we need the outside perspective to set us straight. As if no scientist has ever pondered these problems. Good research and bad research, good lord. The authors need to read up on the history and philosophy of science.

  19. CPP is right: my PI lets me participate in reviewing manuscripts. I compared my comments to my boss's, and I had torn the paper a new asshole. And he is completely right about holding yourself to that standard. His realism tempered my naive idealism, but it's a great experience.

  20. At the risk of shingling my posts here:

    While there is variance and unfairness, better papers tend to get more citations - arguing otherwise is very strange.

    Putting on blinders such that only a few of an applicant's papers are considered is crazy. Maybe they should read only the first 100 words, or read every other word, or not look at the figures (sarcasm).

    A problematic writing motivation is to have pubs to cite for every grant, whether the results are of interest or not. I don't see any way to stem this tendency except more disciplined reviewing than exists.

  21. "I have absolutely no problem publishing in a disciplinary journal with impact factor of 2-4. These are excellent journals, read by all active researchers in my field. It is bizarre to compare them unfavorably with Nature and Science, as if papers in a journal with an impact factor of 3 are hardly worth reading, much less writing."

    Your post addresses a number of important issues. One is whether, though they went too far, the authors you discuss have a point.

    In my field of biomedical research, I think there is a middle ground. I publish most of my work in disciplinary journals, all of them run by scientists, and think that much of the best (and most believable) work in our field is published there.

    However, I do think that there is a floor of research quality below which work should not go. This is not work that is published in journals with impact factors of 2-6, but work published in journals that none of my productive colleagues have ever read or thought of using for one of our papers. In fields with clinical relevance, there is a vast body of literature. I just did a quick PubMed search for the tumor suppressor gene on which part of my lab works--2500 publications since 1991. I imagine only about 500 give us significant new insights, either in clinical or basic science. Probably 500 report new mutations in some specific patient population.

    It's a tricky issue, but as NIH has to spend less money on more scientists, it's something we need to consider.

    Mark P

  22. Thanks for your post. I read the article in the Chronicle and felt guilty for publishing two papers as a grad student, and felt discouraged because my impression is that the sexiest, not the highest quality, research is often what ends up in Nature and Science.

  23. I have a question for all of you experienced researchers who only seem to be doing 'useful' research. How in the world do you know whether the experiment you did and are submitting for publication will be cited in the next 4 years, or later? I am sure that if anyone were able to predict that, we wouldn't spend time preparing that manuscript and dealing with the sarcasm of the reviewers. Here is an example: I was on the committee of a PhD student who couldn't name one journal he read, yet he has 4 pubs in international journals like Oecologia and ... So did he read the papers, or even the abstracts of the papers he cited? Apparently not, because he himself said he just picked them up from another article. So he may have actually read only one previous article, and that one came from his advisor's lab. So, for the person who said the abstracts may not be well written -- my take is that most people only read the titles. So if you cannot say it all in those 15-20 words, then your paper is less likely to be cited. Once it falls between the cracks, there is even less chance that it will get picked up later, because most people only cite from the references of other pubs. I know many of my papers from 2002 go uncited even when they are exactly similar to one that just came out. I wonder how those even got published, because there is really nothing new in them, but then wait, these are the elites who get reviewed by people they 'know and care about'.

  24. The Lesser Half, 6/18/2010 02:57:00 PM

    Inspiration for a good idea can come from a bad paper, a bad talk, or even a single bad measurement. Haven't you ever seen a paper and thought, "we can do that, but much better", or "that is the wrong application of that technique, but I've got a better one"?

  25. How in the world do you know whether the experiment you did and are submitting for publication will be cited in the next 4 years or later?

    For the really important stuff you know it's important, and those papers get cited well irrespective of where they are published.

    For the medium-hot work, sometimes they get cited, sometimes not so much, and the venue matters. But good work is never completely ignored.

    I second FSP's attitude about publishing in good disciplinary journals with impact factors of 2-4. These are usually solid papers that are read by the relevant people and have staying power. I have some papers in such journals that are cited 10+ times per year.

  26. There is a (moderate) correlation between good papers and number of citations. What the CHE rant misses is that usually it is hard to tell beforehand which papers will end up being moderately cited. To give an example, 15 years ago I published a series of papers on a single topic, all of seemingly comparable quality. Fast forward 15 years and the number of citations for those papers lies between 50+ for the max and 2 for the min.

    I think the same applies to most of the supposed "chaff" out there. Those papers are the price we have to pay for letting the community publish lots of modest-quality papers and then select the ones which turn out to be useful and eventually are cited more often.

  27. Considering that several recent papers and blog posts on the use of citations, including this post, indicate just how much variability there is (and how questionable the accuracy) in how other researchers cite papers, citations shouldn't even be used in judging the impact of a paper.

  28. Citations are not a great measure of a paper's importance, but they are a moderately good measure of a paper's impact. It is possible to guess whether a paper will be highly cited or lightly cited. I have citation counts ranging from 0 to over 800, and I could fairly confidently pick out the extremes ahead of time. The ones in between---harder to figure out which ones would get read and for how long they would be popular.

    One of my papers has been continually cited since it came out in 1983, but most papers hit a peak a couple years after publication then stop getting cited.

  29. I am really surprised that no one mentioned that papers that turn out to be dead wrong often have a large number of citations. Reason alone to be cautious about relying solely upon IF. Alas I did not finish reading the original article because I needed to finish my latest contribution to the avalanche.

  30. Well, crap. I suppose, according to the article, I should just give up now since my undergrad-driven research is destined for a relatively low-impact journal. With few citations (although a scientist can dream, right?).

  31. I read upwards of 100 articles for every 3-month project; does that mean that I have to cite all of them, regardless of whether they are relevant or not, just so that fellow scientists can be kept in a job? I actually like both positive- and negative-result papers. I may not cite all of them, because of citation limits, and I only cite the papers that already cite all the other papers; that's useful to reduce the clutter in the intros.

    Still, using citations alone to judge a person's merit is a poor method in my view. I guess the top-4-articles submission is the lesser of two evils, but what do I know, I'm just a grad student!

  32. There are so many problems with the hegemony of the impact factor. This article provides a good starting point.

    excerpt:
    "Just as scientists would not accept the findings in a scientific paper without seeing the primary data, so should they not rely on Thomson Scientific's impact factor, which is based on hidden data. As more publication and citation data become available to the public through services like PubMed, PubMed Central, and Google Scholar®, we hope that people will begin to develop their own metrics for assessing scientific quality rather than rely on an ill-defined and manifestly unscientific number."

    Also as a librarian I have helped many faculty members seek citation counts for their work and the number of discrepancies that come up for the same author or article within different sections of Web of Science does not inspire confidence. I really think the academic community needs to stop relying on faulty numbers just because they like to have numbers.

    As this report puts it: "The sole reliance on citation data provides at best an incomplete and often shallow understanding of research—an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments."

  33. And don't forget useful papers (in the sense that they are read by other scientists and inspire them) that are not cited!

    I don't know how common this is in general, but I have lately noticed this kind of (systematic) failure to cite related papers in a new field where I am trying to apply my expertise.

  34. "I only cite the papers that already cite all the other papers; that's useful to reduce the clutter in the intros."

    That's an unscholarly approach. The right thing to do is to trace ideas back to their sources, and cite the originators of the ideas, rather than the review articles and bandwagon jumpers.

    It is ok to cite a review occasionally, as a review, but citing the originators of an idea is a major purpose of citation. If you have to trim citation lists, get rid of the stuff not directly relevant, the review articles, and the "me-too" papers.

  35. It's not an avalanche of useless science, it's an avalanche of science. The original authors recognize this, in passing, before going on to recommend increasing selectivity as the solution. Actually the science being done today, at least in my field, is probably more competent (which is not to say inspired) than ever before.

    As an ex-journal editor I can almost guarantee that trying to be more selective will make it harder for risky, controversial or unpopular research to get out there. Reviewers simply don't have a crystal ball. And who will pick what gets reviewed?

    I wish the solution were this easy.

    Incidentally, I agree with PhysioProf that the younger the reviewer, the harsher they're likely to be. I blame journal clubs & similar activities, where the goal is to rip the paper to shreds rather than to understand the new insight the paper provides.

  36. Lots of attacks here on the straw-man argument of only using citations.

    I saw no one propose to ONLY use citations to evaluate papers and scientists. Everyone recognizes citations have pitfalls.

    Plus some posts claim citation counts are not at all helpful.

    Those posters seem to recommend that the people doing the judging should read all the papers, for all the candidates, to learn not only what the papers claim to have discovered but what was truly new at the time each paper appeared -- so as to give everyone an equal chance. I sense naïveté.

  37. My advisor recently instructed me to read a paper that was, by all accounts, very interesting. She pointed out serious issues with the data analysis, saying that she wouldn't have published the paper without major refinements and probably a boatload more research.

    On the other hand, the new methodology which was set forth in this paper was awesome; I'm about 99.99% sure that my thesis research is going to utilize it.

    So: "quality" research? Maybe. Quality paper? Only sort of. Will it get cites? Yes (from me, if no one else). Was it worth publishing? Heck yeah.

  38. Both number of citations and number of papers play a more important role as a metric in my field than I would like -- as a postdoc, I definitely feel compelled to try to work on things that are likely to get cited, and that aren't so ambitious that I can't finish them relatively quickly. But to a large extent I think citations correlate well with importance.

    Recently I've had a frustrating experience, though; after a few years of trying to convince people working on one hot topic (slowly dying down from hotness) that they were overlooking an important flaw, I finally took the time to write up a careful, well-argued explication of this flaw and publish it. The top people in the field all told me privately that it was a nice paper and they were glad I wrote it up, but it seems on track to be my least-cited paper ever, because most people are ignoring it rather than cite something that highlights a problem in what they're doing. I'm beginning to think that writing this paper while a postdoc was a singularly bad idea, interesting science aside.

  39. Just wanted to say thanks to you for laying out this analysis of the CHE piece, which bothered me to no end.

  40. The graduate students are not expert reviewers and therefore "Questionable work finds its way more easily through the review process and enters into the domain of knowledge."
    This could not be farther from the truth. Graduate students are the most vigorous reviewers, who try hard to find every weakness in the manuscript, check every citation, etc. I've been there myself. Later, one of my submitted manuscripts was slashed to pieces by a student reviewer, whose 3-page (!) review was simply copy/pasted by the professor. If students were always involved in the review process, no poor paper would stand a chance.
