Are Pernicious Publish-or-Perish Pressures Responsible for Rising Retraction Rates?

The image of a scientist as portrayed in pop culture is of a white-coated figure hunched over a microscope, absorbed in understanding nature's wonders. But professional scientists, especially those on academia's tenure track, know that science involves more than just conducting research. Many post-docs compete for a small number of tenure-track positions, and since those positions are far from guaranteed, the pressure has been mounting in recent decades. The pressure to excel in risky scientific research, amid fierce competition for publications and funding, has given rise to the saying "publish or perish". Many scientists have a great deal to lose if they are rejected from the tenure track or fail to secure a position after several nerve-wracking years. While many may simply lose sleep worrying, some take unethical and fraudulent measures to produce the desired results and pad their publication lists.

Dr. Evil or just succumbing to pressure?

If the scientific community had perceived fraud as the work of a mere few desperate scientists, a recent publication in the Proceedings of the National Academy of Sciences (PNAS) has torn away that misleading mask to reveal a worrisome reality: scientific fraud is on the rise. The authors analyzed all retractions indexed by PubMed since 1977 and reached several conclusions:

  • Of the 2,047 total retraction cases, in 16% the stated cause was changed from "error" to "falsification/fraud". In some cases initially reported as "error", a deeper investigation was needed to uncover the underlying fraudulent intent.
  • Retractions due to fraud took longer to be issued, and there was a slight correlation with high-impact-factor journals.
  • Many retraction announcements are non-descriptive and evasive, and most are put forward by the author(s) themselves.
  • Many retractions due to fraud originated from countries with well-respected scientific centers, such as the United States, Germany and Japan, and in many cases appeared in high-impact-factor journals. Plagiarism and duplicate publication were more often found in lower-impact-factor journals.
  • A correlation was found between a journal's impact factor and the number of publications retracted due to fraud/suspected fraud and error. This suggests that the high expected payoff of publishing in prestigious journals may increase the motivation to falsify data so that it fits and supports a particular claim, making rejection of the paper less likely.
  • Most troubling is a concluding statement which points out that "...only a fraction of fraudulent articles are retracted." (p. 17032)

The impact of a retracted paper on the scientific community can be tremendous, especially if the field of interest is "hot", with many laboratories worldwide basing their assumptions and scientific work on publications in the field. Retraction Watch, a blog that reports and discusses retraction notices, also follows chronically fraudulent scientists such as Dr. Naoki Mori, for whom no fewer than 30 papers (!) have already been retracted by various journals.

This study, like many others published on this problematic issue, sets off an alarm about a troubling trend that is likely to grow in the years to come unless the publication scheme or the measures used to evaluate academic excellence are changed accordingly. A system that suited most of the 20th century may no longer fit today. In 1973, one in two biologists completing a post-doc had secured a tenure-track position within six years, compared with only one in six in 2006 (NYTimes.com). Ultimately, the criteria for securing tenure-track jobs should be based not only on the number and rank of publications but also on measures of research quality, the researcher's educational abilities, their contribution to the university or institution, and so on. Even so, one should remember that human nature is diverse, and most researchers are honest enough not to be tempted to manipulate their data, no matter the consequences.

What's your take? Should researchers be evaluated by the quality of their research rather than their publication rank and frequency?