
Losing traction from retractions

April 27, 2014

Some colleagues of mine were surprised when I told them about two retractions from the Journal of Neuroscience in the past six months. Out of curiosity, I looked for retractions in other neuroscience journals and was surprised to find four more for 2013–2014, though from journals of lesser impact than J Neurosci. A list of all six retractions appears at the end of this post; most were found via retractionwatch.com. (For those not familiar with scientific publishing, see this Wikipedia article on retractions.) But first, a reminder of two famous retraction stories from Science magazine, one of the highest-profile journals in (of course) science. There is former stem-cell star Hwang Woo-suk, whose two Science articles from 2004 and 2005 were retracted as part of his international demise. More recently, there is psychological shocker Diederik Stapel, whose most famous retraction came in 2011. These cases remind us of how bad it can get.

I won’t comment on the issue of fraud, though Krešimir Josić has a nice commentary on the temptation of high-impact journals and how it may lead to fraud. Retractions have different causes and ramifications. Blatant fraud is certainly the most alarming and destructive, but I mainly want to point out the inherent danger of more innocent mistakes that may or may not be caught after an article is published.

In my search for neuroscience retractions, the cases I found were mostly due to methodological errors in the analyses. However, it is important to realize that a bad experiment creates bad (but very real) data. This is especially relevant in computational neuroscience, where we manufacture our own data. The famous cases of Hwang Woo-suk and Diederik Stapel involved fabricating data that never existed. More commonly, scientists make the innocent mistake of incorrectly designing or performing an experiment. In computational neuroscience, the experiment is often a computer simulation.
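To make that concrete, here is a toy sketch in Python (entirely hypothetical, and not drawn from any of the retractions below) of how an innocent simulation bug produces clean, very real-looking data that is simply wrong:

```python
# Hypothetical toy simulation: exponential decay of a membrane voltage
# toward rest. One innocent mistake -- the time constant is given in
# milliseconds but the time step in seconds -- and the run still
# completes happily, producing clean but wrong data.
tau_ms = 20.0        # intended membrane time constant: 20 ms
dt = 0.001           # time step: 1 ms, expressed in SECONDS
v = 1.0              # initial voltage (arbitrary units)
trace = []

for _ in range(100):
    v -= (dt / tau_ms) * v   # BUG: ms and s mixed; decay is ~1000x too slow
    trace.append(v)

# Correct version for comparison: convert tau to seconds first.
tau_s = tau_ms / 1000.0
v_ok, trace_ok = 1.0, []
for _ in range(100):
    v_ok -= (dt / tau_s) * v_ok
    trace_ok.append(v_ok)

print(f"buggy final voltage:   {trace[-1]:.4f}")     # ~0.995, barely decayed
print(f"correct final voltage: {trace_ok[-1]:.4f}")  # ~0.006, decayed as intended
```

Nothing crashes and both traces look plausible on their own; only checking the units (or writing the methods down carefully) exposes the error.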

There are basically two ways in which a researcher discovers a mistake in an experiment. The most common is by getting unexpected results, either good or bad. If the results seem too good to be true, a good researcher will find out why, and a poor researcher will hopefully be stopped by someone else, such as a reviewer or PI. If the results are disappointing, one actually hopes it WAS a mistake in the experiment!

A second way a researcher may discover a mistake is by explaining the details of the methods in the article. Every author has had the experience of not fully understanding their own methods until they had to write everything down in detail. The process of explaining can reveal issues to the author, or to someone else such as a reviewer or PI. (The issue of poor reviewers is a different topic by itself, and I’m not going there!) However, documenting methods is where computational neuroscience could better police itself… but does not.

The problem arises when methods are incorrectly or insufficiently explained. Just saying that you used standard method X, or citing an algorithm paper, does not mean you did it correctly. I said that computational neuroscience could police itself better because, unlike wet-lab experiments, simulations should be perfectly reproducible in every detail. There has long been a movement in computational neuroscience to make models publicly available, but that practice is far from the norm. Likewise, source code for analyses is rarely provided in any scientific field.
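As a minimal sketch of what “reproducible in every detail” could look like in practice (the parameter names and log file here are my own invention, not any journal’s standard), a simulation can fix its random seed and archive every parameter next to its results:

```python
import json
import random

# Hypothetical parameters for a toy threshold-crossing simulation.
# Fixing the RNG seed makes every rerun bit-for-bit identical.
params = {
    "seed": 42,
    "n_trials": 1000,
    "n_steps": 100,
    "drift": 0.01,
    "noise_sd": 0.5,
    "threshold": 1.0,
}

def run_simulation(p):
    """Fraction of trials in which accumulated noisy input crosses threshold."""
    rng = random.Random(p["seed"])
    crossings = 0
    for _ in range(p["n_trials"]):
        v = 0.0
        for _ in range(p["n_steps"]):
            v += rng.gauss(p["drift"], p["noise_sd"])
            if v >= p["threshold"]:
                crossings += 1
                break
    return crossings / p["n_trials"]

result = run_simulation(params)

# Archive the exact parameters alongside the result; sharing this file
# plus the script is what makes the "experiment" checkable by anyone.
with open("experiment_log.json", "w") as f:
    json.dump({"params": params, "crossing_rate": result}, f, indent=2)

print(f"crossing rate: {result:.3f}")  # identical on every run with this seed
```

Publishing the script and the log together is exactly the kind of self-policing the field could do but mostly does not.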

I have hope that things will change. PLoS is revolutionizing the publishing world in many ways, one of which is its open-access data policy. That is clearly the best place to start (with the data), and perhaps some day the norm will be to provide the analysis source code along with it.

Finally, below is my list of neuroscience retractions during 2013–2014, mostly thanks to retractionwatch.com. You can look each of these up on retractionwatch.com, where you may find more details as well as retractions in fields beyond neuroscience. Notice that in most cases, the problem is claimed to be an analysis error.

#1. The Journal of Neuroscience, December 11, 2013
Publication date: December 7, 2011 (2 years earlier)
Reason for retraction: The Journal of Neuroscience received a report of an investigation… that describes substantial data misrepresentation.
Article: Zeng et al., Epigenetic Enhancement of BDNF Signaling Rescues Synaptic Plasticity in Aging

#2. The Journal of Neuroscience, April 2014
Publication date: May 2013 (1 year earlier)
Reason for retraction: The authors report, “…we discovered errors in the quantification of the expression and/or phosphorylation of a subset of signaling pathways…. Despite these errors, the major conclusions of the paper remain substantiated.”
Article: Li et al., Elevation of Brain Magnesium Prevents and Reverses Cognitive Deficits and Synaptic Loss in Alzheimer’s Disease Mouse Model

#3. Cerebral Cortex, August 2013
Publication date: November 2012 (9 months earlier)
Reason for retraction: fMRI data… were not analyzed properly.
Article: Braet et al., The Emergence of Orthographic Word Representations in the Brain: Evaluating a Neural Shape-Based Framework Using fMRI and the HMAX Model

#4. Brain, Behavior, and Immunity, February 2014
Publication date: August 2013 (6 months earlier)
Reason for retraction: The merge of laboratory results and other survey data used in the paper resulted in an error regarding the identification codes.
Article: Kern et al., Lower CSF interleukin-6 predicts future depression in a population-based sample of older women followed for 17 years

#5. Glia, March 2014
Publication date: December 2013 (3 months earlier)
Reason for retraction: Some of the results in Figures 1C, 4A, 4C, 5A, 5C and 7A-D were incorrect and, therefore, misleading.
Article: Morga et al., Jagged1 regulates the activation of astrocytes via modulation of NFkB and JAK/STAT/SOCS pathways

#6. Frontiers in Human Neuroscience, December 2013
Publication date: June 2013 (6 months earlier)
Reason for retraction: Systematic human error in coding the name of the files…. The final result of the paper… is therefore not correct.
Article: Chavan et al., Spontaneous pre-stimulus fluctuations in the activity of right fronto-parietal areas influence inhibitory control performance


3 Comments
  1. josic

    This is an interesting comment – there have been a few recent articles about the improper use of statistics that may lead to errors in published research. Many refer to this classic by John Ioannidis:

    http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124

    However, the issue is fairly complex, and you can find many thoughtful replies to Ioannidis’ points in various blogs.

  2. Thanks, Kreso. I remember that Carson Chow has commented on that several times (http://sciencehouse.wordpress.com/2013/05/20/most-of-neuroscience-is-wrong/).
