
Losing traction from retractions

Some colleagues of mine were surprised when I told them about two retractions from the Journal of Neuroscience in the past 6 months. Out of curiosity, I looked for retractions in other neuroscience journals. I was surprised to find four others for 2013 – 2014, though they are from journals of lesser impact than J Neurosci. A list of all six retractions, most of which I found using retractionwatch.com, appears at the end of this post. (For those not familiar with scientific publishing, see this Wikipedia article on retractions.) First, I’ll remind you of two famous retraction stories from Science magazine, one of the highest profile journals in (of course) science. There is former stem-cell star Hwang Woo-suk, whose two Science articles from 2004 and 2005 were retracted as part of his international demise. More recently, there is psychological shocker Diederik Stapel, whose most famous retraction came in 2011. These cases remind us of how bad it can get.

I won’t comment on the issue of fraud, though Kresimir Josic has a nice commentary on the temptation of high impact journals and how it may lead to fraud. There are different causes and ramifications for retractions. Blatant fraud is certainly the most alarming and destructive. I mainly want to point out the inherent danger of more innocent mistakes that may or may not be caught after an article is published.

In my search for neuroscience retractions, the cases I found are mostly due to methodological errors in the analyses. However, it is important to realize that a bad experiment creates bad (but very real) data. This is very relevant in computational neuroscience where we manufacture our own data. The famous cases of Hwang Woo-suk and Diederik Stapel involved fabricating data that didn’t exist. More commonly, scientists can make the innocent mistake of incorrectly designing/performing an experiment. In computational neuroscience, the experiment is often a computer simulation.

There are basically two ways in which a researcher discovers a mistake in an experiment. The most common way is by getting unexpected results, either good or bad. If the results seem too good to be true, a good researcher will find out why, and a poor researcher will hopefully get stopped by someone else such as a reviewer, PI, etc. If the results are disappointing, one actually hopes it WAS a mistake in the experiment!

A second way a researcher may discover a mistake is by explaining the details of the methods in the article. All authors have had the experience of not fully understanding their own methods until they had to write everything down in detail. The process of explaining can reveal issues to the author or someone else such as a reviewer, PI, etc. The issue of poor reviewers is a different topic by itself, and I’m not going there! However, documenting methods is where computational neuroscience could better police itself… but does not.

The problem is when methods are incorrectly or insufficiently explained. Just saying that you used standard method X, or citing an algorithm paper, does not mean you did it correctly. I said that computational neuroscience could police itself better because, unlike wet lab experiments, simulations should be perfectly reproducible in every detail. There has long been a movement in computational neuroscience to make models publicly available, but that practice is far from being the norm. Likewise, source code for analyses is rarely provided in any scientific field.

I have hope that things will change. PLoS is revolutionizing the publishing world in many ways, one of which is an open access data policy. That is clearly the best place to start (with the data), and perhaps some day the norm will be to provide analysis source code with it.

Finally, below is my list of neuroscience retractions during 2013 – 2014, mostly thanks to retractionwatch.com. You can look all of these up on retractionwatch.com where you may find more details as well as retractions in other fields besides neuroscience. Notice that in most cases, the problem is claimed to be an analysis error.

#1. The Journal of Neuroscience, December 11, 2013
Publication date: December 7, 2011 (2 years earlier)
Reason for retraction: The Journal of Neuroscience received a report of an investigation… that describes substantial data misrepresentation.
Article: Zeng et al., Epigenetic Enhancement of BDNF Signaling Rescues Synaptic Plasticity in Aging

#2. The Journal of Neuroscience, April 2014
Publication date: May 2013 (1 year earlier)
Reason for retraction: The authors report, “…we discovered errors in the quantification of the expression and/or phosphorylation of a subset of signaling pathways…. Despite these errors, the major conclusions of the paper remain substantiated.”
Article: Li et al., Elevation of Brain Magnesium Prevents and Reverses Cognitive Deficits and Synaptic Loss in Alzheimer’s Disease Mouse Model

#3. Cerebral Cortex, August 2013
Publication date: November 2012 (9 months earlier)
Reason for retraction: fMRI data… were not analyzed properly.
Article: Braet et al., The Emergence of Orthographic Word Representations in the Brain: Evaluating a Neural Shape-Based Framework Using fMRI and the HMAX Model

#4. Brain, Behavior, and Immunity, February 2014
Publication date: August 2013 (6 months earlier)
Reason for retraction: The merge of laboratory results and other survey data used in the paper resulted in an error regarding the identification codes.
Article: Kern et al., Lower CSF interleukin-6 predicts future depression in a population-based sample of older women followed for 17 years

#5. Glia, March 2014
Publication date: December 2013 (3 months earlier)
Reason for retraction: Some of the results in Figures 1C, 4A, 4C, 5A, 5C and 7A-D were incorrect and, therefore, misleading.
Article: Morga et al., Jagged1 regulates the activation of astrocytes via modulation of NFkB and JAK/STAT/SOCS pathways

#6. Frontiers in Human Neuroscience, December 2013
Publication date: June 2013 (6 months earlier)
Reason for retraction: Systematic human error in coding the name of the files…. The final result of the paper… is therefore not correct.
Article: Chavan et al., Spontaneous pre-stimulus fluctuations in the activity of right fronto-parietal areas influence inhibitory control performance


Chaos: If the horseshoe fits

This is a commentary on Chapter 2 of the book Chaos by James Gleick, which I discussed in my previous post. The chapter is titled “Revolution” and looks at the 1960s and 70s, when mathematicians reversed their position on unpredictability and began resolving a longstanding estrangement between physics and mathematics. Gleick discusses the importance of Steve Smale, a mathematician who developed a method for thinking about chaos that would become known as the “Smale horseshoe”. For an overview of the horseshoe, try either Wikipedia or Scholarpedia. Here I’ll mention some trivia about the horseshoe, followed by two big points that hit me.

First, let me offer some trivia. Smale’s life story is interesting in itself. There’s even a biography available by Steve Batterson. Smale actively protested the USA’s involvement in the Vietnam War and supposedly was controversial enough to lose his NSF funding as a result. The origin of his “horseshoe” is itself a colorful story, set on Copacabana beach in Rio de Janeiro (see here or here for his own telling of the story). He commented once that the horseshoe shape itself was suggested in 1960 by Lee Neuwirth, who had seen Smale’s less recognizable figures. Another tidbit I like is Smale’s own use of coin flips as an example of chaos (again see here or here). It’s a simple, accessible example that I haven’t seen used elsewhere to demonstrate sensitivity to initial conditions, and Smale relates it to the horseshoe. More on that later.

Now I’ll describe two aspects of the horseshoe that struck me as I dug deeper. The first is a point emphasized by Gleick concerning Smale’s own personal revolution, which mirrored the larger transformation that was occurring within mathematics. Regarding an earlier paper on dynamics, Smale says (see earlier links), “I was delighted with a conjecture in that paper which had as a consequence (in modern terminology) ‘chaos doesn’t exist’!” Gleick mentions that someone wrote to Smale to prove him wrong, citing a system with chaotic properties known as the van der Pol oscillator. Gleick doesn’t tell us who the “someone” was, but Smale explained that Norman Levinson wrote the letter that was to prove so cataclysmic for him.

I find this story significant because of my own failed attempt to discover where Smale originally supposed that systems with chaotic properties could not exist. Later I’ll give some details on my failed search. For now, I’ll just say that hours and hours of reading Smale’s papers gave me no clue about his claim. My point is that the future Fields Medalist had to be told he was completely wrong, and that strikes me as a hallmark of the classic Kuhn paradigm shift.

There’s a second major point that has impacted me about Smale’s horseshoe. It’s important to me because I’m still a neophyte in my understanding of chaos theory, and it highlights an aspect of the horseshoe that probably confuses quite a few neophytes like me. The issue is what happens at the ends of the horseshoe. They’re called “caps” in Wikipedia and “semi-discs” in Scholarpedia. Whatever you call them, their importance is not just unclear in such articles; it seems to be almost completely ignored. This brings me to my personal hero of dynamical systems, Steven Strogatz, and his book Nonlinear Dynamics and Chaos. In that book, Strogatz actually spares the reader from confronting the Smale horseshoe head-on. Instead, he saves it for an exercise, for the reason I am about to reveal.

Strogatz first presents a version without the ends (or caps or semi-discs) and describes it as a “pastry map” where everything is stretched and squished and nothing is left out. This uses the same concept as the squished putty in the Wikipedia figure. He later explains that the horseshoe ends actually account for what he calls “transient chaos”, and I have not seen an accessible discussion of this concept anywhere else (yet). In transient chaos, the behavior is still sensitive to initial conditions, but the system eventually escapes the aperiodic behavior. Remember the coin-flip example I cited from Smale? That’s actually a form of transient chaos in that it’s quite unpredictable but eventually settles to an equilibrium. Strogatz uses a rolling die as an example and also points out a regime in the Lorenz equations for transient chaos. This is a major point for me because it resolves one frustration in attempting to understand the Smale horseshoe. It’s equally important because it bridges the gap between standard examples of chaos that oscillate forever and other unpredictable cases like a coin-flip or a rolling die.
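
To make transient chaos a little more concrete without the full horseshoe machinery, here is a minimal Python sketch of my own (a toy illustration, not Smale’s construction and not Strogatz’s exercise). For the logistic map with the parameter pushed slightly above 4, most orbits bounce around erratically for a while and then escape the unit interval for good, and the time of escape is exquisitely sensitive to the initial condition:

```python
# Toy example of transient chaos: the logistic map x -> r*x*(1-x) with r > 4.
# Orbits behave chaotically inside [0, 1] for a while, then escape forever.

def escape_time(x0, r=4.1, max_steps=10_000):
    """Number of iterations an orbit stays inside [0, 1] before escaping."""
    x = x0
    for n in range(max_steps):
        if not 0.0 <= x <= 1.0:
            return n          # the orbit has left [0, 1] and will now run off to -infinity
        x = r * x * (1.0 - x)
    return max_steps          # never escaped within the step budget

# Two initial conditions differing by one part in a billion give very different
# escape times: unpredictable while it lasts, yet not chaotic in the long run.
for x0 in (0.400000000, 0.400000001):
    print(f"x0 = {x0:.9f}  escapes [0,1] after {escape_time(x0)} steps")
```

That is roughly the role the horseshoe’s caps play: trajectories are stretched and folded for a while and then leave the chaotic region for good.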

This ends the main points I had about Gleick’s chapter on the revolution. For posterity, I will close with some details about my failed search for proof of Smale’s personal revolution. I’ll also give a couple of references for his early horseshoe publications. Smale posted his personal bibliography here. I consulted a few colleagues, and one of them believes that Smale made his initial false conjecture here: Morse inequalities for a dynamical system, Bulletin of the AMS, 66 (1960), pp. 43–49. However, in this and other closely-dated publications, I am not able to relate Smale’s theorems about diffeomorphisms and structural stability to what I understand about chaotic systems. As for the horseshoe itself, that same colleague believes it may have first appeared in a 1961 conference paper, but I could not confirm that. The horseshoe transformation, in the form of equations, seems to be described here: Diffeomorphisms with many periodic points, Differential and Combinatorial Topology (A symposium in honor of Marston Morse), Princeton University Press (1965), pp. 63–80. This can also be found in The Collected Papers of Stephen Smale: Volume 2. However, the first graphical depiction of the horseshoe seems to be in this 1963 paper (in Russian no less!): A structurally stable differentiable homeomorphism with an infinite number of periodic points, Report on the Symposium on Non Linear Oscillations, Kiev Mathematics Institute (1963), pp. 365–366. The first English publication with a graphic of the horseshoe seems to be this: Differentiable dynamical systems, Bulletin of the AMS, 73 (1967), pp. 747–817. Along with the classic horseshoe, it also displays Smale’s more complicated geometries that apparently preceded the horseshoe idea he received from Neuwirth.

Chaos: lethal butterflies (bugs) in computers

After talking about chaos in a previous post, I’m finally getting around to reading the 1987 bestseller Chaos by James Gleick. The first chapter is named “The Butterfly Effect”. Before I complain about Gleick’s chapter title (which inspired the title of my post), let me point out some great material Gleick includes that I didn’t know about.

Gleick can tell a good story, and the Lorenz story seems to be a real-life Butterfly Effect (which Gleick ironically doesn’t seem to point out, but anyway…). Many people may know how Lorenz stumbled onto chaos theory by typing the number 0.506 instead of 0.506127. The history prior to that is just as interesting, I think. For example, he started his career as a mathematician, but World War II apparently sucked him into meteorology, where he began to study weather forecasting. What are the chances, huh?
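
If you want to see for yourself what a truncation like 0.506 versus 0.506127 can do, here is a rough Python sketch. (A caveat: Lorenz’s 1961 run used a twelve-variable weather model; the three-variable system below is the simplified “Lorenz-63” system he published later, and the numbers are mine, chosen only to illustrate the divergence.)

```python
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations (crude, but enough for a demo)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

# Two runs whose initial conditions differ only in the digits beyond the third
# decimal place -- the same kind of truncation as 0.506 versus 0.506127.
a = np.array([0.506127, 1.0, 1.0])
b = np.array([0.506, 1.0, 1.0])

for step in range(1, 30_001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 5_000 == 0:
        print(f"t = {step * 0.001:5.1f}   separation = {np.linalg.norm(a - b):.6f}")
```

The separation grows roughly exponentially until the two runs are as different from each other as any two random states on the attractor.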

The other part of the first chapter that was fun to read was the historical aspect of the computer hardware. As I’ll discuss shortly, Lorenz stirred controversy by claiming that mathematics (and computers in particular) face a fundamental limit on how far into the future the weather can be predicted. Gleick observes that John von Neumann was famous for bringing computers to bear on weather forecasting, so it is striking to me that Lorenz so quickly discovered a theoretical limitation of that very enterprise.

I also enjoyed thinking about the actual hardware that Lorenz used. Gleick refers to Lorenz as having used a Royal McBee. That was the company name; the actual model used by Lorenz apparently was the LGP-30. Among its interesting features were an oscilloscope-based numerical display and a laborious booting procedure involving paper tape! Now, I’m just old enough to have actually used a teletype interface. So it was fun to learn that Lorenz was creating graphs of his output by printing a single character on each row of the streaming teletype output. However, what really makes Lorenz heroic, in my opinion, is that he worked through his findings so carefully at a time when, as Gleick points out, numerical error was the first explanation that came to mind when anyone was confronted with his results.

Finally, I want to vent my frustration about two things. The first is a minor issue: Gleick refers to a mechanical analogy of chaos that he calls the “Lorenzian Waterwheel”. The waterwheel was conceived and developed by Willem Malkus, not Lorenz (search YouTube for “chaos waterwheel” to see plenty). Ironically, Gleick actually talks about Malkus but does not give him credit for the waterwheel!

My second, and biggest, frustration relates to Gleick’s title for the first chapter: “The Butterfly Effect”. My main complaint is that he may be largely responsible for a horrible misuse of the term in popular culture. Maybe you’ve seen the 2004 movie “The Butterfly Effect”, or you remember the chaos expert named Ian Malcolm in the 1993 movie “Jurassic Park”. The term was made famous by Lorenz’s own 1972 presentation paper titled “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” (I found a PDF here.) Here is Lorenz’s point: long-range weather prediction is impossible because we can’t measure the entire globe with enough resolution. I believe Gleick understands the point, but I don’t think he clearly explained why Lorenz used the analogy. The butterfly effect is not an analogy of cause and effect. Instead it’s an analogy of the opposite: how cause and effect may be impossible to determine.

Below I have a quote from the original presentation, but first I’ll explain the misunderstanding. The butterfly is only one of countless variables. It has no significance by itself because the entire system of variables is necessary for the outcome to occur. The seemingly monumental implication of cause and effect is just an illusion that results from focusing on a tiny piece of the puzzle (the butterfly) and ignoring the rest (the weather throughout all of Earth). Lorenz is equally famous for finding an example of unpredictability that has just a few variables (literally only 3), but the relationships among those variables involve nothing so disparate or shocking as a butterfly and a tornado.

Also frustrating is how the butterfly analogy has become entangled with the concept of sensitivity to initial conditions. Gleick seems to have set a precedent for this by literally declaring that they are the same. Certainly they are related: if the butterfly didn’t flap its wings, the outcome might be different, but that wasn’t the whole of Lorenz’s point. The real intent of the analogy was that a system that large, with that many variables and that broad range of scale, is beyond our practical ability to predict. Maybe Ashton Kutcher actually understood this in his movie “The Butterfly Effect”! After reading Gleick, I discovered that Peter Dizikes wrote a commentary in 2008 for the Boston Globe titled “The meaning of the butterfly: Why pop culture loves the ‘butterfly effect,’ and gets it totally wrong”. Apparently I’m not the only one who finds the term “butterfly effect” to be misused. It’s not unlike how the term “chaos” is also misused in popular culture.

So that you can judge for yourself, here is an excerpt from Lorenz’s 1972 presentation: “More generally, I am proposing that over the years minuscule disturbances neither increase nor decrease the frequency of occurrence of various weather events such as tornados; the most that they may do is to modify the sequence in which these events occur. The question which really interests us is whether they can do even this–whether, for example, two particular weather situations differing by as little as the immediate influence of a single butterfly will generally after sufficient time evolve into two situations differing by as much as the presence of a tornado. In more technical language, is the behavior of the atmosphere unstable with respect to perturbations of small amplitude? …Since we do not know exactly how many butterflies there are, nor where they are all located, let alone which ones are flapping their wings at any instant, we cannot, if the answer to our question is affirmative, accurately predict the occurrence of tornados at a sufficiently distant future time. More significantly, our general failure to detect systems even as large as thunderstorms when they slip between weather stations may impair our ability to predict the general weather pattern even in the near future.”

A Marr-ed view of connectomes

I saw an interesting post by Kresimir Josic on whole-brain simulations, and it included a link to a video of a debate between Sebastian Seung and Anthony Movshon about the importance of connectomes. The video captured my attention because of two mini-debates rolled into the big one. The first was a mini-debate about David Marr (the reason for the pun in the title of this post), and the second was about the idea of consolidating major funding to a task like building a human connectome. For background on the pro-connectome side, there is a TED talk explaining Seung’s passion. I highly recommend it simply for the amazing animations. I’m not sure what to suggest for an accessible coverage of the anti-connectome side except for this article by Ferris Jabr. At the end of this post, I’ll mention my favorite discovery in the debate: a 3-D worm!

The mini-debate that struck me most was when Seung complained that the ghost of David Marr still haunts the halls of MIT (it’s around 56:50 in the video). Seung made a striking suggestion that Marr, if he were alive today, wouldn’t make the same claims. I presume he was referring to Marr’s “three levels” and the separability of an algorithm from the hardware with which it is implemented. Many neuroscientists feel this separation is not helpful in understanding the brain. It seems that a Marr-ed view (pun intended!) of the connectome is a central part of the debate. Marr left us tragically some time ago, but Movshon suggested that Matteo Carandini is still carrying the torch, as explained in his recent paper From circuits to behavior: a bridge too far?. What’s important in the mini-debate over Marr is the relative importance of the connectome in understanding the algorithms behind various brain functions. And that brings us to the second mini-debate I mentioned.

The second mini-debate that interested me concerns whether funding should be directed toward the human connectome at the expense of other pursuits. The article by Ferris Jabr describes a fear among some people of creating a “Manhattan Project” for the human connectome but ending up with little to show for it. I was surprised that Seung did not seem to defend against Movshon’s attacks on the seemingly anti-climactic completion of the C. elegans connectome. Maybe he thought people should just read the article by Ferris Jabr, which is very supportive of what the C. elegans connectome has yielded. What Seung did try to emphasize was that obtaining the human connectome is part of a long-term vision that does not promise immediate rewards. Movshon conceded that he and many others in the “cottage industry” of neuroscience tend to focus mainly on short-term payoff.

I’m happy to say that I did find an immediate reward by following Seung’s advice in the video to read Jabr’s article. My reward was the discovery of a 3-D interactive webpage of a C. elegans connectome. (Instructions: to see the connectome, pull the slider on the left very far down in order to reveal the neurons and connections. Then spin and zoom for a fun exploration!) I think this is awesome and demonstrates how the connectome contains information that can be highly accessible to a very broad range of people. If Henry Markram gives us public access like this, I would be willing to overlook his grandiose and misleading promises about his own connectome project. Note that Markram is perhaps the most extreme connectomist there is – maybe on the order of a David Marr. However, the information he is seeking will have tremendous potential, even if the impact he promises does not come to be.

Chaos: too cool for its own good

The term “chaos” can mean very different things, depending on whether or not you are a mathematician. Personally, I dislike how it has become a highly misunderstood buzzword outside of mathematics. Is the term perhaps too cool for its own good?

The term was famously introduced to mathematics in 1975 by Li and Yorke in their paper “Period Three Implies Chaos”. Since then, chaos theory has attracted a great deal of attention. In fields outside of mathematics, it seems as though many people hope it is the magic answer behind the nasty randomness that afflicts the universe. Fortunately, at a recent conference on complexity in acute illness, I was delighted to hear John Doyle and Sven Zenker both criticize the chaos bandwagon. Sven Zenker even suggested that human physiology probably does not contain any true chaos. While he intended this to be a side point to his main talk, the audience jumped on it immediately, and he had to fight to get the talk back on track.

Ironically, the basic misunderstanding about chaos theory may be one of the main reasons why it is so fascinating. That misunderstanding is a belief that chaos is a form of randomness, when in fact chaos and randomness are fundamentally opposites! I have been impressed with how many people outside of pure mathematics are aware of the importance of sensitivity to initial conditions in defining a chaotic system. However, I don’t think people understand that sensitivity is not in itself such an amazing thing. It is actually quite easy to make a system that is sensitive to initial conditions by simply adding noise. The reason that sensitivity is important is that it is not easy to accomplish without the use of noise.
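
To see the difference, here is a tiny Python illustration of my own (arbitrary toy systems, nothing more): a perfectly boring, stable system plus noise produces runs that separate unpredictably, which is the kind of “sensitivity” noise buys you, while the deterministic logistic map pulls off the same trick with no noise at all, starting from initial conditions that differ by one part in a trillion. Only the second kind of separation is what chaos is about.

```python
import random

# (a) "Sensitivity" the cheap way: a stable linear system plus noise.
#     Two runs start from the SAME point; independent noise alone drives them apart.
def noisy_run(steps=50, seed=0):
    rng = random.Random(seed)
    x = 1.0
    for _ in range(steps):
        x = 0.9 * x + rng.gauss(0.0, 0.1)   # contracting dynamics + injected noise
    return x

# (b) Sensitivity the interesting way: the deterministic logistic map at r = 4.
#     No noise anywhere; a difference of 1e-12 in the starting point is enough.
def logistic_run(x0, steps=50):
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

print("noisy, same start, different noise: ", noisy_run(seed=1), noisy_run(seed=2))
print("deterministic, x0 vs x0 + 1e-12:    ", logistic_run(0.3), logistic_run(0.3 + 1e-12))
```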

I think the irony of the misunderstanding extends even further. In many cases, it is relatively easy to explain to a layman the difference between the fundamentals of chaos and randomness. Yet many such laymen are empiricists who ultimately want to understand the systems they observe. To me, the irony lies in the fact that actually discerning between chaotic and random (or stochastic) systems from observed data is significantly harder, both to do and to explain. As discussed in Lacasa and Toral (2010), this may require attractor reconstruction and quantification of divergence rates.
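
To give a flavor of what “attractor reconstruction and quantification of divergence rates” means in practice, here is a deliberately naive Python sketch of my own (not the method of Lacasa and Toral or any other specific paper, and with arbitrary embedding parameters). It delay-embeds a scalar time series and measures how quickly nearby points separate. The punchline is the trap mentioned above: the naive recipe reports a positive divergence rate for plain white noise too, which is exactly why telling chaos from randomness in real data takes far more care.

```python
import numpy as np

def delay_embed(series, dim=2, tau=1):
    """Takens-style delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def naive_divergence_rate(series, dim=2, tau=1, horizon=5, exclude=10):
    """Crude caricature of a largest-Lyapunov estimate: pair each embedded point with its
    nearest (temporally separated) neighbor and see how fast the pair separates."""
    emb = delay_embed(series, dim, tau)
    n = len(emb) - horizon
    rates = []
    for i in range(n):
        d = np.linalg.norm(emb[:n] - emb[i], axis=1)
        d[max(0, i - exclude) : i + exclude] = np.inf   # exclude the point itself and close-in-time neighbors
        j = int(np.argmin(d))
        d0 = np.linalg.norm(emb[i] - emb[j])
        d1 = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
        if d0 > 0 and d1 > 0:
            rates.append(np.log(d1 / d0) / horizon)
    return float(np.mean(rates))

# A chaotic series (logistic map; true exponent is ln 2 ~ 0.69) versus plain white noise.
x = np.empty(3000)
x[0] = 0.3
for k in range(1, 3000):
    x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
noise = np.random.default_rng(0).random(3000)

print("logistic map :", naive_divergence_rate(x))
print("white noise  :", naive_divergence_rate(noise))
# Both numbers come out positive: naive divergence "detects" chaos in pure noise,
# so distinguishing the two requires more careful methods than the definition suggests.
```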

In the end, I have mixed feelings about the term “chaos”. On one hand, I find chaos theory very fascinating and I’m glad that so many others do too. On the other hand, I don’t think those people are all interested in it for the same reasons I am. I wonder if the term is so misleading that it gives false hope to those who misunderstand it. I am particularly frightened by its frequent use in neuroscience where researchers still struggle to understand the role of noise. Hopefully chaos theory will not one day just be a joke to the rest of the world because they misunderstood it.

The “art” of simulation

A student of mine named Larry Muhlstein has been investigating spike-time reliability with stochastic neuron models. He often looks at heatmap plots of reliability as a function of certain parameters, as was done in Brette and Guigon (2003). Some of these plots end up having aesthetic qualities that we quite enjoy. For example, Larry created the following image while trying to reproduce some results from Brette and Guigon (2003). It is a plot of spike-time reliability as a function of stimulus amplitude (vertical axis) and time (horizontal axis). It is titled “Walking Round”.

[Image: “Walking Round”, by Larry Muhlstein]
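
For readers curious about what is actually being computed behind such an image, here is a minimal Python sketch of one way to build the raw material for a reliability heatmap. The neuron model, the parameters, and the reliability measure (the fraction of trials with a spike in each time bin) are placeholder choices of mine; this is not Larry’s code and not the exact measure used by Brette and Guigon (2003).

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.1, 500.0                                    # ms
t = np.arange(0.0, T, dt)

def lif_spikes(stim, noise_sd=0.1, tau=20.0, v_th=1.0, v_reset=0.0):
    """One trial of a noisy leaky integrate-and-fire neuron; returns spike times (ms)."""
    v, spikes = 0.0, []
    for i, s in enumerate(stim):
        v += (-v + s) * dt / tau + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            spikes.append(t[i])
            v = v_reset
    return np.array(spikes)

def timewise_reliability(amp, n_trials=20, bin_ms=5.0):
    """Placeholder reliability measure: for each time bin, the fraction of trials with a spike.
    One row per stimulus amplitude gives a heatmap of reliability vs amplitude and time."""
    stim = amp * (1.0 + 0.5 * np.sin(2 * np.pi * 5.0 * t / 1000.0))   # arbitrary fluctuating drive
    bins = np.arange(0.0, T + bin_ms, bin_ms)
    hits = np.array([np.histogram(lif_spikes(stim), bins)[0] > 0 for _ in range(n_trials)])
    return hits.mean(axis=0)

heatmap = np.array([timewise_reliability(amp) for amp in (0.8, 1.2, 1.6, 2.0)])
print(heatmap.shape)               # (stimulus amplitudes, time bins)
print(heatmap[:, :10].round(2))    # a peek at the first few time bins
```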

Clamping down on the “clamp”

In my last post, I discussed the cutting-edge experimental technique known as dynamic clamp. Neuronal modelers often encounter the term “clamp” when looking at electrophysiology articles, but it may be difficult to reconcile the term across its various uses. In this post I will try to clarify what “clamp” really means. First, there are only two general electrical procedures you can perform on a neuron: (1) measure the voltage or (2) inject a current. Each of these involves different forms of a clamp. Measuring the membrane voltage (procedure #1) requires a basic technique called “patch clamp”, which is described on this webpage. If you read that webpage carefully, hopefully you will realize where the “patch” part of patch clamp comes from: it involves isolating a small area (patch) of a cell membrane.

Now where does the “clamp” come into play? Measuring the voltage is a passive procedure where we simply observe what the cell is doing. Injecting a current (procedure #2 from above) is used to experiment with a cell. There are three different approaches to current injection, and each one is referred to as a different type of clamp.

1. Current clamp: This is simply direct current injection. It is the process of choosing a constant current and injecting it using an amplifier. Using the term “clamp” here can be confusing, but it is a common way to distinguish it from #2 below. It is sometimes explained that the amount of current is what is being clamped (fixed). Note that the experimenter can simultaneously measure the membrane voltage using the same electrode that is used for the current injection. So technically, just observing the cell without injecting current is a form of current clamp where the injected current is zero.

2. Voltage clamp: This is a more sophisticated form of current injection. In the 1940s, voltage clamp was a new technique that changed everything. A particular ion conductance might be voltage-dependent, so you want to be able to measure that conductance at different voltages to see how they’re related. Measuring the conductance at a particular voltage is much like measuring the resistance of a resistor. One way is to inject a current, measure the resulting voltage, and compute the resistance. (Or do the reverse.) The problem with a neuron is that the voltage won’t sit still! Once the voltage changes, it may change one or more of the conductances, which will add more current and change the voltage further. In order to keep the voltage constant, you can use a differential amplifier to monitor the difference between the desired voltage and the actual voltage. Then you simply inject more or less current to counteract any changes in voltage. This is the reason for the term “voltage clamp”. (A toy simulation of this feedback loop appears after this list.) A good diagram can be found here on the website by Dr. Michael Mann (it’s Fig 3-19).

3. Dynamic clamp: This is still basically current injection like #1 and #2. It’s very much like voltage clamp in that the injected current is continuously adjusted to achieve some result. The difference is that you can do things that are much more complicated than just clamping the voltage at a constant value. My last post explains more about this.
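
To make the feedback idea behind #2 concrete, here is a toy Python simulation (made-up parameters and units, not a model of any real cell or amplifier): a passive membrane with one voltage-dependent conductance is held at a command voltage by injecting a current proportional to the error between the command and the measured voltage. Once the voltage settles, the injected current reads out whatever ionic current flows at the clamped voltage, which is exactly what the experimenter wants to measure.

```python
import numpy as np

# Toy voltage clamp: hold a membrane at a command voltage with proportional feedback.
dt     = 0.01     # ms
C      = 1.0      # membrane capacitance (arbitrary units)
g_leak = 0.1      # leak conductance, reversal at -70 mV
g_max  = 1.0      # maximal voltage-gated conductance, reversal at 0 mV
gain   = 50.0     # feedback gain of the "amplifier"
v_cmd  = -20.0    # command voltage (mV)

def g_gate(v):
    """A made-up sigmoidal voltage-dependent conductance."""
    return g_max / (1.0 + np.exp(-(v + 40.0) / 5.0))

v = -70.0
for step in range(int(20.0 / dt)):                                  # 20 ms of simulated time
    i_ion   = g_leak * (v - (-70.0)) + g_gate(v) * (v - 0.0)        # total ionic current at this voltage
    i_clamp = gain * (v_cmd - v)                                    # feedback-injected current
    if step % 500 == 0:
        print(f"t={step * dt:5.1f} ms   V={v:7.2f}   I_clamp={i_clamp:8.2f}   I_ion={i_ion:8.2f}")
    v += dt * (i_clamp - i_ion) / C

# Once V has settled at the command voltage, I_clamp equals I_ion: the recorded
# clamp current is a direct readout of the ionic current at that voltage.
```

Dynamic clamp (#3) uses the same kind of real-time feedback loop, except that the injected current is computed from a model conductance that depends on the measured voltage, rather than from a simple error signal.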

So what is a “patch clamp” again? It is a general term that encompasses all three of the techniques listed above. Hopefully you now have a firm mental clamp on the “clamp”.
