Simulation book does not compute (Ch 1)
I’m reading the recently published book “The Relativistic Brain: How it works and why it cannot be simulated by a Turing machine” by Miguel Nicolelis and Ronald Cicurel. Good news: it only cost me $1.50 for the Kindle edition. Bad news: the arguments for their anti-simulation stance seem as poor as their archrival Ray Kurzweil’s arguments about his singularity theory. I think most real computational neuroscientists agree that Kurzweil’s singularity isn’t coming anytime soon. However, Nicolelis and Cicurel go to the opposite extreme, claiming that simulating a “real” brain is absolutely impossible. I was shocked that, right out of the gate, they seem to discount the entire field of computational neuroscience! I doubt they really think computational neuroscience is useless overall. Without having finished the book, I am still hoping they are just overly enthusiastic about the narrower, more practical debate over exact replication of the human brain.
It’s an enticing read because it has gotten more and more shocking as I keep going! Ready? Here we go… I was surprised from the start with their self-advertising about their brain-machine-interface (BMI) exoskeleton project (which was hyped up for the opening ceremony of the men’s 2014 soccer World Cup). In the field of BMI, that work is certainly important. But what does this have to do with simulating brains on computers? Is it supposed to give the authors credibility? To me, it emphasizes that they haven’t spent their time actually trying to simulate brains. More importantly, it leaves the reader wondering whether they even know anything about the field of computational neuroscience.
The primary goal in Ch 1 seems to be to impress upon the reader that the brain is highly complex, to the point of being unpredictable. For example, they state that “some evidence suggests that the same combination of neurons is never repeated to produce the same movement.” They discuss what they call the “context principle,” in which the brain’s actions depend on its own internal state. There is nothing here that is particularly different from a computer-controlled robot that can adapt to its changing environment. They seem to be implying that the brain’s internal point of view is something we still don’t understand – and that consciousness would have to be explained at the conceptual level (as opposed to just a physical level) in order to be simulated. They go on to emphasize the wonder of plasticity by stating:
But how could a brain formed by such vast networks of intertwined neurons reshape itself so quickly, literally from moment to moment, throughout one’s entire lifetime, to adjust its internal point of view, which it uses to scrutinize any new piece of world information it encounters? That exquisite property, which creates a profound and unassailable chasm between the mammalian brain and any digital computer, defines the plasticity principle….
They provide no clear argument yet for why plasticity is an “unassailable chasm”. Basic forms of plasticity are, in fact, quite easy to achieve in simulations.
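To make that point concrete, here is a toy sketch of my own (not anything from the book): a single simulated synapse whose weight is continuously reshaped by a simple Hebbian rule with decay. The rule, rate values, and parameters are all illustrative assumptions, but they show that “moment to moment” weight change is trivial to express in a program.

```python
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """One time step of Hebbian plasticity with passive decay.

    The weight strengthens when pre- and postsynaptic activity
    coincide ("cells that fire together wire together") and slowly
    decays otherwise. All parameter values here are arbitrary.
    """
    return w + lr * pre * post - decay * w

# Correlated activity: the synapse potentiates.
w = 0.5
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w > 0.5)   # the weight has grown

# Uncorrelated activity: the synapse decays.
w2 = 0.5
for _ in range(10):
    w2 = hebbian_update(w2, pre=1.0, post=0.0)
print(w2 < 0.5)  # the weight has shrunk
```

This is obviously nothing like a full mammalian brain, but it does show that plasticity per se poses no barrier to simulation; the real question is one of scale and of knowing the right rules, not of possibility in principle.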
One other comment on their statement above: Why do they bother saying “mammalian brain”? Does this mean their arguments will still be safe once C. elegans is considered to have been successfully simulated? (To read about C. elegans, try this article by Ferris Jabr, which I commented on in a previous post.) They have no discussion at all of the incremental progression required for understanding (and simulating) increasing levels of complexity in the nervous systems of different animals. It’s a long road from C. elegans to a mammalian brain. No doubt about it. But does that mean it is impossible? In my opinion, their argument in Ch 1 boils down to the age-old fallacy that anything we don’t understand now will never be understood in the future.
The next chapter of the book covers the authors’ “relativistic brain theory” and what they call “neural electromagnetic fields”. It actually provides what I think are some concrete ideas on which they might build a plausible argument, if there really is one. I will cover that in a future post.