
Simulation book does not compute (Ch 2)

September 28, 2015

This is a continuation of my review of “The Relativistic Brain: How it works and why it cannot be simulated by a Turing machine” by Miguel Nicolelis and Ronald Cicurel. In my last post, I discussed how chapter one emphasizes that the brain’s approach to computation is unlike that of today’s digital computers. In chapter two, the authors begin to lay out the mechanics of what they call the “relativistic brain theory”. They introduce it this way:

“According to the relativistic brain theory, complex central nervous systems like ours generate, process, and store information through the recursive interaction of a hybrid digital-analog computation engine (HDACE). In the HDACE, the digital component is defined by the spikes produced by neural networks distributed all over the brain, whereas the analog component is represented by the superimposition of time-varying, neuronal electromagnetic fields (NEMFs), generated by the flow of neuronal electrical signals through the multitude of local and distributed loops of white matter that exist in the mammalian brain.”

The reason for using the term “relativistic” does not appear to be stated explicitly anywhere in this or later chapters. I believe it refers to the mutually dependent (i.e. relative) behavior of individual cells and groups of cells, as opposed to a sequential flow of influence from one hierarchical level to another. The authors state that the electrical activity of neurons generates neuronal electromagnetic fields (NEMFs), which are more commonly known as local field potentials. Ironically, they actually reference a modeling paper on the effects of field potentials: Anastassiou et al. (2010) The Effect of Spatially Inhomogeneous Extracellular Electric Fields on Neurons. J Neurosci 30(5): 1925–1936. However, they make no mention of the computer simulation methods used in that paper! (Incidentally, there is a later, important review article which they did not mention: Buzsáki, Anastassiou and Koch (2012) The origin of extracellular fields and currents — EEG, ECoG, LFP and spikes. Nat Rev Neurosci 13(6): 407–420.)

Nicolelis and Cicurel emphasize that the brain utilizes a highly distributed approach to representation, as opposed to a centralized one. In a centralized approach, which might be typical of an engineered solution on a computer, one might expect individual neurons to strictly encode specific representations such as mental concepts or perceptions. The implication is that a digital computer cannot function in this distributed way. They claim that the distributed approach depends on the NEMFs. They state that:

“The relativistic brain theory also provides for a biological mechanism that can generate useful abstractions and generalizations quickly, something that a digital system would expend a lot of time trying to mimic.”

They also point out that:

“NEMFs that account for a given behavioral outcome can be generated by different combinations of neuronal elements, at different moments in time….”

The implication throughout all of this, with no substantial argument given, is that such mechanisms are not possible on digital computers, particularly Turing machines. They go so far as to say:

“…the relativistic brain theory predicts that key non-computable processes like perception, mental imagery, and memory recall occur on the analog domain, thanks to the emergence of analog computational engines formed by time-varying NEMFs.”

Initially, I was excited about this chapter because, compared with other modeling applications, there has been relatively little work on modeling the effects of local field potentials on computation. However, I was disappointed that the primary reasoning in the chapter seems to be a bare assertion that digital computers cannot replicate analog computational mechanisms. The authors do not address the widely used approach of describing analog dynamics with differential equations and approximating their solutions by numerical integration.
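To make that standard counterargument concrete, here is a minimal sketch in Python. This is a hypothetical toy model of my own, not the authors’ NEMF formulation: a continuous analog system, in this case a leaky integrator driven by a time-varying, field-like input, is approximated on a digital machine with Euler integration. Shrinking the step size makes the digital trajectory converge toward the continuous one, which is exactly why “analog” is not, by itself, a barrier to simulation.

```python
import math

def simulate(t_end=1.0, dt=1e-4, tau=0.02, v0=0.0):
    """Euler integration of a toy analog system:
        dv/dt = (-v + I(t)) / tau
    where I(t) is a continuous, time-varying drive (standing in for a
    field-like analog signal; an illustration only, not a model of NEMFs).
    """
    v = v0
    n = round(t_end / dt)
    for k in range(n):
        drive = math.sin(2 * math.pi * 10 * (k * dt))  # continuous drive, sampled
        v += dt * (-v + drive) / tau                   # discrete Euler step
    return v

# Halving the step size changes the answer less and less: the digital
# approximation converges toward the continuous solution.
print(abs(simulate(dt=1e-3) - simulate(dt=1e-4)))  # larger discrepancy
print(abs(simulate(dt=1e-4) - simulate(dt=1e-5)))  # smaller discrepancy
```

This is the same numerical-integration strategy that underlies essentially every biophysical simulator, so the burden is on the authors to say what, specifically, escapes it.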

It seems to me that the only real conclusion they can draw is that we don’t yet understand how the brain uses the analog mechanism to compute. In later chapters, they attempt to use mathematical arguments to attack the use of a Turing machine for brain simulation. For now, they merely throw down the gauntlet by stating that “processes like perception, mental imagery, and memory recall” are non-computable.

I am surprised that they would assert that perception and memory are non-computable, as these are such general functions with incredibly long histories in modeling. I was willing to entertain the argument that the brain’s physical mechanisms for such processes are so difficult to understand that modern numerical methods might not be sufficient. It was intriguing to think that they might try to explain the mechanism by which NEMFs are utilized in such functions. By the end, I was still left wondering why a Turing machine is not capable of simulating a mathematical model of an analog mechanism such as NEMFs.



3 Comments
  1. I’m not going to bother to read the book, but do they even understand what noncomputable means? There do exist PDEs that are noncomputable, but they only have solutions (if you can call them that) in the weak sense. Any differentiability at all renders almost all dynamics computable. There are noncomputable structures, like the Mandelbrot set, but that is different from the dynamics being noncomputable.

  2. Their definition comes in Ch 4, perhaps to build suspense? They say:

    “Computability is specifically related to the Turing Machine’s model of computation since it refers to the possibility or not of translating a mathematical formulation to an effective algorithm. Computability is thus an alpha-numerical construction and not a physical property. Since most mathematical formulations of natural phenomena cannot be reduced to an algorithm, they are defined as non-computable functions. For instance, there is no general procedure that allows a systematic debugging of a digital computer. If one defines a function F which would examine any given program running on a given machine and which would take the value 1 each time it finds a bug and zero otherwise, F would be non-computable. Non-computability here is illustrated by the fact that there is no algorithmic expression of F that can detect in advance any possible future bug that may hamper the work of a computer. Whatever one does, the machine will always exhibit unexpected faulty behaviors that could not be predicted when the computer and the software were manufactured.”

  3. This sounds like they have no idea what they are talking about. Sure, the halting problem is noncomputable, but almost all mathematical functions are computable.
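The commenters’ point is easy to make concrete: the book’s bug-detecting function F is the halting problem in disguise. A sketch (with hypothetical toy programs of my own, not from the book): a digital machine can simulate any program step by step and report “halted” whenever it does halt, but if the step budget runs out it can only answer “unknown”, and no single finite budget settles every case. That diagonal fact is a property of one specific question about programs; it says nothing about differential-equation models of physical dynamics.

```python
def run_bounded(program, state, max_steps):
    """Step a tiny abstract 'program' (a function from state to next state,
    returning None to signal halting) for at most max_steps steps.
    Returns "halted" if the program stops within the budget, else "unknown".
    Halting can always be semi-decided this way; what is noncomputable is
    deciding it with one uniform procedure for *all* programs and inputs.
    """
    for _ in range(max_steps):
        state = program(state)
        if state is None:      # the program signaled that it halted
            return "halted"
    return "unknown"           # budget exhausted: no conclusion either way

def countdown(n):              # halts: counts down to zero, then stops
    return None if n == 0 else n - 1

def loop(n):                   # never halts: the state never changes
    return n

print(run_bounded(countdown, 10, 100))  # halted
print(run_bounded(loop, 0, 100))        # unknown
```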
