
Simulation book does not compute (Ch 2)

This is a continuation of my review of “The Relativistic Brain: How it works and why it cannot be simulated by a Turing machine” by Miguel Nicolelis and Ronald Cicurel. In my last post, I discussed how chapter one emphasizes that the brain’s approach to computation is unlike that of today’s digital computers. In chapter two, the authors begin to explore the mechanics of what they call the “relativistic brain theory”. They introduce it this way:

“According to the relativistic brain theory, complex central nervous systems like ours generate, process, and store information through the recursive interaction of a hybrid digital-analog computation engine (HDACE). In the HDACE, the digital component is defined by the spikes produced by neural networks distributed all over the brain, whereas the analog component is represented by the superimposition of time-varying, neuronal electromagnetic fields (NEMFs), generated by the flow of neuronal electrical signals through the multitude of local and distributed loops of white matter that exist in the mammalian brain.”

The reason for using the term “relativistic” does not appear to be explicitly stated anywhere in this or later chapters. I believe it refers to the mutually dependent (i.e. relative) behavior of individual cells and groups of cells, as opposed to a sequential effect from one hierarchy to another. They state that the electrical activity of neurons generates neural electromagnetic fields (NEMFs), which are more commonly known as local field potentials. Ironically, they actually reference a modeling paper on the effects of field potentials: Anastassiou et al. (2010) The effect of spatially inhomogeneous extracellular electric fields on neurons. J Neurosci 30(5): 1925-1936. However, they make no mention of the actual computer simulation methods in the paper! (Incidentally, there is a later, important review article which they did not mention: Buzsáki, Anastassiou and Koch (2012) The origin of extracellular fields and currents — EEG, ECoG, LFP and spikes. Nat Rev Neurosci 13(6): 407-420.)

Nicolelis and Cicurel emphasize that the brain utilizes a highly distributed approach, as opposed to a centralized approach. In a centralized approach, which might be typical of an engineered solution on a computer, one might expect that individual neurons would strictly identify specific representations such as mental concepts or perceptions. The implication is that a digital computer cannot function in this way. They claim that this distributed approach depends on the NEMF. They state that:

“The relativistic brain theory also provides for a biological mechanism that can generate useful abstractions and generalizations quickly, something that a digital system would expend a lot of time trying to mimic.”

They also point out that:

“NEMFs that account for a given behavioral outcome can be generated by different combinations of neuronal elements, at different moments in time….”

The implication through all of this, with no substantial argument given, is that such mechanisms are not possible using digital computers, particularly Turing machines. They go as far as saying:

“…the relativistic brain theory predicts that key non-computable processes like perception, mental imagery, and memory recall occur on the analog domain, thanks to the emergence of analog computational engines formed by time-varying NEMFs.”

Initially, I was excited about this chapter because there has not been much work on modeling the effects of local field potentials on computation, compared to other modeling applications. However, I was disappointed that the chapter’s primary reasoning seems to be simply that digital computers cannot replicate analog computational mechanisms. The authors never address the widely used approach of describing analog dynamics with differential equations and approximating them by numerical integration.
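To make that point concrete, here is a minimal sketch (my own toy model, not the authors’ NEMF formulation) of how a time-varying analog quantity driven by digital spike events can be approximated on a digital computer with forward-Euler numerical integration. The equation, time constant, and spike statistics are all assumptions chosen purely for illustration:

import numpy as np

dt = 0.1      # time step in ms; shrinking dt tightens the digital approximation
tau = 10.0    # arbitrary decay time constant for the toy "field" (ms)
t = np.arange(0.0, 200.0, dt)

rng = np.random.default_rng(0)
spikes = (rng.random((len(t), 50)) < 0.02).astype(float)  # 50 toy neurons firing at random
weights = rng.normal(0.0, 1.0, 50)                        # each neuron's contribution to the field

field = np.zeros(len(t))
for i in range(1, len(t)):
    drive = spikes[i] @ weights                 # superimposed spike contributions at this step
    # dF/dt = (-F + drive) / tau, stepped forward in discrete time
    field[i] = field[i - 1] + dt * (-field[i - 1] + drive) / tau

Nothing here captures real NEMF physics, of course; the point is only that “analog” dynamics expressed as differential equations are routinely simulated this way, to arbitrary accuracy within numerical precision, and the book never explains why that standard approach is supposed to fail.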

It seems to me that the only real conclusion they can draw is that we don’t yet understand how the brain uses the analog mechanism to compute. In later chapters, they attempt to use mathematical arguments to attack the use of a Turing machine for brain simulation. For now, they merely throw down the gauntlet by stating that “processes like perception, mental imagery, and memory recall” are non-computable.

I am surprised that they would assert that perception and memory are non-computable, as these are such general functions with incredibly long histories in modeling. I was willing to entertain the argument that the brain’s physical mechanisms for such processes are so difficult to understand that modern numerical methods might not be sufficient. It was intriguing to think that they might try to explain the mechanism by which NEMFs are utilized in such functions. By the end, I was still left wondering why a Turing machine is not capable of simulating a mathematical model of an analog mechanism such as NEMFs.

Simulation book does not compute (Ch 1)

I’m reading the recently published book “The Relativistic Brain: How it works and why it cannot be simulated by a Turing machine” by Miguel Nicolelis and Ronald Cicurel. Good news: it only cost me $1.50 for the Kindle edition. Bad news: the arguments for their anti-simulation stance seem as poor as their archrival Ray Kurzweil‘s arguments about his singularity theory. I think most real computational neuroscientists agree that Kurzweil’s singularity isn’t coming anytime soon. However, Nicolelis and Cicurel go to the opposite extreme, claiming that simulating a “real” brain is absolutely impossible. I was shocked that, right out of the gate, they seem to discount the entire field of computational neuroscience! I doubt that they really think computational neuroscience is useless overall. Without having finished the book, I am still hoping they are just overly enthusiastic participants in the more practical debate over whether an exact replication of the human brain is achievable.

It’s an enticing read because it has gotten more and more shocking as I keep going! Ready? Here we go… I was surprised from the start with their self-advertising about their brain-machine-interface (BMI) exoskeleton project (which was hyped up for the opening ceremony of the men’s 2014 soccer World Cup). In the field of BMI, that work is certainly important. But what does this have to do with simulating brains on computers? Is it supposed to give the authors credibility? To me, it emphasizes that they haven’t spent their time actually trying to simulate brains. More importantly, it leaves the reader wondering whether they even know anything about the field of computational neuroscience.

The primary goal in Ch 1 seems to be to impress upon the reader that the brain is highly complex, to the point of being unpredictable. For example, they state that “some evidence suggests that the same combination of neurons is never repeated to produce the same movement.” They discuss what they call the “context principle” in which the brain’s actions depend on its own, internal state. There is nothing here that is particularly different from a computer-controlled robot which can adapt to its changing environment. They seem to be implying that the brain’s internal point of view is something we still don’t understand – and that consciousness would have to be explained at the conceptual level (as opposed to just a physical level) in order to be simulated. They go on to emphasize the wonder of plasticity by stating:

But how could a brain formed by such vast networks of intertwined neurons reshape itself so quickly, literally from moment to moment, throughout one’s entire lifetime, to adjust its internal point of view, which it uses to scrutinize any new piece of world information it encounters? That exquisite property, which creates a profound and unassailable chasm between the mammalian brain and any digital computer, defines the plasticity principle….

They provide no clear argument yet about why plasticity is an “unassailable chasm”. Plasticity is quite easy to achieve in simulations.
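For example, here is a toy Hebbian learning rule (my own illustration, not a model of any specific biological mechanism) in which the connection weights reshape themselves on every time step as a function of ongoing activity:

import numpy as np

rng = np.random.default_rng(1)
n = 20
w = rng.normal(0.0, 0.1, (n, n))   # synaptic weights
x = rng.random(n)                  # current activity of the n units
eta = 0.01                         # learning rate

for step in range(1000):
    x = np.tanh(w @ x + 0.1 * rng.normal(size=n))  # activity evolves through the network
    w += eta * np.outer(x, x)                      # Hebbian update: co-active units strengthen
    w *= 0.999                                     # mild decay keeps the weights bounded

Whether rules like this capture what real synapses do is a separate (and hard) question, but continuous, lifelong rewiring is not in itself a chasm for digital simulation.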

One other comment on their statement above: Why do they bother saying “mammalian brain”? Does this mean that their arguments will still be safe once C. elegans is considered to have been successfully simulated? (To read about C. elegans, try this article by Ferris Jabr, which I commented on in a previous post.) They have no discussion at all about the incremental progression required for understanding (and simulating) increasing levels of complexity in the nervous systems of different animals. It’s a long road from C. elegans to a mammalian brain. No doubt about it. But does that mean it is impossible? In my opinion, their argument in Ch 1 boils down to the age-old fallacy that anything we don’t understand now will never be understood in the future.

The next chapter of the book covers the authors’ “relativistic brain theory” and what they call “neural electromagnetic fields”. It actually provides what I think are some concrete ideas on which they might build a plausible argument, if there really is one. I will cover that in a future post.

Unintelligent Arguments Against Intelligent Weapons

After my last post on ethics intelligence, I found out about a public statement against autonomous weapons that has been endorsed by Stephen Hawking, Elon Musk, and Steve Wozniak, among many other notable personalities. Written by the Future of Life Institute, it is titled “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”.

First I will say that I do NOT disagree with the idea of limiting warfare, and I value the importance of self-limiting our ability to kill. However, I strongly disagree with the reasoning used in the letter, and it strikes me as more of a self-serving public relations ploy than a rational attempt to protect human life. The letter has many disturbing statements, so I will address them one by one.

At the beginning, I feel they confusingly equate the term “AI” with the concept of “autonomous”. This is a significant mistake. The letter begins with this:

Autonomous weapons select and engage targets without human intervention.

This definition has no dependence on AI, nor does it cover the use of AI technology to assist a human who authorizes the killing. However, throughout the letter they seem to imply that they oppose the development of any weapon that incorporates AI. They go on to state:

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

It’s not clear to me whether “Kalashnikovs” refers to assault weapons like the AK-47 (the most likely meaning) or the inventor Mikhail Kalashnikov. In either case, it seems ignorant to think that the U.S.A. and other first-world countries do not already have technology that meets their definition of autonomous weapons, regardless of whether such weapons are being used. More importantly, what makes AI-enabled weapons so much more dangerous than a weapon that has auto-firing capabilities? Why aren’t they opposing the use of assault weapons? I think the answer may come later in the letter, so moving on… They next state:

Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.

Again, how is this different from assault weapons that we already have today? I would argue that it is actually easier to make or buy an AK-47 and get a human to use it than an AI-controlled device with the same firepower. Unfortunately, it is humans that are a dime a dozen. Certainly first-world countries are the most capable of building such devices, but it seems ignorant to claim that the potential harm from AI-controlled rifles is anything like the harm of nuclear weapons.

So why are they so concerned about AI-weapons, as opposed to automated killing in general? Perhaps it is explained here:

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.

Notice that the term “AI weapons” is used, as opposed to “autonomous weapons”. This is a serious error, in my opinion. There are a few more problems with the above statement. First, it glosses over the real issues with chemical weapons: (a) they are weapons of mass destruction, and (b) they are grossly inhumane in the physical trauma they inflict. Neither of these characteristics has anything to do with “AI weapons”, or even “autonomous weapons” for that matter.

A second problem is that the statement equates the primary physical weapon (gas/chemical vs. bullets) with other technology that is used to enhance the weapon. These are two completely different aspects of a weapon. Thirdly, they seem to think that using AI in a weapon is critically different from using any other technology. Why doesn’t the IEEE oppose using electronics in weapons? Fourthly, what about potential positive uses of AI in weapons? Would they oppose using an AI algorithm that helps a soldier recognize, from a person’s face, that they are an innocent civilian rather than the intended military target? Their argument fails mostly because they are using the term “AI” to mean “autonomous”. Finally, we see that the authors are afraid that AI weapons will bring bad publicity for the AI world. The larger issues of ethics and human welfare are important in the debate they address. The matter of public relations, however, is irrelevant and may reveal one of the significant motivations behind the public statement.

As in my last post, I seem to be straying from the neuroscience. Never fear – it’s here… I still claim, as before, that machines are more likely to be better at making important, complex decisions. But what about brain-controlled interfaces (BCI) for weapons? I would argue that those are the real equivalent of the “Kalashnikovs”. They raise the automated killing power of a single human to the next level. Making it easier for humans to kill is the most significant threat, and arguing about AI is ridiculous. This open letter by the Future of Life Institute seems to really miss the target.

Ethical Intelligence

At least three recent events have coincided around the debate over whether artificial intelligence, especially in robots, can handle ethical questions. This is nothing new. Isaac Asimov’s 1942 story “Runaround” introduced Asimov’s famous three laws of robotics. The first interesting event is that Runaround was set in the year 2015! I didn’t remember that, but it was pointed out by a new Nature article by Boer Deng entitled “Machine ethics: The robot’s dilemma”. That’s the second recent event. I thought about writing this post when I saw Deng’s article, but I didn’t have much to say until the third recent event, which was an article in, of all things, the Costco Connection entitled “Is AI a good thing?”. They took a poll, by the way, and 36% of respondents answered “no”!

What really got my attention in the Costco Connection was an anti-AI editorial by James Barrat. I can’t figure out if Barrat legitimately believes what he says, because his logic is idiotically outdated. For example, he says in the article, “It [AI] has the potential to threaten us with intelligent weapons, take virtually all of our jobs and, ultimately, cause our extinction.” Well, sure. So do all stupid weapons threaten to kill us, all technology threatens to take our jobs, and our centuries-old industrialization threatens to wipe us out. There is nothing particularly unique about AI in this regard. I am suspicious that he seeks to lead his own anti-AI bandwagon because the pro-AI bandwagon is too full, and he needs people to buy his books. The pro-AI crowd can probably help him better by citing him as the negative example for the sake of argument.

Both sides of the aisle seem to confuse the issue of technology with the separate issue of philosophy. I do agree that teaching/programming ethics into AI will be challenging. However, I disagree with the claim by Barrat-like folks that humans are actually any better at it than machines right now. Where humans clearly agree on right and wrong, the programming is straightforward. The Nature article by Boer Deng mentions a discussion at the Brookings Institution that involved questions such as these:

“What if a vehicle’s efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?”

Humans have been debating this issue since long before Google announced its plans to build an autonomous car. It’s known as the Trolley Problem. So it’s not really the logic of the rules that is the problem. Personally, I side with the pro-AI crowd that believes AI is likely to be better than humans in all areas (cars, weapons, etc.) simply because, as others have noted, machines can be more consistent and transparent.

So what about the neuroscience? I think this blog is supposed to cover that topic, right? There is an interesting aspect to the issue of ethics intelligence, and it involves the parallel physiology of the brain. Traditionally, expert systems, including an “ethics” program, are thought of as if/then hierarchies of logic. However, animals and humans learn to make decisions without such computation. From a mathematical view, there is a probabilistic nature to it. Krešimir Josić has a nice post on Bayesian Inference and how the brain seems to employ this approach. He has also done research on this himself, as described in this post.

So how does the brain do such computation? The biological neural network of the brain is massively parallel, able to encode complex computations. It is ideal for Bayesian inference because it is trained over time through observation, always adjusting to statistical data. The “rules” are there, but they are probabilistic, not logical. I feel that building ethical AI is a useful challenge because it forces us to look more deeply at the physiology and dynamics behind our own ethics. This will only help us understand it better.
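As a concrete (and deliberately simplistic) illustration with made-up numbers, here is the kind of probabilistic update I mean, applied to one of the driving questions quoted above: is the object ahead a child or just debris? None of this is anyone’s actual ethics module; the priors and likelihoods are assumptions for the example only.

# Minimal Bayesian update with hypothetical numbers
prior = {"child": 0.1, "debris": 0.9}          # assumed base rates
likelihood = {"child": 0.8, "debris": 0.1}     # P(sensor sees a small moving shape | hypothesis)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)   # roughly {'child': 0.47, 'debris': 0.53} -- the evidence shifts the belief, it doesn't dictate an answer

There is no if/then ladder here; the “rule” is simply how the probabilities get revised, which is much closer to how trained networks, biological or artificial, seem to behave.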

Inside Out: Bad News for Neuroscience

I just saw the movie Inside Out. It is receiving praise from psychologists, but it is also being criticized by others regarding the actual neuroscience. Most of the criticism is about the physical portrayal of memory. The movie uses a combination of actual physical constructs mixed with anthropomorphic characters that embody emotional and cognitive functions. There isn’t much criticism of the abstract homunculi that are used to portray the emotions, and that is partly because the approach sidesteps any need to depict the physiology. Psychologist reviews are plentiful, such as this Business Insider interview with psychologist Nathaniel Herr.

Basically, therapists and psychologists find Inside Out to be helpful. Unfortunately, neuroscience is not likely to find any help here. Neuroscientists are quick to jump on the inaccuracies in the movie about memory, and there are many. For a short, mild critique, read the one from neuroscientist Heather McKellar. Or try a more detailed critique by Antonia Peacocke and Jackson Kernion, two PhD students in philosophy, who do a fairly good job of addressing the neuroscience. I’m surprised the movie attempted a highly concrete explanation of memory, and it’s not clear to me why they wanted to try it. The only advantage I see is that they were able to give some sense of the extreme capacity of the brain. Clearly their goal wasn’t to inspire or teach people about memory, however.

I am surprised that there is less discussion out there about the “islands of personality”, which was another physical construct in the movie. These “islands” were unique constructions that combined aspects of memory and function together. Though still somewhat abstract, they are an interesting way to represent complex distributed neural networks. There are some definite failings in the analogy though, such as the fact that they are literally islands and only connect to a central headquarters, and also the way in which an entire “island” can be incapacitated. Still, I liked the idea of trying to visualize the physiology of what makes each person unique.

It’s too bad that the movie isn’t suited to promoting interest in neuroscience. Not only does it get the science wrong, but it even portrays the low-level concepts as boring. This was done through a scene in which Joy, the group leader, tells Sadness to study the manuals that explain how the brain works. Joy appears to suggest that it would be fun to read the manuals, but since she is doing it as a form of control and doesn’t have any real interest herself, it is clear to the audience that only a misfit freak would want to learn such things. There is only one apparent attempt in the movie to ascribe any value to the knowledge in the manuals. That is when Sadness offers some hope of being able to help Joy find her way out of the labyrinth of memory banks. However, even that knowledge is depicted with disdain rather than being glorified. Sadness recites a series of left/right turns in a monotone voice, and the plan is abandoned altogether when Joy finds another character (Bing Bong) to act as a personal guide instead.

Finally, after the real science is ignored or belittled, my own Sadness homunculus is also crying about how the general public doesn’t even know what part is the “science” and what part isn’t. I saw this article that claims to explain “How Inside Out Nailed The Science Of Kids’ Emotions”. It’s sad that people think the movie is actually about science and that the science in there is actually correct. There is a long history to the argument over whether psychology is a science. To me, the saddest part about Inside Out is that the audience may walk away thinking they finally understand how the brain really works. Clearly one goal of the movie was to help people understand their emotions better, and I applaud the attempt to address this in a way that promotes useful therapy and psychological treatment. However, anyone who might want to understand the real, physical reasons behind joy, anger, sadness, and fear will not find any inspiration in this movie.

Scientific brutality: animals or humans?

There is a new editorial in Nature Neuroscience titled “Inhumane treatment of nonhuman primate researchers”. I have titled this post “Scientific Brutality” because of the parallels to the highly active controversy we have today regarding police brutality. Here in Cleveland, Ohio, there is outrage over multiple cases of alleged police brutality, and the potential for violent protest has induced fear in our communities. One similarity in the battle between animal rights activists and animal researchers is the potential for extremist acts of violence toward researchers. There are many stories of death threats and other extreme acts by activists, including threats to actual patients (not researchers), such as the 2013 story of Caterina Simonsen.

The editorial in Nature Neuroscience discusses a unique case regarding Nikos Logothetis at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. What is unique is that Logothetis seems to have conceded a victory to activists by declaring that he will cease using non-human primates for research and will transition to using rodents instead. He has not changed his reasoning or ethical position, however. He simply does not want the hassle anymore.

Logothetis’s declaration letter seems not to be public yet, but he did make available a letter that responded to recent accusations of animal cruelty. (The letter is available here, and an article that links to it is available here.) In the rest of this post, I will comment on that letter. Let me first disclose that I have personally killed animals for research. I was required to be educated on humane euthanasia methods as well as my legal requirements to follow committee-approved policies for ethical treatment of the animals. I used rats for the purpose of studying how the brain controls respiration. This involved fully anesthetizing an animal before removing its brain, thus killing the animal. Like all researchers (hopefully), I consider unnecessary pain and distress to be unethical. I suspect Logothetis is opposed to animal cruelty, but I do not think his letter makes this clear.

Now I will explain what provoked the letter. A caregiver in the facility provided video footage to animal rights groups (BUAV and Soko-Tierschutz), who published a report and video. WARNING: it’s a disturbing video. They allege several abuses: severe water deprivation, bleeding head implants, infections, restrained monkeys that appear to be extremely distressed, and a case in which a monkey is violently pulled by a collar around its neck. Logothetis’s letter addresses all but the last incident, and he asserts that the video is intentionally misleading. Regarding water deprivation, he states that the level of deprivation is neither “distressing” nor “unpleasant”. Regarding the bleeding and infection, he claims that such incidents are very rare and that the infection was an isolated case in which they were required to attempt medical treatment before euthanizing the animal. He does acknowledge that post-operative care could be improved to further reduce such incidents. Regarding the apparently distressed animal, he asserts that the behavior was “almost certainly induced intentionally by the caregiver.” Oddly, he did not address the footage of the violent removal of a monkey from a cage, which I personally found very disturbing.

The primary issue that concerns me about the letter is what seems like an attempt to be philosophically superior rather than just sticking to the facts. The philosophy of what constitutes animal cruelty is a separate topic from that of the facts about laboratory conduct. A lack of separation between these two topics seems to be a source of confusion in many such debates.

For example, the letter suggests that financial support of animal rights is unreasonable when humans are suffering in the world. The letter contains a paragraph that begins with this:

“Donations to organizations such as BUAV or SOKO might sooth the conscience of animal lovers, but are the activities of antivivisectionists appropriate and reasonable in today’s world?”

He then delves into the tragedies of human hunger and poor sanitation in the world, even citing a decision in China to avoid establishing “strict regulations in an animal welfare law” because monkeys might receive better treatment than the humans. As another example of mixing philosophy with facts, he closes his letter with this:

“What society can ignore human suffering to promote the welfare of mice? If the ultimate benefit of patients is not considered a greater good, then we should indeed stop science and research.”

The activists are not suggesting that we should “ignore human suffering to promote the welfare of mice”. Again, I think Logothetis is confusing the debate, as opposed to clearing things up. The editorial I mentioned states,

“We are not trivializing the ethics of animal use in research. In fact, this is an issue of great concern to neuroscientists.”

The editorial also mentions a need to educate the public. Perhaps the “great concern” of neuroscientists is not so obvious to the activists. Clearly, there are plenty of animal rights activists that do not support hateful or violent acts towards researchers. They are not stupid, and they are not philosophically inferior either. It is important to deal with violent extremists, but scientists will need to help others to really see the “great concern”.

Hypothesis-free big science: is it good for you?

Sugar-free food is often marketed as being healthier. What about hypothesis-free research? “Consorting with big science” is the title of a 2014 editorial in Nature Neuroscience that praises the advantages of large-scale research collaborations in the form of consortiums.

It continues the ongoing debate about how best to take advantage of limited research funding in the midst of huge challenges in neuroscience. The usual arguments are proposed, such as the advantage of pooling funds and the inefficiency of smaller, competing laboratories that duplicate the same work instead of complementing each other. The editorial highlights a concept that is important in viewing large-scale collaborations: hypothesis-free.

Technically, the editorial uses the term “hypothesis-free data”, which it further characterizes as “unbiased”. Data biasing is an important topic in itself, but the author is really referring to data that is collected without a specific objective of benefiting one particular laboratory. The piece emphasizes the value of churning out large data sets, but it doesn’t address the risks of generating data for the sake of data. In that scenario, progress may be evaluated in terms of quantity, not quality. Krešimir Josić had an interesting post titled “Can science become too big to fail?”. In a later comment regarding Obama’s brain project, he stated, “And if the goals are not clearly defined, then it really is impossible to fail.”

And what about that data biasing I mentioned? Someone must decide which methods will produce the most useful results. A risk not addressed in the editorial is the incredible waste of a massive project built on flawed methods. Armies of scientific soldiers are still soldiers, not a committee, and they will march without much thought. A risk in big-data, or big-science, is that it may seek sudden, epic increases in scale, as opposed to gradual increases that allow processes to evolve. I have not seen clear evidence of plans to control that growth.

In my last post, I pointed out the controversies revolving around Henry Markram and his large-scale projects. Part of the controversy, like much of the global debate, concerns the methods and types of data that will be produced. One could say that Markram has a methodological hypothesis, and his detractors don’t agree with it. I feel that the real misconception in the philosophy of big-data is that the scientific world just needs the data – and tons of it. I disagree. What we need are methods that enable as many researchers as possible to collect their own data. Certainly a major effort in big-data is to develop new technologies to acquire the data, but not with the intent to let everyone use those technologies.

As a computational neuroscientist, I am happy to say that government-funded, publicly-accessible supercomputing facilities are recognized as a general tool that is worth major investment. I am not aware of any such government-funded infrastructure for private physical scientists who require expensive techniques. Having experience in an electrophysiology lab, I realize it would be challenging to have some sort of “public laboratory”. However, some neuroscience labs function as data factories already, employing dedicated technicians for every aspect. Smaller labs may be able to outsource work to other labs or use commercial companies, and I admit that I don’t know if this could be improved upon with a public laboratory.

To summarize my points, I feel that the focus on creating big-data is a misguided concept. There is no truly “hypothesis-free data”. So let’s acknowledge that the methods really do matter, and that it would be better to focus on creating publicly accessible tools. Then the big data will come anyway, and it is likely to be far more explosive, quantitatively and conceptually, than anything we can conceive of.

