Vice President Joe Biden recently made some of the most intelligent comments on medical research that I have ever heard from a government official. He appeared on The Late Show on December 6, 2016. I posted a while ago about the controversial idea of “big science”.
Biden and others have popularized the term “moonshot” to inspire what might be done in cancer research if the nation were to consolidate its efforts. Interestingly, he didn’t focus on the consolidation of money, which is often cited as one of the big deficiencies in the way scientific research is administered. Surprisingly, he focused on data sharing!
This is equally relevant in neuroscience. On the experimental and clinical side, data sharing is a difficult issue to deal with because of challenges with file formatting, protocols, and patient privacy. On the computational side, the sharing of computer code can pose challenges in terms of intellectual property rights and the vulnerability of giving away the fruits of years of intensive effort.
Some of Biden’s statements that caught my attention are the following. He introduced the idea by saying:
“… if you’re prepared to share that data, which the culture of medicine isn’t prepared to do yet….”
He was surprisingly forward about what he sees as a problem. He tried to explain an important scientific concept regarding patient individuality by saying:
“A treatment works on one person and not another.”
He even touched on computational technology by discussing how IBM’s Watson is being used to analyze cancer data to “narrow the field exponentially”.
He summarized the issue by saying:
“The biggest thing is changing the culture of sharing the data… not hoarding it.”
I point all this out because government officials (or politicians, as we usually call them) understandably tend to direct the discussion in terms that connect directly to laypeople. For example, cancer research is perhaps most appreciated by the public in terms of mortality rates and the devastation caused within families. It is easiest to frame the issue as a mystery that simply needs to be solved, and little effort is spent helping the public understand the underlying challenges. Most laypeople will just accept that science is “hard”, and going further may be seen as a waste of effort on the part of politicians.
Biden’s comments on the medical research culture are fascinating to me, partly because someone in his position is saying them on popular television, and partly because he articulated the issue so well. It was clear to me that he really has tried to understand the issues from the standpoint of someone who just wants to find a solution. I feel the impact on the cancer research community will be positive and will extend to all of scientific research.
A few pieces about consciousness appeared recently that I’d like to comment on. A blog post by Carson Chow discusses a New York Times opinion piece by Galen Strawson that scoffs at the age-old idea that consciousness is more than just a physical phenomenon. Being a philosopher, Strawson is content to let the argument about physics remain at the level of the physicist, as opposed to moving into neuroscience. I can accept that, but I don’t know that it’s helpful to those who aren’t philosophers. Since I’m an engineer and a theoretical neuroscientist, I believe neuroscience offers a more satisfying perspective.
That leads to a second, recent publication on this topic, by Christof Koch along with Giulio Tononi, both of whom are relatively famous scientists. (Well, in my world, they’re relatively famous.) If you don’t know, Koch, along with Francis Crick of DNA fame, declared his obsession with consciousness in a 2003 article casually named “A framework for consciousness”.
Koch also published a book in 2012 titled Consciousness: Confessions of a Romantic Reductionist. In between, and since then, he has been trying to tackle the issue of consciousness using science, which I personally prefer over the philosophical approach. One reason why I like Koch’s approach is that he is clear about establishing a working definition of the term “consciousness” that can actually be used to move forward, as opposed to spending all of one’s time arguing about vague or abstract definitions that can’t be used in a scientific experiment.
Koch’s 2016 review article gives this initial definition to start with:
Being conscious means that one is having an experience….
Note that we are not talking about sentience here. In his research, his goal is to discover what he calls the “neural correlates of consciousness”, which he defines as:
The minimum neural mechanisms jointly sufficient for any one specific conscious experience.
He points out that the review article focuses on visual and auditory studies, which is important because other sensory experiences may involve different correlates. He offers some references on metacognition and on bodily, tactile, and olfactory experiences.
The article criticizes two proposed markers of consciousness: gamma-frequency synchrony and a fronto-parietal evoked response called the P3b. I honestly don’t know much about this area, so I can’t speak to the significance of the criticism. It is interesting to me that he basically concludes that the primary marker that still has value, despite its limitations, is the desynchronized electroencephalogram (EEG). Interestingly, the article describes REM sleep and dreaming, which have an EEG very similar to that of the awake state, as also being a form of consciousness. I had not heard this before.
After a hasty read of Koch’s paper, it seems to me the most important argument the authors make is that the higher-level areas of the frontal lobe are not essential components of consciousness. They find that certain “hot spots” in sensory areas are essential, which is not surprising, considering that their basic definition of consciousness depends on sensory experience. They point out that sensory integration, including relevant brainstem areas, is also essential. I suppose the importance of the brainstem is also an argument against those who view consciousness exclusively as a high-level function.
Finally, I will point out a third piece of interest (in addition to Chow’s blog post and Koch et al. 2016). Chow’s post criticizes a pathetic essay on https://aeon.co that I won’t bother discussing here. However, I discovered a different essay on the Aeon site that is genuinely relevant to Koch’s scientific pursuit of consciousness. It is a piece titled “Bring them back” by Joseph J Fins, professor of medicine at Weill Cornell Medical College in New York.
Fins discusses the category of a “minimally conscious state” (MCS), which he points out is a “new diagnostic category that came into the medical literature in 2002.” He goes on to discuss the case of Terry Wallis, who was considered to be unconscious or vegetative for 19 years before suddenly speaking and interacting. Fins argues that there is actually a spectrum of consciousness, with the locked-in state at the far end of the spectrum. He also describes how patients may transition back and forth between moments of consciousness and unconsciousness.
It is interesting to me that consciousness is so strongly coupled to our sense of time. A 2006 article by Helen Phillips in New Scientist claims that Wallis thought it was still the same year as when he had the accident. This seems to be the norm for people who wake from comas. However, I am surprised that a partially conscious person would not have some sense of time elapsing. Maybe some day I’ll have time to explore this.
This is a continuation of my review of “The Relativistic Brain: How it works and why it cannot be simulated by a Turing machine” by Miguel Nicolelis and Ronald Cicurel. In my last post, I discussed how chapter one emphasizes that the brain’s approach to computation is unlike that of today’s digital computers. In chapter two, the authors begin to explore the mechanics of what they call the “relativistic brain theory”. They introduce it this way:
“According to the relativistic brain theory, complex central nervous systems like ours generate, process, and store information through the recursive interaction of a hybrid digital-analog computation engine (HDACE). In the HDACE, the digital component is defined by the spikes produced by neural networks distributed all over the brain, whereas the analog component is represented by the superimposition of time-varying, neuronal electromagnetic fields (NEMFs), generated by the flow of neuronal electrical signals through the multitude of local and distributed loops of white matter that exist in the mammalian brain.”
The reason for using the term “relativistic” does not appear to be explicitly stated anywhere in this or later chapters. I believe it refers to the mutually dependent (i.e., relative) behavior of individual cells and groups of cells, as opposed to a sequential effect from one level of a hierarchy to the next. They state that the electrical activity of neurons generates neural electromagnetic fields (NEMFs), which are more commonly known as local field potentials. Ironically, they actually reference a modeling paper on the effects of field potentials: Anastassiou et al. (2010) The effect of spatially inhomogeneous extracellular electric fields on neurons. J Neurosci 30(5): 1925-1936. However, they make no mention of the actual computer simulation methods in the paper! (Incidentally, there is a later, important review article which they did not mention: Buzsáki, Anastassiou and Koch (2012) The origin of extracellular fields and currents — EEG, ECoG, LFP and spikes. Nat Rev Neurosci 13(6): 407-420.)
Nicolelis and Cicurel emphasize that the brain utilizes a highly distributed approach, as opposed to a centralized approach. In a centralized approach, which might be typical of an engineered solution on a computer, one might expect that individual neurons would strictly identify specific representations such as mental concepts or perceptions. The implication is that a digital computer cannot function in this way. They claim that this distributed approach depends on the NEMF. They state that:
“The relativistic brain theory also provides for a biological mechanism that can generate useful abstractions and generalizations quickly, something that a digital system would expend a lot of time trying to mimic.”
They also point out that:
“NEMFs that account for a given behavioral outcome can be generated by different combinations of neuronal elements, at different moments in time….”
The implication throughout all of this, with no substantial argument given, is that such mechanisms are not possible using digital computers, particularly Turing machines. They go so far as to say:
“…the relativistic brain theory predicts that key non-computable processes like perception, mental imagery, and memory recall occur on the analog domain, thanks to the emergence of analog computational engines formed by time-varying NEMFs.”
Initially, I was excited about this chapter because there has not been much work on modeling the effects of local field potentials on computation, compared with other modeling applications. However, I was disappointed that the primary reasoning in the chapter seems to be that digital computers cannot replicate analog computation mechanisms. The authors do not address the widely used approach of differential equations and numerical integration.
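To make that omission concrete, here is a minimal sketch (my own, not from the book) of the standard approach: a continuous, analog membrane mechanism is written as a differential equation and approximated on a digital computer by numerical integration. The leaky integrate-and-fire model and all parameter values are illustrative choices.

```python
# A minimal sketch, not from the book: forward-Euler integration of a
# leaky integrate-and-fire neuron. The analog mechanism is the continuous
# membrane equation dV/dt = (-(V - V_rest) + R*I) / tau; a digital
# computer approximates it by stepping through time in small increments.
# All parameter values are illustrative.

tau = 20.0       # membrane time constant (ms)
V_rest = -65.0   # resting potential (mV)
R = 10.0         # membrane resistance (MOhm)
V_thresh = -50.0 # spike threshold (mV)
V_reset = -70.0  # post-spike reset (mV)
dt = 0.1         # integration time step (ms)

V = V_rest
spike_times = []
for step in range(int(500 / dt)):            # simulate 500 ms
    t = step * dt
    I = 2.0 if 100 <= t < 400 else 0.0       # 2 nA pulse from 100-400 ms
    V += dt * (-(V - V_rest) + R * I) / tau  # Euler step of the analog dynamics
    if V >= V_thresh:                        # threshold crossing produces a spike
        spike_times.append(round(t, 1))
        V = V_reset

print(f"{len(spike_times)} spikes; first few at (ms): {spike_times[:5]}")
```

The same numerical machinery scales up to field equations, so the burden is on the authors to explain why NEMF dynamics, once written down mathematically, could not be approximated this way to any desired precision.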
It seems to me that the only real conclusion they can draw is that we don’t yet understand how the brain uses the analog mechanism to compute. In later chapters, they attempt to use mathematical arguments to attack the use of a Turing machine for brain simulation. For now, they merely throw down the gauntlet by stating that “processes like perception, mental imagery, and memory recall” are non-computable.
I am surprised that they would assert that perception and memory are non-computable, as these are such general functions with incredibly long histories in modeling. I was willing to entertain the argument that the brain’s physical mechanisms for such processes are so difficult to understand that modern numerical methods might not be sufficient. It was intriguing to think that they might try to explain the mechanism by which NEMFs are utilized in such functions. By the end, I was still left wondering why a Turing machine is not capable of simulating a mathematical model of an analog mechanism such as NEMFs.
I’m reading the recently published book “The Relativistic Brain: How it works and why it cannot be simulated by a Turing machine” by Miguel Nicolelis and Ronald Cicurel. Good news: it only cost me $1.50 for the Kindle edition. Bad news: the arguments for their anti-simulation stance seem as poor as their archrival Ray Kurzweil’s arguments about his singularity theory. I think most real computational neuroscientists agree that Kurzweil’s singularity isn’t coming anytime soon. However, Nicolelis and Cicurel go to the opposite extreme, claiming that simulating a “real” brain is absolutely impossible. I was shocked that, right out of the gate, they seem to discount the entire field of computational neuroscience! I doubt they really think computational neuroscience is useless overall. Without having finished the book, I am still hoping they are just overly enthusiastic about the more practical debate over exact replication of the human brain.
It’s an enticing read because it has gotten more and more shocking as I keep going! Ready? Here we go… I was surprised from the start with their self-advertising about their brain-machine-interface (BMI) exoskeleton project (which was hyped up for the opening ceremony of the men’s 2014 soccer World Cup). In the field of BMI, that work is certainly important. But what does this have to do with simulating brains on computers? Is it supposed to give the authors credibility? To me, it emphasizes that they haven’t spent their time actually trying to simulate brains. More importantly, it leaves the reader wondering whether they even know anything about the field of computational neuroscience.
The primary goal in Ch 1 seems to be to impress upon the reader that the brain is highly complex, to the point of being unpredictable. For example, they state that “some evidence suggests that the same combination of neurons is never repeated to produce the same movement.” They discuss what they call the “context principle” in which the brain’s actions depend on its own, internal state. There is nothing here that is particularly different from a computer-controlled robot which can adapt to its changing environment. They seem to be implying that the brain’s internal point of view is something we still don’t understand – and that consciousness would have to be explained at the conceptual level (as opposed to just a physical level) in order to be simulated. They go on to emphasize the wonder of plasticity by stating:
But how could a brain formed by such vast networks of intertwined neurons reshape itself so quickly, literally from moment to moment, throughout one’s entire lifetime, to adjust its internal point of view, which it uses to scrutinize any new piece of world information it encounters? That exquisite property, which creates a profound and unassailable chasm between the mammalian brain and any digital computer, defines the plasticity principle….
They provide no clear argument yet about why plasticity is an “unassailable chasm”. Plasticity is quite easy to achieve in simulations.
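For example, here is a minimal sketch of Hebbian plasticity, the textbook rule in which connections strengthen when pre- and postsynaptic units are active together. The network size, learning rate, and inputs are arbitrary choices of mine, intended only to show how little code it takes for a simulated network to reshape itself from moment to moment.

```python
import numpy as np

# A minimal sketch of Hebbian plasticity ("cells that fire together wire
# together"). Every input presentation reshapes the weight matrix, so the
# simulated network adjusts its "internal point of view" continuously.
# Sizes, rates, and inputs are arbitrary illustrative choices.

rng = np.random.default_rng(0)
n_inputs, n_outputs = 8, 4
W = rng.normal(0.0, 0.1, size=(n_outputs, n_inputs))  # synaptic weights
eta = 0.05                                            # learning rate

for _ in range(1000):
    x = (rng.random(n_inputs) < 0.3).astype(float)  # random presynaptic activity
    y = np.tanh(W @ x)                              # postsynaptic response
    W += eta * np.outer(y, x)                       # Hebbian weight update
    W *= 0.99                                       # mild decay keeps weights bounded

print("final weights:\n", np.round(W, 2))
```

Whether rules this simple capture everything biological plasticity does is a fair question, but continuous rewiring per se is no chasm for a digital machine.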
One other comment on their statement above: Why do they bother saying “mammalian brain”? Does this mean their arguments will still be safe once C. elegans is considered to have been successfully simulated? (To read about C. elegans, try this article by Ferris Jabr, which I commented on in a previous post.) They have no discussion at all of the incremental progression required for understanding (and simulating) increasing levels of complexity in the nervous systems of different animals. It’s a long road from C. elegans to a mammalian brain. No doubt about it. But does that mean it is impossible? In my opinion, their argument in Ch 1 boils down to the age-old fallacy that anything we don’t understand now will never be understood in the future.
The next chapter of the book covers the authors’ “relativistic brain theory” and what they call “neural electromagnetic fields”. It actually provides what I think are some concrete ideas on which they might build a plausible argument, if there really is one. I will cover that in a future post.
After my last post on ethics intelligence, I found out about a public statement against autonomous weapons that has been endorsed by Stephen Hawking, Elon Musk, and Steve Wozniak, among many other notable personalities. Written by the Future of Life Institute, it is titled “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”.
First I will say that I do NOT disagree with the idea of limiting warfare, and I value the importance of self-limiting our ability to kill. However, I strongly disagree with the reasoning used in the letter, and it strikes me as more of a self-serving public relations ploy than a rational attempt to protect human life. The letter has many disturbing statements, so I will address them one by one.
At the beginning, I feel they confusingly equate the term “AI” with the concept of “autonomous”. This is a significant mistake. The letter begins with this:
Autonomous weapons select and engage targets without human intervention.
This has no dependence on AI, and it does not cover the use of AI technology to help a human authorize a killing. However, they seem to imply throughout the letter that they oppose the development of any weapon that incorporates AI. They go on to state:
If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.
It’s not clear to me whether “Kalashnikovs” refers to assault weapons like the AK-47 (the most likely meaning) or to the inventor Mikhail Kalashnikov. In either case, it seems ignorant to think that the U.S.A. and other first-world countries do not already have technology that meets their definition of autonomous weapons, regardless of whether such weapons are being used. More importantly, what makes AI-enabled weapons so much more dangerous than a weapon that has auto-firing capabilities? Why aren’t they opposing the use of assault weapons? I think the answer may come later in the letter, so moving on… They next state:
Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.
Again, how is this different from the assault weapons we already have today? I would argue that it is actually easier to make or buy an AK-47 and get a human to use it than an AI-controlled device with the same firepower. Unfortunately, it is humans who are a dime a dozen. Certainly first-world countries are the most capable of building such devices, but it seems ignorant to claim that the potential harm from AI-controlled rifles is anything like the harm of nuclear weapons.
So why are they so concerned about AI-weapons, as opposed to automated killing in general? Perhaps it is explained here:
Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.
Notice that the term “AI weapons” is used, as opposed to “autonomous weapons”. This is a serious error, in my opinion. There are a few more problems with the above statement. First, it does not clarify the real issues with chemical weapons, which are that: (a) they are a form of mass destruction, and (b) they are grossly inhumane (with regard to the physical trauma they cause). Neither of these characteristics has anything to do with “AI weapons”, or even “autonomous weapons” for that matter.
A second problem is that the statement equates the primary physical weapon (gas/chemical vs. bullets) with other technology used to enhance a weapon. These are two completely different aspects of a weapon. Thirdly, they seem to think that using AI in a weapon is critically different from using any other technology. Why do you think the IEEE doesn’t oppose using electronics in weapons? Fourthly, what about potential positive uses of AI in weapons? Would they oppose using an AI algorithm to help a soldier correctly identify a face as that of an innocent civilian rather than a military target? Their argument is flawed mostly because they are using the term “AI” to mean “autonomous”. Finally, we see that the authors are afraid that AI weapons will bring bad publicity to the AI world. The larger issues of ethics and human welfare are important in the debate they address. The matter of public relations, however, is irrelevant, and it may reveal one of the significant motivations behind the public statement.
As in my last post, I seem to be straying from the neuroscience. Never fear – it’s here… I still claim, as before, that machines are likely to be better at making important, complex decisions. But what about brain-controlled interfaces (BCIs) for weapons? I would argue that they are the real equivalent of the “Kalashnikovs”. They raise the automated killing power of a single human to the next level. Making it easier for humans to kill is the most significant threat, and arguing about AI is ridiculous. This open letter by the Future of Life Institute seems to really miss the target.
Some recent events (at least three) have coincided regarding the debate over whether artificial intelligence, especially in robots, can handle ethical questions. This is nothing new. Isaac Asimov’s 1942 story “Runaround” introduced his famous Three Laws of Robotics. The first interesting event is that “Runaround” was set in the year 2015! I didn’t remember that, but it was pointed out in a new Nature article by Boer Deng entitled “Machine ethics: The robot’s dilemma”. That’s the second recent event. I thought about writing this post when I saw Deng’s article, but I didn’t have much to say until the third recent event, which was an article in, of all things, the Costco Connection entitled “Is AI a good thing?”. They took a poll, by the way, and 36% of respondents answered “no”!
What really got my attention in the Costco Connection was an anti-AI editorial by James Barrat. I can’t figure out whether Barrat legitimately believes what he says, because his logic is outdated to the point of absurdity. For example, he says in the article, “It [AI] has the potential to threaten us with intelligent weapons, take virtually all of our jobs and, ultimately, cause our extinction.” Well, sure. Stupid weapons also threaten to kill us, all technology threatens to take our jobs, and our centuries-old industrialization threatens to wipe us out. There is nothing particularly unique about AI in this regard. I suspect that he seeks to lead his own anti-AI bandwagon because the pro-AI bandwagon is too full, and he needs people to buy his books. The pro-AI crowd can probably help him best by citing him as the negative example for the sake of argument.
Both sides of the aisle seem to confuse the issue of technology with the separate issue of philosophy. I do agree that teaching/programming ethics into AI will be challenging. However, I disagree with the claim by Barrat-like folks that humans are actually any better at it than machines right now. Where humans clearly agree on right and wrong, the programming is straightforward. The Nature article by Boer Deng mentions a discussion at the Brookings Institution that involved questions such as these:
“What if a vehicle’s efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?”
Humans were debating this issue long before Google announced its plans to build an autonomous car. It’s known as the Trolley Problem. So it’s not really the logic of the rules that is the problem. Personally, I side with the pro-AI crowd that believes AI is likely to be better than humans in all these areas (cars, weapons, etc.) simply because, as others have noted, machines can be more consistent and transparent.
So what about the neuroscience? I think this blog is supposed to cover that topic, right? There is an interesting aspect to the issue of ethics intelligence, and it involves the parallel physiology of the brain. Traditionally, expert systems, including an “ethics” program, are thought of as if/then hierarchies of logic. However, animals and humans learn to make decisions without such computation. From a mathematical view, there is a probabilistic nature to it. Kresimir Josic has a nice post on Bayesian Inference and how the brain seems to employ this approach. He has also done research on this himself, as described in this post.
So how does the brain do such computation? The biological neural network of the brain is massively parallel, able to encode complex computations. It is ideal for Bayesian inference because it is trained over time through observation, always adjusting to statistical data. The “rules” are there, but they are probabilistic, not logical. I feel that building ethical AI is a useful challenge because it forces us to look more deeply at the physiology and dynamics behind our own ethics. This will only help us understand it better.
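To make the probabilistic flavor concrete, here is a minimal sketch of sequential Bayesian updating of a binary belief from noisy observations; the hypothesis, prior, and likelihood values are invented for illustration.

```python
# A minimal sketch of sequential Bayesian inference: a belief about a
# binary hypothesis H (say, "this action is acceptable") is updated after
# each noisy observation. The prior and likelihoods are invented for this
# example; the point is that the "rules" are probabilistic weights tuned
# by data, not a fixed if/then hierarchy.

prior = 0.5        # initial belief P(H)
p_obs_H = 0.8      # P(supporting observation | H true)
p_obs_notH = 0.3   # P(supporting observation | H false)

observations = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = supporting evidence observed

belief = prior
for obs in observations:
    like_H = p_obs_H if obs else 1 - p_obs_H
    like_notH = p_obs_notH if obs else 1 - p_obs_notH
    belief = like_H * belief / (like_H * belief + like_notH * (1 - belief))
    print(f"obs={obs} -> P(H) = {belief:.3f}")
```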
I just saw the movie Inside Out. It is receiving praise from psychologists, but it is also being criticized by others over the actual neuroscience. Most of the criticism is about the physical portrayal of memory. The movie uses a combination of actual physical constructs mixed with anthropomorphic characters that embody emotional and cognitive functions. There isn’t much criticism of the abstract homunculi used to portray the emotions, and that is partly because the approach sidesteps any need to depict the physiology. Psychologist reviews are plentiful, such as this Business Insider interview with psychologist Nathaniel Herr.
Basically, therapists and psychologists find Inside Out to be helpful. Unfortunately, neuroscience is not likely to find any help here. Neuroscientists are quick to jump on the movie’s inaccuracies about memory, and there are many. For a short, mild critique, see the one from neuroscientist Heather McKellar. Or try a more detailed critique by Antonia Peacocke and Jackson Kernion, two PhD students in philosophy, who do a fairly good job of addressing the neuroscience. I’m surprised the movie attempted such a concrete depiction of memory, and it’s not clear to me why they wanted to try it. The only advantage I see is that it let them convey some sense of the brain’s extreme capacity. Clearly their goal wasn’t to inspire or teach people about memory, however.
I am surprised that there is less discussion out there about the “islands of personality”, which was another physical construct in the movie. These “islands” were unique constructions that combined aspects of memory and function together. Though still somewhat abstract, they are an interesting way to represent complex distributed neural networks. There are some definite failings in the analogy though, such as the fact that they are literally islands and only connect to a central headquarters, and also the way in which an entire “island” can be incapacitated. Still, I liked the idea of trying to visualize the physiology of what makes each person unique.
It’s too bad that the movie isn’t suited to promoting interest in neuroscience. Not only does it get the science wrong, but it even portrays the low-level concepts as boring. This was done through a scene in which Joy, the group leader, tells Sadness to study the manuals that explain how the brain works. Joy appears to suggest that it would be fun to read the manuals, but since she is doing it as a form of control and doesn’t have any real interest herself, it is clear to the audience that only a misfit freak would want to learn such things. There is only one apparent attempt in the movie to ascribe any value to the knowledge in the manuals. That is when Sadness offers some hope of being able to help Joy find her way out of the labyrinth of memory banks. However, even that knowledge is depicted with disdain rather than being glorified. Sadness recites a series of left/right turns in a monotone voice, and the plan is abandoned altogether when Joy finds another character (Bing Bong) to act as a personal guide instead.
Finally, after the real science is ignored or belittled, my own Sadness homunculus is also crying about how the general public doesn’t even know which part is the “science” and which part isn’t. I saw this article that claims to explain “How Inside Out Nailed The Science Of Kids’ Emotions”. It’s sad that people think the movie is actually about science and that the science in it is actually correct. There is a long history to the argument over whether psychology is a science. To me, the saddest part about Inside Out is that the audience may walk away thinking they finally understand how the brain really works. Clearly one goal of the movie was to help people understand their emotions better, and I applaud the attempt to address this in a way that promotes useful therapy and psychological treatment. However, anyone who might want to understand the real, physical reasons behind joy, anger, sadness, and fear will not find any inspiration in this movie.