
Stupid Arguments Against Intelligent Weapons

After my last post on ethics intelligence, I found out about a public statement against autonomous weapons that has been endorsed by Stephen Hawking, Elon Musk, and Steve Wozniak, among many other notable personalities. Written by the Future of Life Institute, it is titled “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”.

First I will say that I do NOT disagree with the idea of limiting warfare, and I recognize the importance of self-limiting our ability to kill. However, I strongly disagree with the reasoning used in the letter, which strikes me as more of a self-serving public relations ploy than a rational attempt to protect human life. The letter contains many disturbing statements, so I will address them one by one.

From the beginning, I feel the authors confusingly equate the term “AI” with the concept of “autonomous”. This is a significant mistake. The letter opens with this:

Autonomous weapons select and engage targets without human intervention.

This definition has no dependence on AI, and it does not cover the use of AI technology to help a human authorize a killing. Yet the authors seem to imply throughout the letter that they oppose the development of any weapon that incorporates AI. They go on to state:

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

It’s not clear to me whether “Kalashnikovs” refers to assault weapons like the AK-47 (the most likely meaning) or to the inventor Mikhail Kalashnikov. In either case, it seems ignorant to think that the U.S.A. and other first-world countries do not already possess technology that meets this definition of autonomous weapons, whether or not such weapons are being used. More importantly, what makes an AI-enabled weapon so much more dangerous than a weapon that already has auto-firing capabilities? Why aren’t the authors opposing the use of assault weapons? I think the answer may come later in the letter, so moving on… They next state:

Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.

Again, how is this different from the assault weapons we already have today? I would argue that it is actually easier to make or buy an AK-47 and find a human to use it than to build an AI-controlled device with the same firepower. Unfortunately, it is humans who are a dime a dozen. Certainly first-world countries are the most capable of mass-producing such devices, but it seems ignorant to claim that the potential harm from AI-controlled rifles is anything like the harm of nuclear weapons.

So why are they so concerned about AI-weapons, as opposed to automated killing in general? Perhaps it is explained here:

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.

Notice that the term “AI weapons” is used, as opposed to “autonomous weapons”. This is a serious error, in my opinion. There are a few more problems with the statement. First, it does not identify the real issues with chemical weapons, which are that: (a) they are weapons of mass destruction, and (b) they are grossly inhumane in the physical trauma they inflict. Neither of these characteristics has anything to do with “AI weapons”, or even “autonomous weapons” for that matter.

A second problem is that the statement equates the primary physical weapon (gas/chemical vs. bullets) with other technology used to enhance the weapon. These are two completely different aspects of a weapon. Third, the authors seem to think that using AI in a weapon is critically different from using any other technology. Why do you think the IEEE doesn’t oppose using electronics in weapons? Fourth, what about potential positive uses of AI in weapons? Would they oppose using an AI algorithm to help a soldier recognize that a face belongs to an innocent civilian rather than to the military target? Their argument fails mostly because they are using the term “AI” to mean “autonomous”.

Finally, we see that the authors are afraid that AI weapons will be bad PR for the AI world. Yeah, that really pulls at my conscience. I feel so afraid for them – not! Come on – really?

As in my last post, I seem to be straying from the neuroscience. Never fear – it’s here… I still claim, as before, that machines are more likely to be better at making important, complex decisions. But what about brain-controlled interfaces (BCIs) for weapons? I would argue that those are the real equivalent of the “Kalashnikovs”: they raise the automated killing power of a single human to the next level. Making it easier for humans to kill is the most significant threat, and arguing about AI is ridiculous. The open letter from the Future of Life Institute really misses the target (pun intended). Too bad they didn’t have an AI writer for their letter.

Ethical Intelligence

Several recent events (at least three) have coincided around the debate over whether artificial intelligence, especially in robots, can handle ethical questions. This is nothing new. Isaac Asimov’s 1942 story “Runaround” introduced his famous three laws of robotics. The first interesting event is that “Runaround” was set in the year 2015! I didn’t remember that, but it was pointed out in a new Nature article by Boer Deng entitled “Machine ethics: The robot’s dilemma”. That’s the second recent event. I thought about writing this post when I saw Deng’s article, but I didn’t have much to say until the third recent event, which was an article in, of all things, the Costco Connection entitled “Is AI a good thing?”. They took a poll, by the way, and 36% of respondents answered “no”!

What really got my attention in the Costco Connection was an anti-AI editorial by James Barrat. I can’t figure out whether Barrat legitimately believes what he says, because his logic is idiotically outdated. For example, he says in the article, “It [AI] has the potential to threaten us with intelligent weapons, take virtually all of our jobs and, ultimately, cause our extinction.” Well, sure. All stupid weapons threaten to kill us, all technology threatens to take our jobs, and our centuries-old industrialization threatens to wipe us out. There is nothing particularly unique about AI in this regard. I suspect he seeks to lead his own anti-AI bandwagon because the pro-AI bandwagon is too full and he needs people to buy his books. The pro-AI crowd can probably help him best by citing him as the negative example for the sake of argument.

Both sides of the aisle seem to confuse the issue of technology with the separate issue of philosophy. I agree that teaching/programming ethics into AI will be challenging. However, I disagree with the claim by Barrat-like folks that humans are actually any better at it than machines right now. Where humans clearly agree on right and wrong, the programming is straightforward. The Nature article by Boer Deng mentions a discussion at the Brookings Institution that involved questions such as these:

“What if a vehicle’s efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?”

Humans were debating this question long before Google announced its plans to build an autonomous car; it is known as the Trolley Problem. So it’s not really the logic of the rules that is the problem. Personally, I side with the pro-AI crowd that believes AI is likely to be better than humans in all areas (cars, weapons, etc.) simply because, as others have noted, machines can be more consistent and transparent.
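To make that concrete, here is a deliberately toy Python sketch of my own (it is not from Deng’s article or the Brookings discussion, and the function and its harm-estimate parameters are hypothetical). Where humans agree on the rule (avoid the obstacle), the code is trivial; the entire controversy lives in which estimated harm we choose to accept, which is just the Trolley Problem restated as two numbers.

# Toy illustration only, not real autonomous-vehicle (or weapons) logic.
# The easy part is the rule everyone agrees on; the hard part is the
# trade-off between harms, where humans themselves disagree.

def braking_decision(obstacle_ahead: bool, pileup_risk: float, swerve_risk: float) -> str:
    """Pick the action with the lower estimated risk of harm.

    pileup_risk and swerve_risk are hypothetical harm estimates in [0, 1];
    in a real system they would come from perception and prediction models.
    """
    if not obstacle_ahead:
        return "continue"
    return "brake" if pileup_risk <= swerve_risk else "swerve"

print(braking_decision(True, pileup_risk=0.2, swerve_risk=0.4))  # prints "brake"

At least a machine applies the same trade-off every time, which is the consistency and transparency I mean.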

So what about the neuroscience? This blog is supposed to cover that topic, right? There is an interesting aspect to the question of ethical intelligence, and it involves the parallel physiology of the brain. Traditionally, expert systems, including a hypothetical “ethics” program, are thought of as if/then hierarchies of logic. However, animals and humans learn to make decisions without such computation. From a mathematical view, there is a probabilistic nature to it. Krešimir Josić has a nice post on Bayesian inference and how the brain seems to employ this approach, and he has also done research on the topic himself, as described in this post.

So how does the brain do such computation? The biological neural network of the brain is massively parallel and able to encode complex computations. It is well suited to Bayesian inference because it is trained over time through observation, continually adjusting to the statistics of its input. The “rules” are there, but they are probabilistic, not logical. I feel that building ethical AI is a useful challenge because it forces us to look more deeply at the physiology and dynamics behind our own ethics, which can only help us understand them better.
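Here is a minimal Python sketch of what I mean (my own illustration, not code from Josić’s post): the “rule” is not a hard if/then branch but a probability that keeps adjusting as evidence accumulates. The “safe situation” hypothesis and the 80% reliability figure are made up for the example.

# Minimal Bayesian-updating sketch; the hypothesis and reliabilities are hypothetical.

def update_belief(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Return P(hypothesis | observation) via Bayes' rule."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1.0 - prior)
    return likelihood_if_true * prior / evidence

belief = 0.5  # start undecided about the hypothesis "the situation is safe"
for observed_safe in [True, True, False, True]:
    if observed_safe:                      # each observation is 80% reliable
        belief = update_belief(belief, 0.8, 0.2)
    else:
        belief = update_belief(belief, 0.2, 0.8)
    print(round(belief, 3))                # 0.8, 0.941, 0.8, 0.941

No single observation flips the decision outright; the belief just shifts, which is the probabilistic character I am attributing to the brain’s “rules”.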

Inside Out: Bad News for Neuroscience

I just saw the movie Inside Out. It is receiving praise from psychologists, but it is also being criticized by others regarding the actual neuroscience. Most of the criticism concerns the physical portrayal of memory. The movie uses a combination of literal physical constructs mixed with anthropomorphic characters that embody emotional and cognitive functions. There isn’t much criticism of the abstract homunculi used to portray the emotions, partly because that approach sidesteps any need to depict the physiology. Psychologist reviews are plentiful, such as this Business Insider interview with psychologist Nathaniel Herr.

Basically, therapists and psychologists find Inside Out to be helpful. Unfortunately, neuroscience is not likely to find any help here. Neuroscientists are quick to point out the movie’s many inaccuracies about memory. For a short, mild critique, read the one by neuroscientist Heather McKellar, or try a more detailed critique by Antonia Peacocke and Jackson Kernion, two PhD students in philosophy who do a fairly good job of addressing the neuroscience. I’m surprised the movie attempted such a concrete depiction of memory, and it’s not clear to me why the filmmakers wanted to try it. The only advantage I see is that it conveys some sense of the brain’s extreme capacity. Clearly their goal wasn’t to inspire or teach people about memory, however.

I am surprised that there is less discussion out there about the “islands of personality”, another physical construct in the movie. These “islands” are unique constructions that combine aspects of memory and function. Though still somewhat abstract, they are an interesting way to represent complex, distributed neural networks. The analogy has some definite failings, such as the fact that they are literally islands connected only to a central headquarters, and the way an entire “island” can be incapacitated. Still, I liked the idea of trying to visualize the physiology of what makes each person unique.

It’s too bad that the movie isn’t suited to promoting interest in neuroscience. Not only does it get the science wrong, it even portrays the low-level concepts as boring. This comes through in a scene in which Joy, the group leader, tells Sadness to study the manuals that explain how the brain works. Joy appears to suggest that reading the manuals would be fun, but since she is doing it as a form of control and has no real interest herself, it is clear to the audience that only a misfit freak would want to learn such things. There is only one apparent attempt in the movie to ascribe any value to the knowledge in the manuals: when Sadness offers some hope of helping Joy find her way out of the labyrinth of memory banks. However, even that knowledge is depicted with disdain rather than being glorified. Sadness recites a series of left/right turns in a monotone voice, and the plan is abandoned altogether when Joy finds another character (Bing Bong) to act as a personal guide instead.

Finally, after the real science is ignored or belittled, my own Sadness homunculus is also crying about how the general public doesn’t even know which part is the “science” and which part isn’t. I saw this article that claims to explain “How Inside Out Nailed The Science Of Kids’ Emotions”. It’s sad that people think the movie is actually about science and that the science in it is actually correct. There is a long history to the argument over whether psychology is a science. To me, the saddest part about Inside Out is that the audience may walk away thinking they finally understand how the brain really works. Clearly one goal of the movie was to help people understand their emotions better, and I applaud the attempt to address this in a way that promotes useful therapy and psychological treatment. However, anyone who might want to understand the real, physical reasons behind joy, anger, sadness, and fear will not find any inspiration in this movie.

Scientific brutality: animals or humans?

There is a new editorial in Nature Neuroscience titled “Inhumane treatment of nonhuman primate researchers”. I have titled this post “Scientific Brutality” because of the parallels to the highly active controversy we have today regarding police brutality. Here in Cleveland, Ohio, there is outrage over multiple cases of alleged police brutality, and the potential for violent protest has induced fear in our communities. One similarity in the battle between animal rights activists and animal researchers is the potential for extremist acts of violence toward researchers. There are many stories of death threats and other extreme acts by activists, including threats to actual patients (not researchers), such as the 2013 story of Caterina Simonsen.

The editorial in Nature Neuroscience discusses a unique case regarding Nikos Logothetis at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. What is unique is that Logothetis seems to have conceded a victory to activists by declaring that he will cease using non-human primates for research and will transition to using rodents instead. He has not changed his reasoning or ethical position, however; he simply does not want the hassle anymore.

Logothetis’s declaration letter does not seem to be public yet, but he did make available a letter responding to recent accusations of animal cruelty. (The letter is available here, and an article that links to it is available here.) In the rest of this post, I will comment on that letter. Let me first disclose that I have personally killed animals for research. I was required to be educated on humane euthanasia methods as well as my legal obligation to follow committee-approved policies for the ethical treatment of the animals. I used rats to study how the brain controls respiration, which involved fully anesthetizing an animal before removing its brain, thus killing the animal. Like all researchers (hopefully), I consider unnecessary pain and distress to be unethical. I suspect Logothetis is opposed to animal cruelty, but I do not think his letter makes this clear.

Now I will explain what provoked the letter. A caregiver in the facility provided video footage to animal rights groups (BUAV and Soko-Tierschutz), which published a report and video. WARNING: it’s a disturbing video. They allege several abuses: severe water deprivation, bleeding head implants, infections, restrained monkeys that appear to be extremely distressed, and a case in which a monkey is violently pulled by a collar around its neck. Logothetis’s letter addresses all but the last incident, and he asserts that the video is intentionally misleading. Regarding water deprivation, he states that the level of deprivation is neither “distressing” nor “unpleasant”. Regarding the bleeding and infection, he claims that such incidents are very rare and that the infection was an isolated case in which they were required to attempt medical treatment before euthanizing the animal. He does acknowledge that post-operative care could be improved to further reduce such incidents. Regarding the apparently distressed animal, he asserts that the behavior was “almost certainly induced intentionally by the caregiver.” Oddly, he did not address the footage of the violent removal of a monkey from a cage, which I personally found very disturbing.

The primary issue that concerns me about the letter is what seems like an attempt to be philosophically superior rather than just sticking to the facts. The philosophy of what constitutes animal cruelty is a separate topic from that of the facts about laboratory conduct. A lack of separation between these two topics seems to be a source of confusion in many such debates.

For example, the letter suggests that financial support of animal rights is unreasonable when humans are suffering in the world. The letter contains a paragraph that begins with this:

“Donations to organizations such as BUAV or SOKO might sooth the conscience of animal lovers, but are the activities of antivivisectionists appropriate and reasonable in today’s world?”

He then delves into the tragedies of human hunger and poor sanitation in the world, even citing a decision in China to avoid establishing “strict regulations in an animal welfare law” because monkeys might receive better treatment than the humans. As another example of mixing philosophy with facts, he closes his letter with this:

“What society can ignore human suffering to promote the welfare of mice? If the ultimate benefit of patients is not considered a greater good, then we should indeed stop science and research.”

The activists are not suggesting that we should “ignore human suffering to promote the welfare of mice”. Again, I think Logothetis is muddling the debate rather than clearing things up. The editorial I mentioned states,

“We are not trivializing the ethics of animal use in research. In fact, this is an issue of great concern to neuroscientists.”

The editorial also mentions a need to educate the public. Perhaps the “great concern” of neuroscientists is not so obvious to the activists. Clearly, there are plenty of animal rights activists who do not support hateful or violent acts toward researchers. They are not stupid, and they are not philosophically inferior either. It is important to deal with violent extremists, but scientists will need to help everyone else really see the “great concern”.

Hypothesis-free big science: is it good for you?

Sugar-free food is often marketed as being healthier. What about hypothesis-free research? “Consorting with big science” is the title of a 2014 editorial in Nature Neuroscience that praises the advantages of large-scale research collaborations in the form of consortia.

It continues the ongoing debate about how best to take advantage of limited research funding in the midst of huge challenges in neuroscience. The usual arguments are made, such as the advantage of pooling funds and the inefficiency of smaller, competing laboratories that duplicate the same work instead of complementing each other. The editorial highlights a concept that is important in evaluating large-scale collaborations: hypothesis-free.

Technically, the editorial uses the term “hypothesis-free data”, which it further characterizes as “unbiased”. Data biasing is an important topic in itself, but the author is really referring to data collected without the specific objective of benefiting one particular laboratory. The piece emphasizes the value of churning out large data sets, but it doesn’t address the risks of generating data for the sake of data. In that mode, progress may be evaluated in terms of quantity, not quality. Krešimir Josić had an interesting post titled “Can science become too big to fail?”. In a later comment regarding Obama’s Human Brain Project, he stated, “And if the goals are not clearly defined, then it really is impossible to fail.”

And what about the data biasing I mentioned? Someone must decide which methods will produce the most useful results. A risk not addressed in the editorial is the incredible waste of a massive project built on flawed methods. Armies of scientific soldiers are still soldiers, not a committee, and they will march without much thought. A further risk in big-data, or big-science, is that it may seek sudden, epic increases in scale, as opposed to gradual increases that allow processes to evolve. I have not seen clear evidence of plans to control that growth.

In my last post, I pointed out the controversies revolving around Henry Markram and his large-scale projects. Part of the controversy, like much of the global debate, concerns the methods and types of data that will be produced. One could say that Markram has a methodological hypothesis, and his detractors don’t agree with it. I feel that the real misconception in the philosophy of big-data is that the scientific world just needs the data – and tons of it. I disagree. What we need are methods that enable as many researchers as possible to collect their own data. Certainly a major effort in big-data is to develop new technologies to acquire the data, but not with the intent to let everyone use those technologies.

As a computational neuroscientist, I am happy to say that government-funded, publicly accessible supercomputing facilities are recognized as a general tool worth major investment. I am not aware of any comparable government-funded infrastructure for individual experimental scientists who require expensive techniques. Having worked in an electrophysiology lab, I realize it would be challenging to run some sort of “public laboratory”. However, some neuroscience labs already function as data factories, employing dedicated technicians for every aspect of the work. Smaller labs may be able to outsource work to other labs or use commercial companies, and I admit that I don’t know whether a public laboratory would improve on that.

To summarize my points: the focus on creating big data is a poor concept. There is no truly “hypothesis-free” data. So let’s acknowledge that the methods really do matter, and that it would be better to focus on creating publicly accessible tools. The big data will then come anyway, and it is likely to be far more explosive, quantitatively and conceptually, than anything we can currently conceive of.

Peacekeepers in the Family Fight

The Organization for Computational Neurosciences (OCNS) is, overall, a supportive family that truly knows where it came from. I just attended its annual meeting, and I continue to be amazed by the excellent balance of computational and experimental concepts. The organization’s journal and meetings have stayed true to the general field of neuroscience rather than narrowing toward related fields such as artificial intelligence.

Yet all families have their share of internal fights, and OCNS is no different. First a small example, and then a very significant one. My small example is a senior researcher who makes a point of criticizing my posters to my face at annual meetings. He has done this three times! No time spent reading the whole poster or listening to my presentation. No thoughtful questions. Just an immediate criticism. It doesn’t really bother me, though, because he quickly walks away and the matter is over. And like all healthy families, others quickly step in to defend me. Oh well, there’s one in every family.

Now for the important example: the controversial Henry Markram. Dr. Markram was scheduled as a keynote speaker at the OCNS meeting and has been at the center of not just one but two massive, controversial projects in computational neuroscience: first the Blue Brain Project, and more recently the Human Brain Project. If you aren’t familiar with these, you can find plenty of information by Googling “henry markram controversy”. Here I’m only going to comment on what I saw in the OCNS family.

I was excited about hearing the inevitable discussion first hand during Markram’s visit. Unfortunately – and ironically – Markram’s home was burglarized, and his passport was reportedly stolen! However, the brave Sean Hill filled in, and he responded well to comments and questions about the open letter protesting the Human Brain Project. During the Q&A period following his talk, a few of the standard criticisms from the open letter were raised. What caught my attention most, however, were the public reminders by others that we are still a family. Besides the comments during Q&A, Frances Skinner opened her keynote talk the following day with this idea.

I am sensitive to the peacekeepers in a family, partly because of my own family’s troubled history. I have typically found it easy to see both sides of a story, but I realize that isn’t easy for everyone. Civil debate and disagreement are actually healthy in a family, and every organization needs opinionated, assertive members. However, merely enforcing rules of civility is not enough to make progress. The peacekeepers play an important role in helping both sides of an issue move forward. It is my pleasure to see that peacekeepers are alive and well in OCNS.

Losing traction from retractions

Some colleagues of mine were surprised when I told them about two retractions from The Journal of Neuroscience in the past six months. Out of curiosity, I looked for retractions in other neuroscience journals and was surprised to find four more from 2013–2014, though they are from journals of lesser impact than J Neurosci. A list of all six retractions appears at the end of this post; most were found using retractionwatch.com. (For those not familiar with scientific publishing, see this Wikipedia article on retractions.) First, I’ll remind you of two famous retraction stories from Science magazine, one of the highest-profile journals in (of course) science. There is former stem-cell star Hwang Woo-suk, whose two Science articles from 2004–2005 were retracted as part of his international demise. More recently, there is psychological shocker Diederik Stapel, whose most famous retraction came in 2011. These cases remind us of how bad it can get.

I won’t comment on the issue of fraud, though Krešimir Josić has a nice commentary on the temptation of high-impact journals and how it may lead to fraud. There are different causes and ramifications for retractions. Blatant fraud is certainly the most alarming and destructive. I mainly want to point out the inherent danger of more innocent mistakes that may or may not be caught after an article is published.

In my search for neuroscience retractions, the cases I found are mostly due to methodological errors in the analyses. It is important to realize that a bad experiment creates bad (but very real) data. This is very relevant in computational neuroscience, where we manufacture our own data. The famous cases of Hwang Woo-suk and Diederik Stapel involved fabricating data that never existed. More commonly, scientists make the innocent mistake of incorrectly designing or performing an experiment. In computational neuroscience, the experiment is often a computer simulation.

There are basically two ways in which a researcher discovers a mistake in an experiment. The most common is getting unexpected results, either good or bad. If the results seem too good to be true, a good researcher will find out why, and a poor researcher will hopefully be stopped by someone else, such as a reviewer or PI. If the results are disappointing, one actually hopes it WAS a mistake in the experiment!

A second way a researcher may discover a mistake is by explaining the details of the methods in the article. All authors have had the experience of not fully understanding their own methods until they had to write everything down in detail. The process of explaining can reveal issues to the author or to someone else, such as a reviewer or PI. The issue of poor reviewers is a separate topic, and I’m not going there! However, documenting methods is where computational neuroscience could better police itself… but does not.

The problem arises when methods are incorrectly or insufficiently explained. Just saying that you used standard method X, or citing an algorithm paper, does not mean you did it correctly. I said that computational neuroscience could police itself better because, unlike wet-lab experiments, simulations should be perfectly reproducible in every detail. There has long been a movement in computational neuroscience to make models publicly available, but that practice is far from the norm. Likewise, source code for analyses is rarely provided in any scientific field.
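As a purely illustrative example of the kind of bookkeeping that makes such self-policing possible, here is a minimal Python sketch (the parameter names and the toy simulation are hypothetical): record the random seed and every parameter right next to the results, so anyone, including a reviewer, can rerun the simulation and get identical numbers.

# Minimal reproducibility sketch: store the seed and parameters with the output.
import json
import random

params = {"seed": 42, "n_steps": 1000, "noise_sd": 0.1}   # hypothetical settings

random.seed(params["seed"])   # same seed, same random draws, same "data"
results = [random.gauss(0.0, params["noise_sd"]) for _ in range(params["n_steps"])]

with open("run_001.json", "w") as f:
    json.dump({"params": params, "results": results}, f)

It is not glamorous, but this is the level of detail at which a simulation, unlike a wet-lab experiment, can be reproduced exactly.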

I have hope that things will change. PLoS is revolutionizing the publishing world in many ways, one of which is an open-access data policy. That is clearly the best place to start (with the data), and perhaps some day the norm will be to provide analysis source code along with it.

Finally, below is my list of neuroscience retractions from 2013–2014, mostly thanks to retractionwatch.com. You can look all of these up on retractionwatch.com, where you may find more details as well as retractions in fields other than neuroscience. Notice that in most cases the problem is claimed to be an analysis error.

#1. The Journal of Neuroscience, December 11, 2013
Publication date: December 7, 2011 (2 years earlier)
Reason for retraction: The Journal of Neuroscience received a report of an investigation… that describes substantial data misrepresentation.
Article: Zeng et al., Epigenetic Enhancement of BDNF Signaling Rescues Synaptic Plasticity in Aging

#2. The Journal of Neuroscience, April 2014
Publication date: May 2013 (1 year earlier)
Reason for retraction: The authors report, “…we discovered errors in the quantification of the expression and/or phosphorylation of a subset of signaling pathways…. Despite these errors, the major conclusions of the paper remain substantiated.”
Article: Li et al., Elevation of Brain Magnesium Prevents and Reverses Cognitive Deficits and Synaptic Loss in Alzheimer’s Disease Mouse Model

#3. Cerebral Cortex, August 2013
Publication date: November 2012 (9 months earlier)
Reason for retraction: fMRI data… were not analyzed properly.
Article: Braet et al., The Emergence of Orthographic Word Representations in the Brain: Evaluating a Neural Shape-Based Framework Using fMRI and the HMAX Model

#4. Brain, Behavior, and Immunity, February 2014
Publication date: August 2013 (6 months earlier)
Reason for retraction: The merge of laboratory results and other survey data used in the paper resulted in an error regarding the identification codes.
Article: Kern et al., Lower CSF interleukin-6 predicts future depression in a population-based sample of older women followed for 17 years

#5. Glia, March 2014
Publication date: December 2013 (3 months earlier)
Reason for retraction: Some of the results in Figures 1C, 4A, 4C, 5A, 5C and 7A-D were incorrect and, therefore, misleading.
Article: Morga et al., Jagged1 regulates the activation of astrocytes via modulation of NFkB and JAK/STAT/SOCS pathways

#6. Frontiers in Human Neuroscience, December 2013
Publication date: June 2013 (6 months earlier)
Reason for retraction: Systematic human error in coding the name of the files…. The final result of the paper… is therefore not correct.
Article: Chavan et al., Spontaneous pre-stimulus fluctuations in the activity of right fronto-parietal areas influence inhibitory control performance

 
