
Unintelligent Arguments Against Intelligent Weapons

July 27, 2015

After my last post on ethics and intelligence, I found out about a public statement against autonomous weapons that has been endorsed by Stephen Hawking, Elon Musk, and Steve Wozniak, among many other notable personalities. Written by the Future of Life Institute, it is titled “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”.

First, I will say that I do NOT disagree with the idea of limiting warfare, and I recognize the importance of self-limiting our ability to kill. However, I strongly disagree with the reasoning used in the letter, and it strikes me more as a self-serving public relations ploy than as a rational attempt to protect human life. The letter has many disturbing statements, so I will address them one by one.

At the beginning, I feel they confusingly equate the term “AI” with the concept of “autonomous”. This is a significant mistake. The letter begins with this:

Autonomous weapons select and engage targets without human intervention.

That definition has no dependence on AI, and it does not cover the use of AI technology to help a human authorize a killing. Yet they seem to imply throughout the letter that they oppose the development of any weapon that incorporates AI. They go on to state:

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

It’s not clear to me whether “Kalashnikovs” refers to assault weapons like the AK-47 (the most likely meaning) or to the inventor Mikhail Kalashnikov. In either case, it seems ignorant to think that the U.S.A. and other first-world countries do not already possess technology that meets their definition of autonomous weapons, regardless of whether such weapons are being used. More importantly, what makes AI-enabled weapons so much more dangerous than a weapon that already has auto-firing capabilities? Why aren’t they opposing the use of assault weapons? I think the answer may come later in the letter, so moving on… They next state:

Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.

Again, how is this different from the assault weapons we already have today? I would argue that it is actually easier to make or buy an AK-47 and get a human to use it than to obtain an AI-controlled device with the same firepower. Unfortunately, it is humans who are a dime a dozen. Certainly, first-world countries are the most capable of building such devices, but it seems ignorant to claim that the potential harm from AI-controlled rifles is anything like the harm of nuclear weapons.

So why are they so concerned about AI weapons, as opposed to automated killing in general? Perhaps it is explained here:

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.

Notice that the term “AI weapons” is used, as opposed to “autonomous weapons”. This is a serious error, in my opinion. There are a few more problems with the above statement. First, it does not identify the real issues with chemical weapons, which are that: (a) they are a form of mass destruction, and (b) they are grossly inhumane (with regard to the physical trauma they inflict). Neither of these characteristics has anything to do with “AI weapons”, or even “autonomous weapons” for that matter.

A second problem is that the statement equates the primary physical weapon (gas/chemical vs. bullets) with other technology that is used to enhance the weapon. These are two completely different aspects of a weapon. Thirdly, they seem to think that using AI in a weapon is critically different from using any other technology. Why do you think the IEEE doesn’t oppose using electronics in weapons? Fourthly, what about potential positive uses of AI in weapons? Would they oppose using an AI algorithm to help a soldier recognize that a person is an innocent civilian rather than the military target? (I sketch what such a decision aid might look like below.) Their argument is incorrect mostly because they are using the term “AI” to mean “autonomous”. Finally, we see that the authors are afraid that AI weapons will bring bad publicity to the AI field. The larger issues of ethics and human welfare are important to the debate they address; the matter of public relations, however, is irrelevant, and it may reveal one of the significant motivations behind the public statement.
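To make the distinction concrete, here is a minimal, purely illustrative sketch of an AI decision aid that never selects or engages a target on its own. The names, labels, and confidence threshold are my own invention and do not come from the letter or from any real system; the point is only that the model annotates what it sees while the engagement decision stays with the human.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    label: str         # e.g. "civilian" or "declared target" (hypothetical labels)
    confidence: float   # model confidence in [0, 1]


def advise_soldier(assessment: Assessment, threshold: float = 0.9) -> str:
    """Hypothetical decision aid: returns advice for the human operator only.

    The function never selects or engages a target itself, which is the
    behavior the open letter defines as 'autonomous'.
    """
    if assessment.label == "civilian" and assessment.confidence >= threshold:
        return "HOLD FIRE: likely civilian"
    if assessment.label == "declared target" and assessment.confidence >= threshold:
        return "Matches briefing; engagement decision remains with the operator"
    return "Identity uncertain; do not engage without further confirmation"


# Example: a high-confidence civilian identification produces a hold-fire advisory.
print(advise_soldier(Assessment(label="civilian", confidence=0.97)))
```

An AI component used this way augments human judgment rather than replacing it, which is exactly the kind of use the letter’s blanket language about “AI weapons” fails to distinguish.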

As in my last post, I seem to be straying from the neuroscience. Never fear – it’s here… I still claim, as in my last post, that machines are likely to be better at making important, complex decisions. But what about brain-computer interfaces (BCIs) for weapons? I would argue that those are the real equivalent of the “Kalashnikovs”: they raise the automated killing power of a single human to the next level. Making it easier for humans to kill is the most significant threat, and arguing about AI is ridiculous. This open letter by the Future of Life Institute seems to really miss the target.
