The Logical Endpoint of Autonomous (Robotic) Weapons Systems
Every few months you see a news article about autonomous weapons systems being developed by the military of this or that nation (usually the United States, but not always). These are basically robots armed with weapons. They might be drones flying over enemy territory (with or without human pilots somewhere on the ground), or tanks maneuvering on battlefields (again, with or without remote human operators). The ultimate goal is to create robots that will operate on battlefields without human supervision, making their own decisions using artificial intelligence. The Army is excited about such prospects because they would keep American soldiers out of harm’s way and therefore reduce military casualties. Robots also have advantages like unwavering attention spans and a lack of fear.
Many of the articles about autonomous weapons systems raise questions about their reliability, especially in uncertain circumstances. Will they kill the right people and, more importantly, not kill the wrong ones? The focus here is on whether these systems will work as we envision them, and whether they would do a better or worse job on the battlefield than human soldiers making decisions (or than remote human drone operators making decisions). One recent New York Times op-ed piece said, “Autonomous weapons should not be banned based on the state of the technology today, but governments must start working now to ensure that militaries use autonomous technology in a safe and responsible manner that retains human judgment and accountability in the use of force.”[i] The authors later added, “Greater autonomy could even reduce casualties in war, if used in the right way. The same types of sensors and information-processing that will enable a self-driving car to avoid hitting pedestrians could also potentially enable a robotic weapon to avoid civilians on a battlefield.”
This is a valid set of concerns in the short term, but I think there is a much larger concern in the long term: the logical endpoint of autonomous weapons systems is a war in which each side kills as many innocent civilians on the other side as possible. The very nature of war and robotics leads to this.
My reasoning is based on the progression of wars in history. In most traditional wars, the fighting starts between two armies, professional or otherwise, confronting each other. If there is a clear winner, the war is over. If there is not, the two sides regroup and try to set up another confrontation. The second battle might be conclusive, and then the war is over; if it is not, the fighting goes on. In some cases, even apparently conclusive victories are not enough to decide the issue. The Hundred Years War is a good example, as is World War II. In both cases, the early victories were crushing (Crécy and Poitiers in the former; the fall of France and Japan’s early conquests in China in the latter), yet they were not decisive enough to end the war. Both sides still fought on.
When the armies cannot deliver a knock-out blow immediately, you tend to enter a second stage of warfare: a war of attrition, in which each army tries to wear the other one down by inflicting heavy casualties, in the hope that the other side will break first. World War I is a good case here. In the West, Germany on the one hand, and France and Britain on the other, had decided by the end of 1915 that they could not outmaneuver their opponents and achieve a clear victory. This led the British to turn to quantity over quality: overwhelm your opponent with sheer force. The British tried this at the Somme, but even roughly a million casualties over five months had no decisive effect.
The Germans turned to a more sinister strategy at Verdun: trading casualties one for one until one side ran out. The German general Erich von Falkenhayn decided to attack the French at Verdun, knowing that it was such a strategic position that the French would have to defend it at all costs. His intention was not actually to take Verdun, though he would have been happy to do so, but rather to make the French take horrific casualties. Von Falkenhayn said he wanted to “bleed France white.” His assumption was that if the Germans and the French traded dead soldiers one for one, France, with the smaller population, would run out first. The strategy never had time to run its course, because events elsewhere (the Somme and Russia’s Brusilov Offensive) forced the Germans to divert their resources, but it had a certain horrific logic to it.
The war of attrition is immensely bloody and painful, and, as a result, it doesn’t take armies and governments very long to start looking for a way out of it. This brings us to the third stage: the war on civilians. If you can’t defeat your enemy’s army, you target your enemy’s civilian population. The intention is twofold. First, it destroys the civilian population’s ability to supply the army. Second, you hope, it destroys the enemy’s will to fight: if the war is too costly in civilian lives, governments will have to end it. This is especially evident in twentieth-century wars. In World War I, the British blockade starved the German population while German U-boats tried to starve Britain. In World War II, the strategy was to bomb the enemy’s population from the air. The firebombing of Tokyo, Dresden, and Hamburg and the atomic bomb attacks on Hiroshima and Nagasaki were all examples of the intentional infliction of massive civilian deaths in an attempt to get the enemy to surrender. The carpet-bombing of Vietnam was a continuation of this logic of punishing civilians.
It is this logic that I think will be amplified enormously by the advent of robotic weapons systems. Basically, robots are expendable and cheap, so the logic of making your enemy suffer leads away from killing robots and directly to killing civilians instead. Human life is worth more than robot “life,” so taking it inflicts more damage.
Most of the optimistic commentators on autonomous weapons systems avoid this issue because they envision a war between the United States and an enemy that doesn’t have the technology to match us. In that case, perhaps, armed robots will make war “safer,” in the sense that they will make it more one-sided and therefore end it sooner.
However, I think other scenarios are more likely. First, what if there is a war between two nations with roughly equal robot armies? What if, in fifty years, China and the United States fight a war with robots? In that case, I think the initial stage of the war would be robot armies confronting each other, each side hoping to win decisively. Robots are easier to replace than humans, however, so destroying an army might not win the war. The loser simply builds another army and tries again.
Then you get the war of attrition, as each side produces larger and larger robot armies and sees them destroyed just as fast. Attrition might still work in economic terms (one side might run out of money to build robots), but it won’t work on the emotional level. Wars of attrition end because civilians refuse to put up with the death tolls (or refuse to continue to die); with robot armies, there are no death tolls to refuse. Because of this, I see one or both sides very quickly realizing that killing robots doesn’t hurt the enemy nearly as much as killing civilians does.
What is the end point? If you can’t defeat your enemy’s robot army, you destroy its ability to produce robots, and you inflict the most pain possible along the way. In short, you target the civilian population. War becomes a massive game of robots hunting humans. Perhaps each side will use its own robots to protect its humans from the other side’s, but that isn’t much better.
A second scenario leads to the same result. Suppose, instead of two evenly matched opponents, you have an unequal contest, with one army destroying another, but the loser vowing to fight on. This quickly leads to guerrilla war, with small groups of militants emerging from the civilian population to carry out surprise attacks and then disappearing into that population again. Think of Iraq, or Afghanistan. Will robots be able to handle this? Proponents argue that artificial intelligence will allow robots to make better, quicker friend-or-foe decisions and kill only the “bad” guys. My sense, however, is that the temptation grows to simply kill the civilians, too. Look at American behavior in Vietnam when confronted with guerrilla war, or French conduct in Algeria. Even when the dominant army shows more restraint, as in the American occupations of Iraq and Afghanistan, there is a temptation to attack the whole population in order to force it to stop harboring militants. You can only win by convincing the enemy that their best interests lie in submitting, not fighting, and if robots are your main form of “convincing,” then you edge toward robots hunting humans.
In the end, given that robots are emotionally expendable but people are not, and given that war is about inflicting pain on one’s enemy in order to get them to give up, I cannot see how any robot war does not lead, eventually, to robots killing civilian humans on purpose.
Is that really where we want to go?
[i] Michael C. Horowitz and Paul Scharre, “The Morality of Robotic War,” The New York Times, May 26, 2015 (online). The print version appeared in the Op-Ed section on May 27, 2015.