At Science and Religion News, Salman Hameed writes a response to a talk by P. W. Singer about the future of robots in war:
This is the time to think about the moral and ethical issues in the development of these robotic weapons. There needs to be a broader discussion of the implications of robots in war. We also seriously need to update laws regarding the conduct of war. This is not a hypothetical issue. There is an increasing number of drone attacks in Pakistan (there was one this morning that killed 13 people). Yes, the Pakistan government has given a silent nod to these (do they have a choice?), but who gets the blame for the civilian casualties that accompany almost every strike? In this kind of war, who gets to decide who is a civilian and who is a combatant? All of these questions are aside from assessing the long-term effectiveness of this remote-control war.
Read the full post here, and watch the very interesting talk Singer gave, about the future of robotic war and the accompanying ethics, below:
For an additional critical perspective on military robots, see Eric Stoner’s “Attack of the Killer Robots” in In These Times. See also Stoner’s interview with Singer at Mother Jones.
Fascinating stuff. Singer is absolutely right that robotics will change, is changing, the face of terrorism. But I think there are also important questions to be answered about how issues of robotic warfare and terrorism mirror each other in their dramatization of a fundamental instability at the heart of civil(ian) life. Arjun Appadurai has argued in ‘Fear of Small Numbers’ that part of what is so disturbing about contemporary terrorism is that the suicide bomber thrives in the spaces of civilian life, that the bomber constitutes the surprise from within. Society recognizes the suicide bomber as one of its own at the very moment at which he or she threatens its viability. In an increasingly mechanized world, where the presence of the robot will become more and more normalized, we will have to face an analogous surprise recognition, where the faceless civility of our lives, what we take for granted as civilian, becomes an enemy in the moment of destruction. In the west we like to characterize the suicide bomber as a kind of non-person—an indoctrinated, anthropomorphic representation of an ideology we don’t like. At the same time, we feel the need to adequately explain the person of the bomber, to find out where he or she may have received their schooling, how they planned their attack, what messages they left behind. Singer asks us to try to come to terms with the reality of robotics technology. But could we ever come to terms with a self-destructing terrorist robot, a robot that truly is nothing more than an instrument of ideology, a robot that cannot be characterized and can only have a minimally accessible history? Singer’s depiction of the new ethics at work in the new terrorist robot—you don’t have to convince a robot that it will see 72 virgins when it dies in order to convince it to blow itself up—is terrifying. Just as robotics bypass the face-to-face experience of human loss in war, the same goes for robotics in terrorism.
The question of the degree of indoctrination amongst terrorists therefore becomes even more fraught. Above all though, the relationship between robotics and terrorism asks us to rethink the politics of shared suffering. If one of the more beneficial aims of the war on terror might be, according to Judith Butler, a new focus on recognizing one’s enemy as human and coming to terms genuinely with the precariousness of life, then what kind of threat does robotics pose to this endeavor? Much of the work of Butler’s ‘Precarious Life’ is to place desire at the centre of discussions of terrorism. But the future of robotics seems to offer the dark flipside of this work. The robot terrorist is at once the ultimate enabling vehicle of desire – the means by which the fantasy of the human terrorist can be acted out—and the enactment of the destabilization of desire, precisely in that it operates indifferently, even as a refutation of the desire of its victims to understand it and explain it by creating a psychology for it. We’re going to have to think more seriously about how we ‘share’ suffering in the age of robotics.
Singer lists the Terminator as the only character making it onto lists of both Hollywood’s top heroes and villains. I was surprised that a robot would be named to represent “the best and worst of humanity.” How can the Terminator represent either value, when he lacks emotion, desire, free will, and reason, and merely carries out programming? Perhaps our vision of humanity, and of the human, is changing, if we see our greatest strengths and weaknesses as lying not in ourselves but in our technology.
Singer’s discussion of military robotics suggests our nation’s utopian urge not merely to supplement the fragile human body with technology (from pacemakers to prosthetic limbs), but to supplant it entirely. His talk (and most of this page’s links) refers repeatedly to science fiction, linking the reality of military robotics to fantasies of wholeness, invincibility, and immortality represented in sci fi’s imagined futures.
But sci fi fantasies, from Karel Čapek’s R.U.R. to Terminator 2, to Wall-E and Battlestar Galactica, project humanity onto these machines, portraying thinking, feeling, free-willed robots, robots that somehow exceed their programming, that form memories, learn to love, even to reproduce, and ultimately attain human(oid) subjectivity. In sci fi, we transcend our bodies; we’re survived by and live on through our robotic creations, our nonbiological ‘children,’ who preserve humanity’s legacy. This is not to say that science fiction’s humanized robots represent reality, but that they provide the common vocabulary with which we imagine and express the potential of robotics—and reveal implications about our shifting relationship to our bodies and what it means to be human.
Military robotics express this urge towards invulnerability. Rather than accepting what Judith Butler calls our “primary human vulnerability” to a “range of touch” that we cannot fully control (Precarious Life, 30), we recoil from touch, from our bodies and our humanity. Robots attempt to fill the gap between our bodies and an ideal of wholeness, replacing our permeable flesh with indestructible boundaries made of metal and wires (as we see in Singer’s “Robot vs. Suicide Bomber” slides).
Robots of war attempt to distance us not only from the battlefield, but from these robots’ antithesis: the suicide bomber. As William discusses above, Arjun Appadurai describes this figure’s nightmarish “explosive body that promises to distribute its own bloody fragments” among the civilian population, producing “a horrible mixture of blood and body between enemies, thus violating not only the soil of the nation but the very bodies of the victims” (Fear of Small Numbers, 77). The military robot, as Singer describes, does not get upset if its buddy is killed, and does not require promises of martyrdom—it behaves predictably (even “oops moments” can be explained and corrected). The suicide bomber evokes fear with his very permeability and his ability to permeate—by suggesting the total vulnerability of the human body to violence.
Military robotics reveal an impulse to protect ourselves from this corporeal vulnerability by keeping our bodies out of war, operating at a distance. Singer’s images of prosthetics technology and discussion of “war porn” reveal that this trend has non-military implications. Anyone with a computer can watch the war from a distance that highlights their own safety, losing what Singer tellingly calls “the humanity” of these experiences. We turn the reality of violence into entertainment, remaining, literally, untouchable.
All this might suggest the opposite of sci fi’s fantasy of humanized robots: that we’re moving toward an (in)human limit, dehumanizing ourselves. We’ve turned war into virtual reality (against the ‘nightmare’ of the suicide bomber); we see through robots’ camera eyes and move them with glorified video game controllers. Perhaps sci fi’s humanized robots allay our fear of our own technology—a fear of disconnection, to which Singer alludes, from any sense of shared humanity (demonstrating that “American lives” count at others’ expense). Perhaps if we don’t coat robots with human flesh, all that remains is the incomprehensible, frightening jumble of metal and wires of the flayed Terminator in his final horror: neither hero nor villain, just programming.
Maybe our very vulnerability, not our technology, represents “the best and worst” of humanity. Singer notes a higher incidence of PTSD among the military’s remote drone operators, suggesting vulnerability even at a distance. He quotes a news editor in Lebanon who undercuts the power of military robotics to inspire fear: Americans “are cowards because they send out machines to fight us. They don’t want to fight us like real men…So we just have to kill a few of their soldiers to defeat them.” From this perspective, technology makes us less human, “cold-hearted” and not “real men,” but it also reveals that as long as we remain embodied, we remain vulnerable. The human body remains the primary, vulnerable target of violence…and this should make us think more about our ethical responsibility for the violence we perpetrate, even at a distance.
I recently read an article about this in Scientific American and it really struck a chord with me. When I initially heard the reports about the effectiveness of the drones and other robots in our employ, I thought it was great. We were able to kill the “evil doers” without putting our own men at risk. A wonderful prospect, but I worry about where this technology is headed. Mainly, I think it makes it FAR too easy for us to go to war and stay there for as long as we want. The major deterrent to going to war has always been the cost of life. It is scary to imagine how much easier it would be to make the decision to fight if we know that the cost to our side will be minimal. War will more than ever be based on economics. The only question we will need to ask ourselves is whether we have the funds to pay for enough robots to fight. Instead of a draft in times of war, there will simply be a tax hike. Before, the playing field was somewhat level. As we have seen in the “War on Terror” and Vietnam, the smallest, poorest countries can still resist defeat by simply wearing down and waiting out the civilians of the invading country. War weariness is a result of people being upset that our soldiers are dying fighting an unjust war for corporate greed. But how many fewer of us would care if none of our soldiers were dying over there? Would the other countries’ casualties even be in the news? I just think that in a world with massive standing armies, where it is already too easy to enter into never-ending conflicts, anything that makes the prospect of war more palatable is a slippery slope.