Autonomous Killer Drones On The Loose: How It Can Happen
Most people will probably find it ridiculous if someone suggests, with a straight face, that machines will one day “rise up” against us, essentially realising some of the scenarios presented in pop sci-fi movies like The Terminator and I, Robot.
So I am not going to say that. What I am going to say, however, is that we will likely have a problem with robots that isn’t entirely unrelated to these types of scenarios. Worst of all, the problem will be robots that kill. They will kill human beings, on their own, and according to their own judgments.
Creating The Unreadable
To understand why this is a realistic scenario, we can look at what has already happened when we developed a complex technology and became deeply reliant on it. In an article about machine intelligence running more and more of our world, I referred to a TED talk by Kevin Slavin called “How algorithms shape our world”. In it he describes “black box trading”, or trading that is controlled by algorithms rather than by direct human decision making. Here’s an excerpt from my article:
“Kevin Slavin points out that what is often happening is that we are “creating the unreadable”. It is next to impossible to predict what these financial algorithms are going to do next, yet they run one of the most important processes in the world”
These algorithms were put in place because they are more efficient. They are expected to make better decisions in our favor, faster. But the result, which some might not be willing to admit, is that we have largely lost control over the process.
“Creating the unreadable” is a key quote here, because it testifies to what happens when two aspects of our technology combine: greater efficiency and greater complexity. Efficiency is what lures us into depending on technology in the first place. It can do more, better, and with less (time or other resources). And complexity is the result of such technology advancing to become even more efficient, and even better.
The problem is that as technology becomes more complex, the number of people who can understand and control it shrinks, while the technology itself keeps operating on its own whether or not anyone understands or controls it.
Autonomous Drones Are Coming
Now translate this issue to the world of drones and drone warfare. The Pentagon has just said that it won’t let drones make the decision to kill, but I don’t give such promises much long-term credit. It’s just a current policy. It’s not set in stone, and it doesn’t even mean more automation, artificial intelligence, and judgment *capability* won’t be built into drones now in development and in the future. It’s only a matter of time before this promise starts sounding hollow.
It should be a foregone conclusion that future drones will be capable of judging who to kill, even if only by preprogrammed specifications.
The Motive Behind Giving Drones Total Autonomy
But why would human operators ever let a robot decide who to kill in their wars? It’s quite simple really, and follows the same line of reasoning that brought drones into warfare in the first place. In a nutshell: so humans don’t have to do it themselves. And humans don’t want to do what is dangerous or dirty. This also just happens to give the side using such drones a strategic advantage.
Making a decision to kill isn’t exactly something that leaves a mentally healthy human being all warm and fuzzy inside. That too counts as a “dirty job”, a job that, given the choice, they’d probably want to leave to someone else, or to something else.
In fact, this psychology is at play throughout the military and government social structures, and it is possibly the number one reason why human beings are capable of committing atrocities that, individually, they would almost universally find, well, atrocious. It is the outsourcing of responsibility. A person of authority makes the decision to kill, but the burden is eased by him not having to actually pull the trigger and watch people die. This makes it easier for him to give the order.
The soldier who does pull the trigger can console himself with the fact that he didn’t make the decision to kill, that he was just following orders, in addition to other mental constructs (also largely supplied by his superiors) meant to justify the killing. The Milgram experiment, which showed that humans are capable of inflicting a great deal of pain on others in deference to authority, only adds to this argument.
But alas, while the strategy of sharing responsibility and deferring to the authority of others in a hierarchy somewhat dulls people’s sensibilities towards the atrocious things they do, it does not eliminate them completely. There is still room for guilt, remorse, or simply haunting images. If war really has to be waged, one might then ask, isn’t there a way to do it without suffering through any of this?
Enter drones. Why not just let drones do it all? Not only can you wash your hands of all the dirty jobs they do, but it also makes war somewhat more politically acceptable, as the notable lack of opposition to Obama’s drone program has shown.
That’s how it will happen. The men in charge ultimately care about strategic advantages, about gaining more ground with minimal risk and minimal use of resources. The need for a human to make certain decisions brings with it not only the emotional cost but also a margin for strategic error, partly caused by that very emotional burden. It is not hard to imagine that some will become convinced that giving drones full autonomy, so long as they work for us, is a good idea!
But at that point, we may be on our way towards “creating the unreadable”, as Kevin Slavin put it, by giving up control to an increasingly complex technology until we no longer know exactly how or why it does what it does. All we know is that we depend on it, and that there really isn’t such a thing as a master off switch.
It Is Just Following Orders
But I suppose some might still say that this is acceptable so long as the drones do our bidding: US drones would kill only according to pre-programmed specifications, and hold their fire when those specifications say there is no need.
There are two ways this could easily go awry, and both of them have to do with those specifications themselves.
1. Specifications Allow Targeting Our Own As “Enemy Combatants”
It is likely that drone programming will reflect the ongoing policies of the government and the military, and at least one policy, the NDAA, already turns against American citizens themselves by allowing the government to detain them indefinitely without trial. And everyone who knows about the NDAA knows how vague it is, and how easily you might find yourself targeted.
Now imagine such programming being embedded into a drone. Not only does it already explicitly allow for the targeting of our own (speaking from an American perspective for the moment), but the criteria that could trigger such targeting are vague enough that pretty much nobody can assume complete safety. Much in the same way that we are losing control of our own governments, we can lose control of our own drones. To most of us they will be, for all intents and purposes, on the loose and likely out to get us.
2. Drone Unexpectedly Interprets Given Specifications Differently
This argument was elegantly illustrated in the movie “I, Robot”, where robots did indeed rebel against humans, but they were not buggy. They were following their programming exactly as written. It’s just that, by the logic humans had originally programmed them with, humans themselves came to fulfill the conditions for becoming a target.
Who is to say this cannot happen? It is especially possible if the drone is programmed to target by certain principles rather than solely by national or group associations (though even identifying those can go awry). If it follows principles rather than just associations, then theoretically anyone, even those the drone is supposed to be working for, can cross certain lines and become a target, as the simple sketch below illustrates.
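To make the mechanism concrete, here is a deliberately toy sketch in Python. Nothing in it comes from any real drone or military system; the class, the rule names, and the conditions are all invented for illustration. It only shows how a principle-based specification, followed exactly as written, can end up flagging the very people it is supposed to protect.

```python
# Hypothetical illustration only - not any real targeting system.
# A principle-based "engagement spec" that judges contacts by behavior,
# not by which side they belong to.

from dataclasses import dataclass

@dataclass
class Contact:
    affiliation: str           # e.g. "friendly", "unknown", "hostile"
    carrying_weapon: bool
    near_restricted_site: bool
    obeyed_warning: bool

def is_target(c: Contact) -> bool:
    # Note: nothing in these rules asks "is this one of ours?"
    return c.carrying_weapon and c.near_restricted_site and not c.obeyed_warning

# A friendly patrol that misses a garbled radio warning satisfies the
# same conditions as an adversary, so the spec flags it as a target.
friendly_patrol = Contact(affiliation="friendly",
                          carrying_weapon=True,
                          near_restricted_site=True,
                          obeyed_warning=False)

print(is_target(friendly_patrol))  # True - the rules, followed exactly, turn on "our own"
```

The point is not that real specifications would be this crude, but that any rule set defined by behavior rather than identity has no built-in notion of “our own”, and following such rules faithfully is exactly what produces the I, Robot outcome described above.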
No Need for Self-Awareness
Thanks to science fiction, many associate the so-called “rise of the machines” with machines becoming self-aware, coming to hate human beings, and then being driven to eradicate them. However, this need not be the case for us to end up with a somewhat similar scenario. These drones don’t need to be aware of what they are doing, nor have any motives of their own. It is enough for them to become complex enough to be hard to understand and unwieldy to control, and to keep following their complex programming, much as black box trading has.
They simply become a runaway process, one we’d have the most hope of stopping if we had a master kill switch that simply shut them all down. In the case of black box trading, such a shutdown could have untold consequences for our financial system, so we dare not do it. We are too dependent on these algorithms, even if we don’t have full control over what they do.
I doubt a kill switch for all drones is a practical possibility, so if things do go awry, the only way to solve the problem would be to literally shoot them down. Depending on how far the problem has progressed by the time we decide that shooting them down is the only solution, this might not be easy. It just might be a war with our own machines.
Back To The Present
I don’t really hope that this article, or the countless others that may be written with the same warnings and reminders, will actually stop certain organizations from ultimately deploying autonomous killing drones. I don’t think the risk of losing control over them will be persuasive enough. They’ll count on the expertise of engineers and programmers to make “absolutely sure” this can never happen, while being blinded by the technology’s potential to bring them the advantage they are looking for.
I think this reveals a more fundamental issue: our continued willingness to use violence, even proactively, to solve our perceived problems or to make ourselves feel more secure. One way or another, the technology we give birth to will only reflect who we are and how we do things. If we think war can be good, we will make machines that outstrip us in that capability in every way. If we think peace should be a priority, then we’ll make machines that think and act the same way.
The problem isn’t, and never was, the technology itself. The problem is our ideas and philosophies: the programs running our own minds. Technology just amplifies what we do as a result of our own mental programming. If technology is akin to a child of humanity, and the parents were abusive, we can expect the child to learn to be abusive as well.