05/10/2018 / By Isabelle Z.
A recent announcement from the U.S. Army about new drone technology is adding a new dimension to the ongoing ethical debate about drones. The Army said it is developing drones that can identify and target people and vehicles using artificial intelligence, taking human involvement entirely out of the equation.
Military drones are currently controlled by people, but this move toward militarizing artificial intelligence does not bode well for society. Security expert Peter Lee points out in The Conversation that warfare could shift from fighting to outright extermination, and that it could also turn the scientists, engineers and firms building that AI into legitimate military targets.
International humanitarian law holds that facilities considered "dual-use," because they develop products for military as well as civilian applications, can be attacked under certain circumstances. By extension, if AI software produced by Google were used in an autonomous drone for the U.S. military, Google could be subject to enemy attack.
In fact, the military's use of Google's TensorFlow AI software in its drone program has already led to internal divisions at the company, with some employees angered when they learned their work was being used for these purposes. Reports say the software is already used in the field to survey areas controlled by ISIS in the Middle East, although Google is quick to caution that it is only being put to "non-offensive uses," such as flagging images for review by human analysts.
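To picture what "flagging images for review by human analysts" means in practice, here is a minimal, purely illustrative sketch of how an image classifier could sort footage frames: anything the model is not confident about goes into a human review queue instead of being acted on. The model choice, threshold value, and function names are assumptions for illustration only, not a description of Google's or the military's actual system.

```python
# Illustrative sketch only: a classifier that flags frames for human review.
# The model, threshold, and helper names are hypothetical.
import numpy as np
import tensorflow as tf

# A stock pre-trained classifier stands in for whatever model is deployed.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def flag_for_review(image_path: str, confidence_threshold: float = 0.6):
    """Score one frame; route it to a human analyst if the model is unsure."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])
    probs = model.predict(x, verbose=0)[0]
    top_prob = float(np.max(probs))

    # The key design point: the software never "decides" anything on its own.
    # It only sorts frames into "confidently labeled" vs. "needs a human."
    if top_prob < confidence_threshold:
        return {"path": image_path, "status": "needs human review",
                "confidence": top_prob}

    label = tf.keras.applications.mobilenet_v2.decode_predictions(
        probs[np.newaxis, ...], top=1)[0][0][1]
    return {"path": image_path, "status": "auto-labeled",
            "label": label, "confidence": top_prob}
```

The design choice worth noticing is that the human stays in the loop by construction: the code can only ever recommend, never act. The debate described in this article is about removing exactly that constraint.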
While drone use in general raises serious ethical questions, many operators take their responsibility very seriously. When a pilot fires a missile or drops a bomb, a human operator guides it to the target with a laser. One MQ-9 Reaper operator stated that he would rather let an important target get away than risk killing civilians. Would a machine be capable of that sort of reasoning?
Drone operators are vulnerable to post-traumatic stress disorder and mental trauma just like soldiers in the field. Their killing may be remote and more detached than looking someone in the eyes and shooting them, but a life is taken nevertheless. Some proponents of autonomous killing drones cite this psychological toll as an argument in favor of the technology, yet the military personnel and intelligence specialists who analyze graphic drone strike footage can suffer the same damage; removing the person who pulls the trigger does not remove the trauma.
Lee said that he has spoken with more than 100 Reaper crew members, and every one of them felt that a human should be the one pulling the final trigger.
There is also the delicate matter of knowing when a self-learning algorithm has reached the point where it can be trusted to make its own decisions. And how many civilian deaths will be considered acceptable while the technology is refined to that point?
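Deciding "when the algorithm can be trusted" ultimately means someone picking an error rate they are willing to accept. The toy check below makes that choice explicit; the labels, predictions, and tolerance figure are invented for illustration and imply no real standard.

```python
# Hypothetical acceptance check: before an algorithm may act on its own,
# measure how often it would have been wrong on human-labeled test data.
def false_positive_rate(predictions, ground_truth):
    """Fraction of model-flagged targets that humans say were not targets."""
    false_positives = sum(1 for p, t in zip(predictions, ground_truth)
                          if p == "target" and t != "target")
    flagged = sum(1 for p in predictions if p == "target")
    return false_positives / flagged if flagged else 0.0

# The uncomfortable question the article raises, in code form: what number
# goes here? Any nonzero tolerance is a statement about acceptable civilian risk.
ACCEPTABLE_FALSE_POSITIVE_RATE = 0.0

predictions  = ["target", "no-target", "target", "no-target"]
ground_truth = ["target", "no-target", "no-target", "no-target"]

rate = false_positive_rate(predictions, ground_truth)
print(f"false positive rate: {rate:.2%}")
print("trusted to act autonomously" if rate <= ACCEPTABLE_FALSE_POSITIVE_RATE
      else "still requires a human decision")
```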
If you’ve been following the development of autonomous vehicles in the news, you probably have another question: What else could go wrong? Uber and Tesla’s self-driving car experiments have resulted in fatalities, and it’s not hard to imagine that computer bugs in autonomous drones could lead to an even higher body count.
There is a lot to weigh here, and AI takes the already questionable practice of using military drones to a new level of controversy. Will the move shift warfare from fighting toward outright extermination? If a drone is targeting your area, who do you want making the final decision about whether to pull the trigger and take your life: a machine trained to kill by algorithms, or a human being with a conscience?
Tagged Under: artificial intelligence, autonomous tech, battlefield technology, combat drones, depopulation, drones, future of war, future tech, military, military drones, military technology, Reaper, surveillance, TensorFlow