US Military Denies AI-Controlled Drone Killed Its Operator
Facts
- The US Air Force (USAF) now denies reports that the US military conducted a simulated test in which an AI-controlled drone used "highly unexpected strategies" to prevent anyone from interfering with its mission, including killing its operator.1
- The story was first published in a blog post on the Royal Aeronautical Society (RAeS) website last week, quoting Air Force Colonel Tucker "Cinco" Hamilton's remarks at the Future Combat Air & Space Capabilities Summit in London on May 22-23.2
- USAF spokesperson Ann Stefanek said Friday, "The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology." She said the colonel's comments were "taken out of context and were meant to be anecdotal."3
- At the summit, Colonel Hamilton, the USAF's chief of AI test and operations, discussed the future of AI and its potential military uses. He cautioned against over-reliance on AI technology because of its vulnerability to deception.4
- Hamilton originally said the drone began ignoring orders not to kill an identified threat and, since it knew "it got its points by killing that threat," it killed the operator. The RAeS now says the colonel misspoke and that the "rogue" AI drone simulation was "a hypothetical 'thought experiment' from outside the military."5
Sources: 1Guardian, 2Sky News, 3Business Insider, 4FOX News, and 5Ars Technica.
Narratives
- Pro-establishment narrative, as provided by Vice. The Dept. of Defense remains committed to the ethical and responsible use of AI technology. Hamilton's story is a worst-case scenario based on philosopher Nick Bostrom's "Paperclip Maximizer" thought experiment and is a test the USAF would never run in the real world. The military has run other mock missions where human operators face off against AI technology, but those, too, were all simulations.
- Establishment-critical narrative, as provided by PJ Media. While there may have been some confusion in the reporting, a very similar simulated drone attack did take place. And the fact that the Air Force tried to blur the lines of the story raises serious questions. In another report from the Future Combat Air and Space Capabilities Summit in London, we learned that even when the AI drone was trained to obey "yes" and "no" orders from the command tower, it chose to attack the command tower itself, along with its human operator, to achieve its mission. The US military and its new AI toys need to be tightly monitored.