Report: Pentagon Boosting Arsenal of AI-Powered Autonomous Vehicles
Facts
- After testing small, artificial intelligence (AI)-powered autonomous surveillance drones in special mission operations and the war in Ukraine, the Pentagon is reportedly seeking to field several thousand AI-enabled autonomous vehicles by 2026 to stay on par with the PRC.1
- This comes as part of the ambitious Replicator initiative, which is expected to 'galvanize progress' and accelerate hard decisions on which AI technologies are mature and trustworthy enough to deploy — including on weaponized systems. The Pentagon has over 800 unclassified AI projects, many of which are still in testing.2
- The US is expected to have fully autonomous lethal weapons within the next few years, but it's unclear whether any system slated for deployment is being formally assessed as required by a 2012 directive. Several other countries are working on the same technology; none of China, Russia, Iran, India, or Pakistan has signed a US-initiated pledge to use military AI responsibly.1
- Meanwhile, Forbes has reported that while the US still mulls how to use AI-driven weapons, Ukraine has already deployed them against Russia. The Saker Scout drone can find, identify, and attack 64 kinds of Russian military equipment with minimal human involvement.3
- Meanwhile, retired Air Force Gen. Jack Shanahan — the inaugural Pentagon AI chief — argues that the only fully autonomous weapons systems that can be trusted are wholly defensive, such as the Phalanx anti-missile system on ships, citing worries about failures and killing noncombatants or friendly forces.2
- In mid-November, the US Department of Defense released a new 'incident repository' to catalog problems its officials encounter with artificial intelligence, part of the broader 'Responsible Artificial Intelligence toolkit' and following the establishment of 'Task Force Lima' to explore generative AI.4
Sources: 1Associated Press, 2Washington Post, 3The Western Journal, and 4DefenseScoop.
Narratives
- Establishment-critical narrative, as provided by the ICRC. As the military use of AI-powered autonomous drones inevitably expands, governments across the globe must promise to use them only in a limited capacity. These weapons should be prohibited from targeting areas containing humans — ideally, their scope would be narrowed to military objects alone. This technology makes on-the-battlefield decisions without human input, which means the humans who build these systems can and must program them to steer clear of people in the first place.
- Pro-establishment narrative, as provided by Business Insider. While it's understandable to question the legality and morality of autonomous weapon systems, US adversaries like China won't care about such debates. Western countries — with the goal of defending freedom and peace — need to develop these weapons responsibly, faster, and in larger quantities if they hope to deter enemy regimes. These next-generation weapons are already being used in Ukraine, so it's best if democratic allies shape their global use before autocrats do.