Report: AI Used by US Forces to Help Identify Hostile Targets in the Middle East
The US military has confirmed its use of Artificial Intelligence (AI) technology to identify enemy targets in recent airstrikes in the Middle East.
Facts
- The US military has confirmed its usage of Artificial Intelligence (AI) technology to identify enemy targets in recent airstrikes in the Middle East.1
- The admission came from Schuyler Moore, the chief technology officer for US Central Command, who said that AI technology, particularly computer vision algorithms, has been key to carrying out more than 85 recent US airstrikes.1
- Moore said the strikes in question were conducted by US bombers and fighter aircraft against seven facilities in Iraq and Syria on Feb. 2. The Biden administration has confirmed that the strikes targeted rockets, missiles, drone storage facilities, operations centers, and other targets in retaliation for the Jan. 28 attack on a base in Jordan that killed three US service members. The US has blamed the Jan. 28 attack on Iranian-backed militias.2
- Moore said AI technology has also been used to identify rocket launchers in Yemen and surface vessels in the Red Sea, several of which were later targeted in airstrikes. Over the past year, Moore said, US forces have experimented with algorithms that can locate and identify targets using imagery from satellites and other data sources, with the technology coming into operational use following the Oct. 7 attacks in Israel.3
- The target-finding algorithms were reportedly developed under Project Maven, a 2017 Pentagon initiative focused on advancing the Department of Defense's use of AI technology for defense purposes, particularly in the US fight against the Islamic State.3
- While the US military has previously confirmed using computer-vision algorithms for military intelligence purposes, this is the first confirmation of their use to select and engage enemy targets.3
Sources: 1Business Insider, 2Bloomberg, and 3Eurasian Times.
Narratives
- Establishment-critical narrative, as provided by Public Citizen. Military use of AI technology is still very new, and it carries a huge capacity for abuse that could result in unintended crisis escalation and civilian casualties. Applications of this technology in military contexts remain largely untested and unregulated, with data manipulation such as deepfakes posing potentially catastrophic threats.
- Pro-establishment narrative, as provided by Bloomberg. While the military using AI to find enemy targets for airstrikes may sound scary, the reality is that humans are involved in every step of the process. At no point are the algorithms allowed to run unsupervised, and human operators work constantly to ensure that the technology is used safely and responsibly. While these algorithms are being used to find targets, humans are still the ones verifying the targets and deploying the weapons.