AI has entered the war room, and it’s not going anywhere anytime soon, according to experts.

Despite President Donald Trump telling federal agencies and military contractors to cease business with Anthropic, the U.S. military reportedly used the company’s AI model, Claude, in its attack on Iran, according to The Wall Street Journal.

Now, some experts are raising concerns about the use of AI in war operations. “The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought,” Dr. Craig Jones, author of The War Lawyers: U.S., Israel and the Spaces of Targeting, which examines the role of military lawyers in modern war, told The Guardian.

In a conversation with Fortune, Jones, a lecturer on war and conflict at Newcastle University, said AI has vastly accelerated the “kill chain,” compressing the time from initial target identification to final destruction. He said the U.S.-Israel strikes on Iran, which resulted in the death of Ayatollah Ali Khamenei, might not have happened absent AI.

“It would have been impossible, or almost impossible, to do in that way,” Jones told Fortune. “The speed it was carried out, and the magnitude and the volume of the strikes, I think are AI-enabled.”

The Pentagon has enlisted the help of AI companies to speed up and enhance war planning, entering a partnership with Anthropic in 2024 that came crumbling down last week over disagreements about use of the company’s AI model, Claude. But OpenAI quickly inked a deal with the Pentagon, and Elon Musk’s xAI reached a deal to put its own model, Grok, into classified systems. The U.S. Army also uses data-mining firm Palantir’s software for AI-enabled insights for decision-making purposes.

AI on the battlefield

Jones said the U.S. Air Force has used the “speed of thought” as a benchmark for the pace of decision-making for years. He said the time elapsed from collecting intelligence, such as aerial reconnaissance, to executing a bombing mission could take up to six months during WWII and the Vietnam War. AI has significantly compressed that timeline.

The key role of AI tools in the war room is to quickly analyze vast amounts of data. “We’re talking terabytes and terabytes and terabytes of data,” Jones said, “everything from aerial imagery, human intelligence, internet intelligence, mobile phone tracking, anything and everything.”

Dr. Amir Husain, co-author of Hyperwar: Conflict and Competition in the AI Century, said that AI is being used to compress the U.S. military’s decision-making framework, known as the OODA loop—an acronym for observe, orient, decide, and act. He said AI is already playing a significant role in observation, or in interpreting satellite and electronic data, tactical-level decision-making, and the “act” phase, specifically through autonomous drones that must operate without human guidance when signals are jammed. Some of those drones are actually copycats of Iran’s own autonomous Shahed drones.

AI has also appeared on other battlefields. Israel reportedly used AI to identify Hamas targets during the Israel-Hamas war. And autonomous drones are on the frontlines in the Russia-Ukraine war, with both Russia and Ukraine employing some variation of autonomous technology.

Multiplying risks

However, Jones flagged a number of concerns around AI-enabled warfare. “The problem when you add AI to that is you multiply, by orders of magnitude I would argue, the degrees of error,” Jones said.

To be sure, Jones said, human error exists with or without AI technology, citing the 2003 U.S. invasion of Iraq as a conflict built upon flawed intelligence gathering. But he said AI could exacerbate such mistakes given the sheer volume of data the technology analyzes.

AI warfare also raises a string of ethical questions, chiefly around accountability, an area where Husain said the Geneva Conventions and the laws of armed conflict already require states to comply. With AI blurring the lines between machine and human-level decision-making, he said the international community must ensure human responsibility is assigned to all actions on the battlefield.

“The laws of armed conflict require us to blame the person,” Husain said. “The person has to be accountable no matter what level of automation is used in the battlefield.”
