AI-Killing Time?

Could the U.S. attack on Iran have resulted in the first major AI war crime?

06.05.2026

A thousand targets in 24 hours! That was the report, on March 4, from what Jeff Bezos's latest purge1 has left of the Washington Post (WaPo) editorial staff on the enormous military success of AI systems.2 The attack on Iran represents the “first major war operation” in which the Pentagon is making comprehensive use of AI-supported platforms, according to WaPo. Two interlocking systems are being deployed: Palantir provides the Maven Smart System, which can evaluate an “astonishing amount of classified data from satellites, surveillance, and other intelligence sources” in real time to identify and prioritize targets, together with the AI tool Claude from Anthropic.

Maven and Claude proposed “hundreds of targets” within a very short time during the planning phase of the war, WaPo explained, citing insider information, with the AI-powered information systems also delivering “precise coordinates.” The weeks-long planning time of a military campaign had been compressed into a “real-time operation.” The Maven system, coupled with Claude, has been integrated into the Pentagon’s military machinery since 2024, and the Trump administration has rapidly expanded its use: around 20,000 U.S. military personnel now have access to it. Besides target identification, it also serves logistics monitoring and intelligence analysis.

But it is not only the shortening of the planning time that is important. Reaction time during the war is also a decisive factor. The Pentagon’s complex AI system allows the military machine of the United States to respond very quickly to developments. The faster the military can respond to the localization and precise identification of a potential target with a correspondingly calibrated strike, the more efficiently the opponent’s military potential can be reduced. The entire decision chain can be enormously shortened; one could speak of a “kill time” that, through the use of artificial intelligence, can be massively reduced during a comprehensive bombardment with hundreds of targets in Iran.

The AI-supported real-time war seems almost within reach. Maven represents a “paradigm shift,” a security expert told WaPo, because AI now “enables the U.S. military to develop target packages at machine speed rather than human speed.” However, there are also “drawbacks,” since the AI makes mistakes, which is why “we need humans who check the output of generative AI when it comes to questions of life and death.”

Mass-murderous hallucination?

A thousand targets in 24 hours. And then there are the bodies of more than 100 children in the bombed girls’ school in Minab in southern Iran,3 killed in an airstrike on Saturday, February 28. February 28 is precisely the first day of the war on which the Pentagon’s AI systems were able to deliver about a thousand targets to the U.S. war machine within a very short time, as WaPo proudly emphasized. The school was located in the immediate vicinity of a base of the Iranian Revolutionary Guards, separated from it only by a fence. But in images from commercial satellites that have been evaluated by U.S. media,4 it is clearly visible that this was no “collateral damage.” The school building was hit directly and precisely. In total, seven impacts can be identified on the complex. A clinic was also hit, likewise separated from the military base grounds only by a wall.

According to media assessments, the cause of this war crime could lie in “outdated data.” The entire area had previously been used by the Iranian Revolutionary Guards; the school was established and fenced off “between 2013 and 2016,” the clinic “between 2022 and 2024.” The U.S. military’s target designation would thus have been based on badly outdated data.

According to the aforementioned WaPo report, it is also clear “who” was most likely responsible: Maven, together with Claude, handled the target designation, and two explanations come into question. First, the Pentagon may indeed have been working with outdated material more than ten years old. Or, second, the designation of the school as a target was one of those “hallucinations” to which all large language models (LLMs) like Claude inevitably tend. All LLMs operate with statistical probabilities ingrained over the course of their training; in effect they are auto-completion systems running on enormous computing capacity. And how likely is it that a girls’ school sits right next to a naval base of the Iranian Revolutionary Guard?
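To make that mechanism concrete, here is a minimal, purely illustrative sketch of greedy next-token selection in Python. The context string, the candidate labels, and the probabilities are invented assumptions for the sake of the example; they do not come from the WaPo report or from any real model. The sketch only shows how a statistical auto-completion picks the most probable continuation rather than the true one.

# Minimal sketch of greedy next-token selection (Python).
# The context, labels, and probabilities below are invented for illustration;
# they are not taken from the WaPo report or from any real model.

context = "structure adjacent to Revolutionary Guard naval base:"

# Toy distribution the model has "learned" for this context: statistically,
# buildings next to a military base are almost always military themselves.
next_token_probs = {
    "military facility": 0.92,  # statistically likely continuation
    "warehouse": 0.07,
    "girls' school": 0.01,      # true on the ground, but statistically improbable
}

def greedy_decode(probs):
    """Return the single most probable continuation, as greedy decoding does."""
    return max(probs, key=probs.get)

print(context, greedy_decode(next_token_probs))
# Output: the statistically plausible label wins, regardless of reality.

The point of the sketch is only that nothing in the decoding step consults the real world; the model maximizes plausibility given its training data.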

Regardless of the concrete cause of the error, this incident casts a revealing light on the human verification of AI-generated targets that Pentagon insiders mentioned to WaPo. The pressure to deliver as many targets as possible, as quickly as possible, in order to achieve the desired “shock and awe” effect at the start of the war must have been enormous; the review is then likely to have been correspondingly superficial. Especially since Pete Hegseth, the current head of the Pentagon, is an outright fascist (similar to Stephen Miller) who spouts rhetoric aimed at dehumanization5 and who deliberately undermines the Pentagon’s safety measures, rules of engagement, and protocols.6

The conflict between the Pentagon and Anthropic, the maker of Claude, escalated only a few days before the start of the war. Fully autonomous combat systems and mass surveillance of U.S. citizens formed the official points of dispute. It is also quite possible that the AI startup realized what the system would be misused for, even though for now a human still pulls the trigger. Formally, the AI only proposes the targets to be attacked; a human must (still) press the button. This formality is also the loophole that enables the U.S. war machine to use AI systems for target designation. But fully autonomous Terminator systems are only a matter of time. The Pentagon’s designation of Anthropic as a “supply chain risk” and its immediate conclusion of contracts with OpenAI and Musk’s xAI (maker of Grok) already illustrate that the last obstacles here will fall quickly as soon as the technical prerequisites for autonomous AI weapons systems are in place.


1  https://www.theguardian.com/media/2026/feb/04/washington-post-layoffs

2  https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/

3  https://www.theguardian.com/global-development/2026/mar/03/minab-school-bombing-how-the-worst-mass-casualty-event-of-the-iran-war-unfolded-a-visual-guide

4  https://www.npr.org/2026/03/04/nx-s1-5735801/satellite-imagery-shows-strike-that-destroyed-iranian-school-was-more-extensive-than-first-reported

5  https://x.com/Acyn/status/2029182895013916898

6  https://x.com/Acyn/status/2028459380132446599
