Pentagon Uses Palantir and Anthropic AI to Strike 1,000 Iran Targets in 24 Hours
The U.S. military leveraged AI tools from Palantir and Anthropic to identify and prioritize roughly 1,000 targets in Iran within the first 24 hours of its military campaign, according to multiple reports. The system combined Palantir’s Maven platform with Anthropic’s Claude large language model, enabling rapid target generation; approximately 900 missiles were reportedly launched in the initial 12 hours. The operation, which came amid escalating conflict with Iran, underscores the Pentagon’s growing reliance on commercial AI for time-sensitive combat decisions.
The integration paired Palantir’s established Maven system — originally developed for analyzing drone footage and intelligence data — with Anthropic’s Claude AI. According to reporting from The Washington Post and Moneycontrol, the AI-assisted workflow helped the Pentagon process vast amounts of intelligence to generate and rank potential targets at a speed not previously possible with traditional methods.
Reports indicate the AI tools contributed to strikes on high-value sites, including a compound belonging to Iran’s Supreme Leader Ayatollah Ali Khamenei; multiple outlets linked that strike to his death. The Guardian, cited in several stories, reported that the system identified about 1,000 possible targets within 24 hours, demonstrating a significant acceleration of military decision-making cycles.
This development comes despite a reported feud between the Pentagon and Anthropic. Anthropic had previously sought restrictions on the use of its technology for fully autonomous military targeting, leading to temporary tensions. However, Claude is now being used through the company’s partnership with Palantir, as noted by Responsible Statecraft. Neither Palantir nor Anthropic has issued official statements confirming the specifics of the deployment in the Iran campaign.
Technical Integration and Military Context
Palantir’s Maven system has been a cornerstone of the Department of Defense’s AI initiatives for several years, evolving from Project Maven, the Pentagon’s first major AI effort focused on computer vision for intelligence, surveillance and reconnaissance. By embedding Anthropic’s Claude, the platform gains enhanced natural language processing and reasoning capabilities that can synthesize disparate intelligence sources — satellite imagery, signals intelligence, open-source data and human reports — into prioritized target lists.
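The actual Maven–Claude integration is not public, so purely as a notional illustration of what “synthesizing disparate intelligence sources into prioritized target lists” might mean mechanically, a toy multi-source scoring-and-ranking step could look like the sketch below. All names, source categories, fields and weights here are invented for illustration and do not reflect any real system:

```python
# Hypothetical illustration only: fuse per-source confidence scores from
# notional intelligence feeds into a single ranking. Nothing here is based
# on the actual Maven or Claude internals, which remain undisclosed.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    # Per-source confidence scores in [0, 1]; a candidate may be missing
    # some sources (e.g. imagery seen but no signals intercept).
    scores: dict = field(default_factory=dict)

def fuse(cand: Candidate, weights: dict) -> float:
    """Weighted average over whichever sources actually reported on this candidate."""
    seen = [s for s in weights if s in cand.scores]
    if not seen:
        return 0.0
    total_w = sum(weights[s] for s in seen)
    return sum(weights[s] * cand.scores[s] for s in seen) / total_w

def rank(candidates: list, weights: dict) -> list:
    """Return candidates sorted from highest to lowest fused score."""
    return sorted(candidates, key=lambda c: fuse(c, weights), reverse=True)

# Invented weights mirroring the four source types named in the article:
# imagery, signals intelligence, open-source data, human reporting.
weights = {"imagery": 0.4, "sigint": 0.3, "osint": 0.15, "humint": 0.15}
cands = [
    Candidate("site-A", {"imagery": 0.9, "sigint": 0.8}),
    Candidate("site-B", {"osint": 0.6}),
    Candidate("site-C", {"imagery": 0.7, "sigint": 0.9, "humint": 0.8}),
]
print([c.name for c in rank(cands, weights)])  # highest fused score first
```

In a real LLM-in-the-loop workflow, the language model’s contribution would presumably come upstream of a step like this — extracting and normalizing the per-source assessments from unstructured reports — but, again, the published reporting does not describe the pipeline at this level of detail.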
The reported capability allowed U.S. forces to compress what traditionally might have taken days or weeks of manual analysis into a 24-hour window. India Today reported that around 900 missiles were launched in the first 12 hours alone, suggesting the AI system played a role in both target development and operational tempo.
This is not the first time commercial AI has appeared in U.S. military operations, but the scale and speed claimed in the Iran strikes represent a notable milestone. The involvement of a frontier AI model from Anthropic, one of the leading U.S. AI labs alongside OpenAI and Google DeepMind, underscores how quickly advanced commercial technology is moving into operational use.
Impact on Defense AI Adoption
For defense contractors and AI companies, the episode illustrates both opportunity and risk. Palantir has long positioned itself as a key partner to the Pentagon, and its integration with Anthropic’s technology expands its offerings in the growing market for AI-enabled command and control systems. Anthropic, founded by former OpenAI executives and known for its focus on AI safety and its “Constitutional AI” training approach, now finds its flagship Claude model directly tied to high-profile combat operations.
The reports also highlight ongoing debates about appropriate use of AI in lethal targeting. Anthropic’s earlier demands for restrictions on fully autonomous targeting reflect broader industry discussions about human oversight, explainability and accountability in AI-assisted weapons systems. Critics, including those cited in Responsible Statecraft, have raised concerns about the speed of AI-driven decisions potentially reducing time for human judgment.
For the broader AI industry, the story reinforces the reality that leading models are no longer confined to civilian applications. As competition intensifies among U.S. AI firms, partnerships with defense contractors like Palantir offer both substantial revenue potential and reputational considerations.
What's Next
The Pentagon has not released an official timeline or detailed technical report on the AI system’s performance in the Iran operation. Further information may emerge through congressional oversight, after-action reviews or additional investigative reporting.
Industry observers expect continued investment in AI for targeting and intelligence analysis, with the Department of Defense likely to expand programs that integrate large language models with existing platforms like Maven. Future developments could include more sophisticated multimodal AI systems capable of real-time reasoning across video, text and sensor data.
Questions remain about the exact level of autonomy granted to the AI system, the degree of human review applied to generated targets, and how performance will be evaluated. Both Palantir and Anthropic are expected to face increased scrutiny regarding their defense partnerships in the coming months.
Sources
- How Palantir and Anthropic AI helped the US hit 1,000 Iran targets in 24 hours
- Pentagon leverages AI in Iran strikes amid feud with Anthropic
- Iran war: 1000 targets in 24 hours, how US military used Anthropic AI tool Claude embedded in Maven
- US used 'Claude' to strike over 1000 targets in first 24 hours of war
- US-Israel war on Iran: How Anthropic’s Claude AI helped US strike 1,000 targets in Iran within 24 hours of war