What AI Models for War Actually Look Like
Smack Technologies is building specialized AI models for battlefield planning and execution, claiming they will soon outperform Anthropic’s Claude in military operations, even as the latter debates ethical limits on such uses.
While major AI labs like Anthropic engage in internal discussions about restricting military applications of their technology, startup Smack Technologies is taking a different path. The company announced a $32 million funding round this week and is actively training models designed specifically for planning and executing military operations, according to a WIRED report.
The development highlights a growing divide in the AI industry over the use of frontier models in warfare. Smack Technologies positions its systems as capable of surpassing Anthropic’s Claude in operational military contexts, focusing on practical planning tasks rather than broad ethical constraints.
Smack’s Military-Focused AI Push
Smack Technologies is developing AI models explicitly tailored for defense and battlefield scenarios. According to the WIRED article, these models are being trained to handle complex operational planning — tasks that go beyond general intelligence benchmarks and into domain-specific military decision support.
The company’s $32 million raise signals strong investor interest in defense-oriented AI at a time when many established players remain cautious. Smack claims its forthcoming models will exceed Claude’s performance specifically in planning and executing military operations, though no independent verification of these claims is publicly available.
This approach contrasts sharply with Anthropic, which has publicly debated and, in some cases, implemented limits on military uses of its Claude models. The divergence illustrates two competing philosophies in the AI sector: one prioritizing rapid deployment into high-stakes government and defense applications, and the other emphasizing safety, ethics, and usage restrictions.
Broader Context of AI in Military Operations
The integration of large language models into military workflows is already underway at various commands. U.S. Central Command (CENTCOM), for example, has used Project Maven — the Pentagon AI program announced in 2017 — to incorporate large language models, including Anthropic’s Claude and, more recently, models from OpenAI and xAI, primarily for summarizing raw intelligence data.
Concerns about AI in warfare extend beyond planning tools, however. Separate reports and simulations have highlighted risks, including instances where leading models from OpenAI, Anthropic, and Google reportedly recommended nuclear options in a large share of war-game scenarios. These findings, discussed in academic and online forums, underscore the stakes of deploying advanced AI in conflict environments.
Smack Technologies’ explicit focus on military superiority in planning capabilities places it at the center of these debates. The company’s models are reportedly being optimized for operational effectiveness rather than general-purpose conversational abilities.
Impact on Developers, Defense Contractors, and the AI Industry
For developers and startups, Smack’s announcement validates a specialized, domain-focused approach to building AI. Rather than competing directly on general benchmarks, companies can target high-value verticals such as defense, where performance on specific tasks may command significant funding and contracts.
Defense organizations stand to gain from tools that can accelerate operational planning, potentially reducing human cognitive load in complex battlefield environments. However, this also raises questions about accountability, oversight, and the potential for escalation when AI systems play larger roles in military decision-making.
Within the broader AI industry, the news intensifies the split between “frontier labs” focused on safety research and commercial defense AI startups. Anthropic’s ongoing internal debates about military use reflect wider industry conversations about dual-use technology and export controls.
What’s Next
Smack Technologies has not publicly disclosed a specific timeline for releasing its military planning models beyond stating they will “soon” surpass Claude in relevant capabilities. The company is expected to provide more technical details as it progresses through its post-funding development phase.
Industry observers anticipate increased competition in defense AI, with both established players and new entrants seeking government partnerships. How Anthropic and other labs respond — whether by tightening restrictions or adapting their own offerings — will likely shape the competitive landscape in the coming months.
The ethical and policy implications of purpose-built military AI models are also likely to draw attention from regulators and advocacy groups as capabilities advance.