Anthropic Sues Defense Department Over Supply-Chain Risk Designation
Key Facts
- What: Anthropic filed two complaints challenging the Pentagon’s designation of the company as a supply-chain risk.
- When: Lawsuits filed on Monday following the Pentagon’s formal designation issued last Thursday.
- Where: Complaints filed in San Francisco federal court and the U.S. Court of Appeals for the D.C. Circuit.
- Why: Anthropic objects to unrestricted military use of its Claude AI models for mass surveillance of Americans or fully autonomous lethal weapons, citing AI safety and constitutional protections.
- Impact: The designation requires government contractors to certify they do not use Anthropic’s models, threatening the company’s federal business and triggering termination of its OneGov contract by the General Services Administration.
Anthropic, the maker of the Claude AI models, sued the Department of Defense on Monday after the Pentagon labeled the company a supply-chain risk, escalating a high-stakes dispute over the use of artificial intelligence in national security, surveillance, and autonomous weapons. The AI firm filed complaints in federal court in San Francisco and the U.S. Court of Appeals for the D.C. Circuit, arguing that the designation is “unprecedented and unlawful” retaliation for its public stance on AI safety limits. Defense Secretary Pete Hegseth and President Trump have criticized Anthropic and CEO Dario Amodei as “woke” and “radical” for refusing to allow their technology to be used for mass domestic surveillance or fully autonomous weapons without human oversight.
Dispute Over AI Boundaries
The conflict centers on Anthropic’s two firm red lines: it does not want its AI systems deployed for mass surveillance of American citizens, and it believes its current technology is not ready to power fully autonomous weapons that make targeting and firing decisions without human involvement. According to the lawsuit, these positions constitute protected speech about “the limitations of its own AI services and important issues of AI safety.”
Defense Secretary Pete Hegseth has countered that the Pentagon should have access to AI systems for “any lawful purpose” and that a private contractor should not hold veto power over military judgments. The administration, including President Trump, publicly criticized Anthropic’s calls for stronger AI safety and transparency measures, labeling the company and its leadership as overly cautious or politically motivated.
A supply-chain risk designation is typically reserved for entities posing threats from foreign adversaries. Once applied, it compels any company or agency working with the Pentagon to certify that it does not use the flagged vendor’s models. While several private companies continue to work with Anthropic, the designation is expected to significantly curtail the firm’s government-related business.
Legal Arguments and Procedural Claims
In its San Francisco federal court complaint, Anthropic argues that the Constitution does not permit the government to wield its power to punish a company for protected speech. The lawsuit states the government does not have to agree with Anthropic’s views or purchase its products, but it cannot use state authority to suppress the company’s expression on critical AI policy matters.
Anthropic further contends that “no federal statute authorizes the actions taken here.” The firm claims the Defense Department issued the supply-chain risk designation without following procedures required by Congress. These procedures generally include conducting a formal risk assessment, notifying the targeted company and allowing it to respond, issuing a written national-security determination, and notifying Congress before excluding a vendor from federal supply chains.
The company also accuses the president of exceeding authority granted by Congress. After Amodei publicly stated he would not compromise on the company’s safety red lines, the administration directed every federal agency to immediately stop using Anthropic’s technology. This led the General Services Administration to terminate Anthropic’s “OneGov” contract, which had made Claude models available across all three branches of the federal government.
The lawsuit describes the actions as an attempt to “destroy the economic value created by one of the world’s fastest-growing private companies.” It warns of “immediate and irreparable harm” not only to Anthropic but to others whose speech may be chilled, to stakeholders benefiting from the company’s growth, and to the global public that deserves open debate on AI’s implications for warfare and surveillance.
Requested Relief and Company Statement
Anthropic is asking the court to immediately stay the Defense Department’s supply-chain risk designation while the case proceeds. It ultimately seeks to invalidate the designation and permanently block the government from enforcing it. The company has also filed a separate complaint in the U.S. Court of Appeals for the D.C. Circuit seeking formal review of the determination, consistent with federal procurement law.
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson said in a statement. “We will continue to pursue every path toward resolution, including dialogue with the government.”
Broader Context and Industry Implications
The lawsuit highlights growing tensions between leading AI companies and the U.S. government over the responsible development and deployment of advanced artificial intelligence. Anthropic has positioned itself as a leader in AI safety research, emphasizing constitutional AI principles and the need for careful consideration of high-stakes applications.
This dispute occurs amid rapid advancement in large language models and increasing interest from defense and intelligence communities in leveraging AI for national security purposes. Other major AI firms have taken varying approaches to government contracts, with some more willing to support military applications.
The Pentagon’s move to blacklist Anthropic through a supply-chain risk designation represents an aggressive use of procurement authority against a domestic technology provider. Legal experts will closely watch whether courts accept the administration’s national security justifications or side with Anthropic’s claims of retaliatory action and procedural violations.
Impact on Developers, Government, and the AI Industry
For developers and enterprises that rely on Anthropic’s Claude models, the designation creates immediate uncertainty around federal compliance. Government contractors must now evaluate whether continued use of Anthropic technology could jeopardize their Pentagon-related work. This may accelerate shifts toward alternative AI providers for defense-adjacent projects.
The case could have significant implications for how the U.S. government engages with the private AI sector. A ruling in favor of the Defense Department might embolden further use of supply-chain risk designations against companies expressing safety concerns. Conversely, a decision supporting Anthropic could reinforce protections for corporate speech on technology policy and establish clearer procedural safeguards before imposing such restrictions.
The AI industry is watching closely as this conflict pits national security imperatives against calls for ethical boundaries on AI use in surveillance and lethal autonomous weapons. Anthropic’s stance reflects broader debates about meaningful human control in AI-enabled military systems and the prevention of mass domestic surveillance capabilities.
What's Next
The federal courts will now consider Anthropic’s request for an immediate stay of the supply-chain risk designation. The case is likely to proceed through preliminary injunction hearings before addressing the merits of the constitutional and statutory claims. Resolution could take months or longer, depending on appeals.
Anthropic has indicated it remains open to dialogue with the government while pursuing judicial review. The outcome may influence future negotiations between AI companies and defense agencies regarding acceptable use cases and safety guardrails.
The lawsuit also raises questions about potential legislative responses. Congress may examine whether current procurement and national security statutes provide adequate procedures and oversight when the government seeks to restrict domestic technology providers on policy grounds.
This case could set important precedents for the relationship between frontier AI labs and the U.S. government at a time when AI capabilities are advancing rapidly and military interest in these technologies continues to grow.

