Anthropic CEO Vows Court Fight Over Trump Supply Chain Label

Anthropic CEO Dario Amodei said the artificial intelligence company has “no choice” but to challenge the Trump administration’s designation of Anthropic as a supply chain risk in court, setting up a legal fight with the federal government that could affect the company’s ability to work with the Pentagon.
The announcement puts one of the best-known U.S. AI developers into direct conflict with the Defense Department over whether its technology can be used in military settings and, more broadly, over how national security agencies will vet and restrict emerging AI suppliers.
Anthropic, maker of the Claude AI system, has reportedly been pitching how easily customers can switch to its tools. The Pentagon's designation carries serious consequences, however, potentially limiting federal adoption and complicating partnerships tied to defense work and other government contracts.
At the center of the dispute is the Trump administration's decision to treat Anthropic as a supply chain risk. Amodei's statement frames the company's legal challenge as a necessity rather than a strategic choice, suggesting Anthropic views the designation as a threat to its ability to operate in key markets and to participate in sensitive government programs.
The development matters because the federal government has become a major customer and regulator for advanced AI. A supply chain risk label, especially in the defense context, can shape which companies are eligible to provide software and services, influence procurement decisions across agencies, and signal to the broader market that a vendor’s products may be restricted or scrutinized.
The dispute also lands amid heightened attention on the Pentagon’s reliance on advanced AI systems. One related report described the U.S. military using advanced AI to strike 1,000 targets in Iran within 24 hours, underscoring how deeply AI tools can be woven into operational planning and targeting workflows. As a result, decisions to sever ties with an AI provider, or to restrict one, can have ripple effects for military readiness, contracting, and the pace of AI adoption.
Anthropic’s court challenge will test how such designations are applied and contested, and how much discretion the government has in labeling a technology supplier as a risk. It also comes as other outlets have highlighted an emerging “AI battle” around who controls the most powerful military technology, pointing to wider competition among companies to become trusted defense partners.
Anthropic is now expected to pursue its challenge in court, seeking to overturn or limit the Pentagon's supply chain risk designation. The litigation will likely focus on the legal basis for the designation and the process used to apply it, while the government defends its authority to restrict suppliers it deems risky.
For Anthropic, the outcome could determine its access to federal work and its standing in the defense technology ecosystem. For the Pentagon, the case could shape how aggressively it can police AI vendors at a time when military demand for advanced AI is growing.
The court fight now looms as a high-stakes test of government power over the AI supply chain, and of a company's ability to contest restrictions imposed in the name of national security.
