Pentagon Flags Anthropic as Supply Chain Risk Despite Reported Claude Use in Iran Operations

The Pentagon has formally notified Anthropic that it considers the AI company a supply chain risk, opening a new fault line between the U.S. defense establishment and a leading developer of generative artificial intelligence, even as Anthropic’s Claude has reportedly been used in military operations involving Iran.
The dispute involves Anthropic, maker of the Claude chatbot and related AI models, and the U.S. Department of Defense. Recent CNBC coverage described Anthropic as having been “officially told” by the DOD that it is a supply chain risk, while separate Reuters reporting said defense contractors, including Lockheed Martin, have been removing Anthropic’s AI from their systems following a Trump-era ban.
The dispute has immediate implications because Claude has been tied in public reporting to U.S. military targeting workflows. A widely circulated account said the military used advanced AI during operations involving Iran, pairing Anthropic’s Claude with the military’s Maven Smart System to help suggest targets and provide location coordinates. That account has not been accompanied by official documentation.
Beyond the immediate question of whether specific tools can continue to be used, the Pentagon’s supply chain determination affects how contractors procure, integrate, and sustain software and AI systems. When the Defense Department flags a company as a supply chain risk, defense primes and subcontractors typically reassess deployments to avoid compliance and security issues, particularly in systems that touch sensitive networks or mission planning.
This matters for operational continuity. Reuters also reported that AI contract restrictions could threaten military missions, and multiple outlets have described contractors moving to remove Claude from projects linked to defense work. If widely used AI capabilities are pulled from existing stacks, programs may face delays as teams seek replacements, perform new testing, and revise internal approvals for model access and data handling.
The situation also raises governance questions about how the U.S. government evaluates and communicates risk in fast-moving AI deployments. CNBC reported that “unresolved questions” hang over what it called the Anthropic–Pentagon fracas, describing the situation as puzzling. That uncertainty can ripple through procurement as vendors and program managers try to understand what is permitted, what is prohibited, and which standards will apply going forward.
What happens next will largely depend on how the Defense Department’s designation is implemented across contracts and whether contractors standardize on alternative AI providers. Contractors that have begun removing Anthropic tools may accelerate those efforts, while others could pause new integrations until guidance is clearer. At the same time, the Pentagon may face pressure to ensure that restrictions do not undercut mission needs if AI-enabled systems have become embedded in operational workflows.
For now, the Pentagon’s message to Anthropic is clear: even with Claude’s reported use in sensitive contexts, the company is being treated as a supply chain risk, an assessment that could reshape which AI systems the U.S. defense sector is willing, or able, to rely on.
