Anthropic CEO Dario Amodei Calls OpenAI Military Claims Lies

Anthropic CEO Dario Amodei has accused OpenAI of dishonesty in its description of a military-related deal, calling the company's messaging "straight up lies," according to a report.

The comments, attributed to Amodei, were reported by TechCrunch. They come amid a widening dispute over how major artificial intelligence companies engage with the U.S. defense establishment and how those relationships are communicated to the public.

Anthropic, the maker of the Claude AI assistant, has been navigating its own turbulence tied to defense work. In recent days, multiple outlets have reported growing fallout involving Claude’s use in defense settings, including decisions by some defense tech companies to drop Claude after the Pentagon labeled Anthropic as a supply-chain risk, according to CNBC.

Separately, TechCrunch reported that tech workers have urged the Department of Defense and Congress to withdraw Anthropic’s designation as a supply-chain risk. That label can carry significant consequences in the defense sector, where contractors and subcontractors often rely on approved tools and vendors to meet security and procurement requirements.

The dispute between AI rivals matters because it underscores how quickly the industry’s biggest players are being pulled into national security and military procurement—and how sensitive public messaging can become when taxpayer-funded contracts and battlefield-adjacent applications are involved.

It also highlights the reputational and commercial stakes for companies that sell general-purpose AI systems while trying to set and enforce boundaries on high-risk use cases. When executive leaders publicly challenge one another’s credibility around military relationships, the friction can spill into customer decisions, developer sentiment, and policy scrutiny.

The controversy is unfolding at a moment when Anthropic is also dealing with product reliability concerns. AOL.com reported a major outage that left Claude unavailable, a reminder that the company is simultaneously managing high-profile governance issues and day-to-day operational performance as adoption grows.

Meanwhile, Fortune reported that Anthropic's Claude overtook ChatGPT in the App Store as users boycotted OpenAI over a Pentagon contract reported to be worth $200 million. That episode reflects how defense-linked deals can become a flashpoint not only for policymakers and contractors, but also for consumer users and the broader tech workforce.

What happens next is likely to involve continued pressure from employees, customers, and government stakeholders seeking clarity on the terms, safeguards, and communications surrounding defense-related AI work. The Pentagon’s supply-chain risk label for Anthropic, and calls to reverse it, could also shape near-term access to defense customers and partnerships tied to Claude.

For OpenAI and Anthropic, the episode adds to a fast-moving competition where technical capability is only one part of the battle; trust, transparency, and the handling of military engagement are increasingly central to how AI leaders are judged.

As the defense sector’s reliance on commercial AI tools expands, disputes like this are poised to draw sharper scrutiny from Washington, contractors, and the public alike.