Anthropic Loses Bid to Block Pentagon Blacklisting Amid Ongoing Legal Battle

Anthropic has lost a key legal attempt to temporarily block its blacklisting by the U.S. Department of Defense (DoD), after a federal appeals court in Washington, D.C. ruled in favor of the government.

The court denied Anthropic’s request to pause the Pentagon’s designation of the company as a “supply chain risk” while its broader lawsuit proceeds. In its decision, the court said the balance of interests favored national security concerns over potential financial harm to the company.

Despite the ruling, a separate decision from a San Francisco federal court last month granted Anthropic a preliminary injunction preventing enforcement of a broader ban on its Claude AI model. As a result, Anthropic can continue working with other U.S. government agencies, though it remains excluded from DoD contracts.

The Pentagon labeled Anthropic a supply chain risk in March, requiring defense contractors to certify they are not using the company’s Claude models in military-related work. This effectively blocks Anthropic from participating in defense projects while the case is ongoing.

Anthropic has argued that the designation is unconstitutional and retaliatory, claiming it violates due process. However, the appeals court found that the company’s potential harm is largely financial and that there was insufficient evidence its free speech rights had been restricted.

The dispute follows tensions between Anthropic and U.S. defense officials over how its AI technology would be used. The DoD reportedly sought broad access to Anthropic’s models for military purposes, while the company pushed for restrictions to prevent use in autonomous weapons or domestic surveillance.

The supply chain risk designation marked a sharp escalation, as the classification has historically been reserved for foreign adversaries. It came despite Anthropic previously securing a $200 million defense contract and deploying its technology across certain government systems.

Anthropic said it remains confident the courts will ultimately rule in its favor and emphasized its commitment to working with the government to ensure safe and responsible AI deployment.

The case highlights growing tensions between AI companies and governments over control, ethics, and national security—issues that are likely to shape the future of AI regulation and deployment.