How the Anthropic-Pentagon dispute – and the transatlantic pressures that surround it – expose a political gap in Europe's approach to military AI governance.
In early January 2026, US special forces captured Venezuelan President Nicolás Maduro. Just weeks later, the United States and Israel launched strikes on Iran, reportedly hitting over 1,000 targets in the first 24 hours alone. Both operations relied on Claude, the AI chatbot made by Anthropic, which is embedded in the US military’s Maven Smart System to process intelligence data in support of operations. Claude suggested target coordinates and prioritized strikes in real time, turning weeks of battle planning into something that happened at machine speed.
At the same time, Anthropic and the Pentagon were locked in a very public fight. The military wanted Anthropic to sign a contract permitting Claude to be used for “any lawful use,” while Anthropic’s CEO Dario Amodei insisted on limits against mass domestic surveillance and fully autonomous weapons. The Pentagon responded by formally classifying Anthropic as a “supply chain risk,” a designation that effectively blocks government contractors from working with the company and carries serious financial and reputational consequences. The label had previously been reserved for national security concerns over foreign-owned companies, making its use against a US-based firm highly unusual. Washington policy experts widely reacted with shock, and a number of companies lent support to Anthropic, framing the Pentagon’s attempt to coerce an American company as overreach. OpenAI stepped in as a replacement, agreeing to allow the military “any lawful use” of its products. On March 26, a federal judge temporarily blocked Anthropic’s designation as a “supply chain risk” pending litigation, finding that the government had taken retaliatory action that likely violated the law.
This incident showcased the contradictions and dangers of AI use in national security and military settings. The tool the Pentagon attempted to formally ban was simultaneously being used to run an active military campaign, and the company’s contractual red lines were among the only limits on how the military could deploy Claude. The dispute has been widely read as a story about American governance failures: some within the US have argued that it exposes a gap in US law when it comes to ensuring the military’s safe use of AI. But it should also prompt hard questions in Brussels and other European capitals. At a time when Europe appears increasingly responsible for its own defense and is making major investments in military technology, does Europe actually have a better answer on how to integrate AI into its militaries without sacrificing its values?
The promise and limits of the EU’s risk-based framework
The EU’s answer to the risks of AI was the AI Act, alongside the General Data Protection Regulation (GDPR). By current global standards, this is one of the most ambitious attempts any jurisdiction has made to put democratic values at the center of AI governance. Implementation of the AI Act, however, has been slowed by ongoing political negotiations, with processes in place to delay certain obligations and simplify compliance mechanisms. The deeper problem is that the framework quietly does not cover the military use of AI at all. The exemption leaves EU-funded defense entirely outside its risk framework, with no mechanism to address what happens when those systems migrate into civilian life. The AI Act also generally treats chatbots as low risk despite their potential for harm. Another limit emerges in the implementation of the GDPR, whose effectiveness has so far been constrained by agreements like the transatlantic EU-US Data Privacy Framework. Under US law, AI-enabled surveillance of non-Americans abroad remains explicitly authorized, and even Anthropic’s contract would not have prevented such activity. This highlights how weak international agreements can undermine the protections the law intends to provide. These are serious gaps, but they leave a further dimension underexplored: why has Europe been so reluctant to confront them politically?
The transatlantic bind: defense dependency
Europe’s rights-based and precautionary approach to AI governance stands in contrast to the more innovation-driven and strategically assertive model in the US. The Anthropic-Pentagon dispute sits at the intersection of several pressures from Washington that constrain Europe’s room to maneuver on military AI governance. The first is defense spending. The Trump administration has made larger European contributions to NATO a central demand, and has recently threatened to leave the alliance entirely over Europe’s reluctance to engage Iran. European governments have increased defense budgets, yet they are still seeking to maintain Washington’s goodwill. Their near silence on the Anthropic-Pentagon dispute partly reflects a reluctance to strain the alliance further.
The second pressure is for Europe to import more from the US, including AI and military technologies. An executive order signed by President Trump in July 2025 established an American AI Exports Program to promote American AI packages to allied countries, with the explicit goal of ensuring that American AI governance models and technology are adopted. If American AI systems become more deeply integrated into European defense, as the export program intends, the EU will have even less space to set AI standards. As the Electronic Frontier Foundation puts it: “the state of your privacy is being decided by contract negotiations between giant tech companies and the US government.” Europeans are more exposed than Americans, because their own political systems have not effectively weighed in on debates about the future of AI in conflict.
The EU’s blind spot
In the United States, the situation arose because the military has been treated as a separate legal domain, with distinct rules and limited transparency, so that a private company's contractual limits or red lines can be among the only safeguards. The EU has simply not yet faced the moment in which that assumption is tested. While European states have already encountered elements of AI-enabled warfare, such as in the context of the Russia-Ukraine war, that exposure has not been translated into actual policy.
Europe’s defense AI budgets have been growing fast, and the United States is promoting its AI infrastructure as a fundamental piece of that expansion. The Anthropic-Pentagon dispute has significantly damaged trust in American technology at precisely the moment when many countries were already re-examining their deep dependencies on US technology. European institutions have yet to publicly reckon with the fact that the dispute makes those dependencies even harder to justify, particularly as debates over digital sovereignty intensify and European governments explore shifting away from US-based cloud providers.
A political choice, not a technical footnote
The military exemption in the EU AI Act was a political decision, and it has not yet been subjected to serious democratic scrutiny in the era of AI warfare. The issue is becoming harder to ignore as Europe’s defense capabilities increasingly incorporate AI while the EU remains highly dependent on American systems. The dispute highlights that Europe’s framework leaves key questions unresolved: how to govern military AI, and how far to rely on US systems.
There are a few lessons the EU can take from the Anthropic-Pentagon dispute. First, the EU should review the AI Act’s military exemption and consider oversight mechanisms for its own use of AI in defense, mechanisms that account for dual-use technologies and track systems across their full lifecycle, including any transitions from military to civilian use. Second, in any transatlantic AI or data agreement, particularly as the US AI export program intensifies, the EU should insist on meaningful protections for Europeans.
The EU has built one of the most ambitious AI and data governance frameworks in the world. Yet the Anthropic case illustrates trade-offs the EU has faced before in regulating digital technology: exemptions are quickly exposed, and private companies maintain protections only as long as it is convenient. So far, Europe has effectively made the same choice, leaving the hardest safeguards to corporate discretion. As the EU reviews the AI Act, the question is whether it will once again leave the decisive power in the hands of private technology companies and the United States.