
Anthropic-Pentagon clash raises key question: who is to blame if AI kills?


What if AI kills a civilian?

That’s no longer a speculative headline — it’s a question the Pentagon quietly brushed past in its new doctrine.

The US Department of Defense’s January 9, 2026, AI strategy contains a single line that may outlast the memo itself:

“We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment.”

It’s one of the clearest official acknowledgements yet that, inside the US defence establishment, the priority has shifted.

The goal is no longer to slow AI until accountability catches up, but to move faster — and deal with the consequences later.

That logic leapt off the page and into reality within weeks, when Anthropic clashed with the Pentagon over military-use restrictions.

The company was soon blacklisted after refusing to adopt broader “any lawful use” terms.

The memo that triggered the debate is more than bureaucratic jargon.

It orders the Department to become an “AI-first” warfighting force, to deploy frontier models into military systems within 30 days of public release, and to weave AI “from campaign planning to kill chain execution.” 

The document even instructs agencies to use models “free from usage policy constraints” and to hardwire “any lawful use” language into all AI service contracts within six months.

This isn’t a distant debate about future ethics. It’s a procurement order, a policy shift, and a deployment race underway.

And it leaves one haunting question hanging in the silence:

When an AI-enabled system produces an unlawful strike — who is accountable?

The law still assumes human intent, command authority, and judgment. Yet modern military AI is built to shrink timelines, blur decision chains, and multiply actors until no one clearly owns the outcome.

That’s where the accountability problem doesn’t just begin — it accelerates.

The rules are clear, accountability isn’t

The first mistake in this debate is to assume autonomous weapons sit outside the law.

They do not.

International humanitarian law still applies. States can still be held responsible for internationally wrongful acts. Individuals can still, in principle, face prosecution for war crimes.

That much is not especially controversial.

As Dr. Vincent Boulanin, Director of the Governance of AI Programme at the Stockholm International Peace Research Institute (SIPRI), put it while speaking with Invezz:

“States have agreed in the context of their diplomatic talks at the UN … that humans must retain responsibility for the development and use of autonomous weapon systems because machines cannot be held accountable for violations of international humanitarian law.”

That is the formal position. The difficulty starts when theory meets operations.

Boulanin’s formulation is useful because it is more precise than the popular phrase “accountability gap.”

He does not argue that the law disappears when AI enters the chain. He argues that the mechanisms for tracing, scrutinising, and attributing violations become much harder to use in practice.

His point is not that state responsibility and individual criminal responsibility are irrelevant.

It is that both become difficult to operationalise when the relevant conduct is distributed across programmers, commanders, acquisition officials, operators, intelligence analysts, and commercial vendors.

That is also why the issue is bigger than “killer robots.”

In practice, the pressure point is not only fully autonomous weapons, but AI decision-support systems that shape targeting, recommend objects of attack, rank threats, compress intelligence review, and present conclusions to humans under severe time pressure.

Once the machine narrows the field and the human is reduced to a fast confirmation step, the formal presence of a person in the loop does not necessarily mean meaningful human control still exists.

Michael N. Schmitt, one of the best-known scholars in the law of armed conflict, captures that distinction well.

The problem, as he has argued, is not that the law of armed conflict stops applying to autonomous systems.

It becomes far harder in practice to determine who made which decisions, on what information, and with what level of intent.

That is the difference between law on paper and accountability in the real world.

The Pentagon memo says “speed wins.” It orders the Department to “weaponise learning speed,” measure cycle time as a decisive variable, and treat the risks of delay as greater than the risks of imperfect alignment.

Those are not neutral management choices.

They change how much time humans have to understand, question, and override machine-generated outputs.

When a human is present, but no longer deciding

The strongest way to understand the accountability problem is through battlefield practice rather than legal abstraction.

The clearest public case remains Israel’s use of the Lavender system in Gaza, which multiple reports said was used to identify large numbers of potential targets.

Reporting by The Guardian, citing Israeli intelligence sources, said Lavender at one stage identified up to 37,000 Palestinian men allegedly linked to Hamas or Palestinian Islamic Jihad.

The same reporting said the military used pre-approved civilian casualty thresholds for some categories of strikes.

That case matters not only because of the scale, but because it shows what happens when AI-assisted targeting becomes routinised.

The machine does not need to fire the weapon itself to reshape responsibility. It only needs to structure the decision.

Once an officer is reviewing machine-produced outputs inside an accelerated workflow, the legal image of a commander calmly weighing proportionality and distinction begins to look less like reality and more like a procedural fiction.

Richard Moyes, managing director at Article 36, gets to the heart of this better than most policymakers do.

“If we do not know how an autonomous decision was made, or where the information computers present to commanders has come from, or how recent it is, then human decision-making stops being meaningful,” he told Invezz.

“International law in conflict is based upon human decisions, human moral engagement and accountability for those choices.”

That line matters because it moves the debate away from slogans. The real issue is not whether a human being technically touched the process somewhere.

The issue is whether the human still exercised judgment in a way the law can recognise.

If the data provenance is unclear, the system logic opaque, the timeline compressed, and the institutional expectation tilted toward speed, then the person at the end of the chain may be acting less like a decision-maker and more like a legal shock absorber.

The Pentagon memo points directly toward that world.

Its “Agent Network” project calls for “AI-enabled battle management and decision support, from campaign planning to kill chain execution.”

Another initiative, “Open Arsenal,” aims to accelerate the “TechINT-to-capability development pipeline,” explicitly “turning intel into weapons in hours, not years.”

Those phrases are unusually candid. They show the Department is not experimenting at the margins. It is trying to compress the full path from information to action.

The black box problem is not just technical

Boulanin identifies four reasons accountability becomes especially difficult in the autonomous weapon systems (AWS) context.

First, the law itself remains unsettled on how some international humanitarian law (IHL) rules should be interpreted and applied to autonomous systems.

Second, AI unpredictability compounds existing disputes about state and individual responsibility.

Third, the development and use of these systems involves a large number of actors, making responsibility hard to distribute or attribute.

Fourth, the “black box” nature of AI complicates efforts to investigate specific incidents and trace conduct back to particular agents.

That last point is often misunderstood. The black box problem is not only about engineers failing to explain model outputs. It is also institutional.

Even if some technical logging exists, investigators still need access, chain-of-custody integrity, and a legal framework capable of translating logs into responsibility.

Boulanin notes that digital logs and auditing mechanisms could, in theory, help trace conduct back to one or more actors.

But he also warns that the practical implications are not well understood.

That caveat is crucial. The existence of data is not the same as accountability.

A digital trail only matters if courts, investigators, and military institutions are willing and able to use it. So far, there is little evidence that they are.

No major legal system has yet produced a settled, high-profile precedent showing how AI-assisted battlefield decisions would be reconstructed in court across the full chain of design, procurement, deployment, and use.

Diplomacy is stalling as deployment speeds up

The accountability problem would still be serious if states were racing to build a stronger international regime around it. They are not.

Reuters reported this month that 128 states are discussing whether they can reach a consensus on a non-binding text on lethal autonomous weapons systems before the current mandate ends in September.

The chair of the Geneva talks said progress on rules is “urgently needed,” a phrase that reflects how late this process already is.

That timeline matters because the military and diplomatic tracks are moving in opposite directions. In Geneva, states are still debating baseline rules.

In Washington, the Pentagon is already accelerating battlefield AI adoption, model deployment, and contract redesign.

The US memo makes no meaningful attempt to pause for a clearer global framework. Instead, it treats speed itself as a strategic advantage.

Moyes is blunt about the political blockage.

“International law needs to be updated to ensure a baseline of human judgment, control and accountability in the use of autonomous weapons and AI targeting systems,” he told Invezz.

“Some of the same states that are using these systems are blocking the adoption of new legal rules – and it is civilian populations that will pay the price.”

That observation deserves more attention than it gets.

The states with the greatest capability and strongest operational incentives to preserve flexibility are also the states best positioned to slow or dilute new rules.

Consensus-heavy diplomatic formats make that easier.

So the gap does not persist because nobody sees it. It persists because the actors most capable of closing it often benefit from leaving it open.

The Pentagon memo institutionalised the void

It is important not to overstate what the January 9 memo does. It does not repeal the law of armed conflict.

It does not formally abolish human responsibility. It does not, by itself, authorise unlawful strikes.

But it does something arguably more consequential.

It institutionalises a doctrine under which speed, scale, model freshness, and the removal of vendor-imposed use constraints become official procurement priorities.

The memo’s language is revealing all the way through. It calls for experimentation with America’s leading AI models “at all classification levels.”

It says denials of CDAO data requests must be justified within seven days and can be escalated to senior leadership.

It creates a “Barrier Removal Board” with authority to waive non-statutory requirements. It says the Department must “approach risk tradeoffs … as if we were at war.”

None of that proves illegality. But it does show an institution trying to strip friction out of the system.

And friction, in this context, is often where accountability lives.

Slow review is friction. Documentation is friction. Legal hesitation is friction. Model restrictions are friction. Human doubt is friction.

Once the institutional mission becomes the removal of blockers, those safeguards begin to look, from inside the system, like inefficiencies rather than protections.

There is another revealing detail in the memo.

It says that “special initiatives outlined in classified annexes” and in “the Classified Annex provided by separate cover” will also be accelerated.

So even the public version of the strategy points toward a larger classified architecture that remains outside public scrutiny.

That does not mean the hidden material is necessarily unlawful.

It does mean the public is being asked to trust a system whose accountability mechanisms are already strained, while some of its most consequential details remain secret.

The real question is smaller, but more damning

The most persuasive version of this narrative is not that autonomous weapons have created a complete legal vacuum. They have not.

It is that they are helping produce a world in which legal responsibility remains available in theory but less reachable in fact.

Because if Boulanin is right, the legal routes are there, but hard to use.

If Moyes is right, human judgment ceases to be meaningful when the machine’s reasoning is opaque and the data foundation uncertain.

And if Schmitt is right, the central difficulty is practical enforceability: identifying who decided what, on what basis, and with what intent.

Put those three arguments together, and the Pentagon memo starts to read less like a technology strategy and more like a governance document for the erosion of accountability.

It does not announce that erosion openly. It normalises the trade-offs that make erosion likely.

Someone will bear the cost of moving fast before accountability is solved. The memo makes clear that the Department is prepared to accept that risk.
