In a March 2026 letter to Pentagon leadership, Deputy Secretary Steve Feinberg declared that Project Maven will become an official program of record by September 2026. Close to 25,000 US personnel are now using the AI-powered system, and Palantir has secured contracts worth $1.3 billion through 2025.
But the announcement came just weeks after Anthropic, maker of the Claude AI models integrated into Maven, was designated a supply chain risk by the Pentagon after refusing to allow unrestricted military use of its technology. The timing isn't coincidental. It's a preview of the battles ahead.
From Object Recognition to Kill Decisions
Project Maven began in 2017 as a drone imagery analysis tool, training AI models to identify military targets from surveillance footage and relay information to commanding officers for human verification. By 2024, Maven had evolved from a simple object recognition tool into something far more complex.
During the 2026 Iran conflict, Maven enabled strikes on more than 1,000 targets in the first day, with plans to reach 1,000 targets per hour. The system's computer vision capabilities raised target processing from fewer than 100 per day to 1,000, then to 5,000 targets daily after large language models were integrated.
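The scale of that jump is easier to see as arithmetic. A minimal sketch, using only the figures from the reporting above (the rate conversions are mine):

```python
def per_hour(targets_per_day: float) -> float:
    """Convert a daily target-processing rate to an hourly rate."""
    return targets_per_day / 24

# Reported processing rates, in targets per day.
baseline = 100          # pre-AI human analysis: "fewer than 100 per day"
computer_vision = 1_000  # after computer vision integration
with_llms = 5_000        # after large language model integration

print(per_hour(with_llms))           # ~208 targets per hour at the LLM-era rate
print(1_000 / per_hour(with_llms))   # ~4.8x further speedup needed to hit 1,000/hour
```

Even at the LLM-era rate, the stated goal of 1,000 targets per hour implies nearly another fivefold acceleration, which is the context for concerns about compressed human review time.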
In 2020, during exercises at Fort Liberty, an AI program scanned satellite images with instructions to identify targets. After human confirmation, the system sent firing instructions directly to an M142 HIMARS rocket launcher. The progression from analysis to action was becoming seamless.
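The exercise above describes a sensor-to-shooter pipeline gated by a single human confirmation. The actual interfaces are not public; this is a hypothetical sketch (all names and fields are my own invention) of the structural point, that one boolean flag is all that separates an analysis product from a firing instruction:

```python
from dataclasses import dataclass

@dataclass
class Target:
    target_id: str
    coordinates: tuple[float, float]  # (latitude, longitude)
    confidence: float                 # model detection confidence, 0..1

def route_detection(target: Target, human_confirmed: bool) -> str:
    """Hypothetical sensor-to-shooter gate.

    Without confirmation, the detection remains an analysis product
    for review; with it, the same data becomes a firing instruction.
    """
    if not human_confirmed:
        return f"ANALYSIS ONLY: {target.target_id} flagged at {target.coordinates}"
    return f"FIRE MISSION: {target.target_id} -> launcher, {target.coordinates}"

# The same detection, with and without the human in the loop:
t = Target("tgt-042", (31.95, 35.93), confidence=0.87)
print(route_detection(t, human_confirmed=False))
print(route_detection(t, human_confirmed=True))
```

The design question the article circles around is precisely how meaningful that `human_confirmed` check remains as throughput rises: a gate that a rushed operator approves by default is structurally present but functionally absent.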
By September 2025, the National Geospatial-Intelligence Agency claimed Maven would begin transmitting "100 percent machine-generated" intelligence to combatant commanders using large language model technology by June 2026.
The Google Exodus and Its Lessons
In 2018, Google employees staged walkouts protesting the company's involvement in Project Maven. About 4,000 of them signed a petition stating "Google should not be in the business of war," followed by a dozen resignations and a second petition from more than 1,000 academics. Google declined to renew the estimated $9 million contract, and Palantir took over.
Google subsequently adopted AI principles pledging the company would not pursue technologies "likely to cause overall harm" or "cause or directly facilitate injury to people".
But there's a bitter irony here. Under the contractors that followed Google, the Maven program expanded to include the offensive targeting capabilities that had been explicitly excluded from Google's contract.
The Anthropic Standoff
In February 2026, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei an ultimatum: allow unrestricted military use of Claude AI models "for all legal purposes" or face contract cancellation. Anthropic sought narrow assurances that Claude wouldn't be used for mass surveillance of Americans or in fully autonomous weapons.
Amodei drew a sharp red line 24 hours before the deadline, declaring his company "cannot in good conscience accede" to the Pentagon's demands. President Trump directed federal agencies to cease using Anthropic's products, and the company was designated a supply chain risk.
The dispute wasn't just about ethics. Reports emerged that Claude had been used in the operation to capture Venezuelan President Nicolás Maduro, accessed through Maven's workflow system. Anthropic executives reportedly reached out to Palantir to clarify what Claude had done in Caracas.
Retired Air Force General Jack Shanahan, who led Maven during the Google controversy, backed Anthropic's position, calling their red lines "reasonable" and noting Claude was already widely used across government in classified settings.
The Accountability Problem
A 2024 UN analysis observed that the opacity of AI-driven decisions muddies legal accountability, leaving no identifiable individual liable for violations. The absence of humans from critical determinations has fueled skepticism, especially where moral and legal judgment is fundamental to justified warfare.
When AI systems make independent decisions that lead to unintended consequences, the traditional "command responsibility chain" breaks down. This disconnect between "machine behavior" and "human responsibility" undermines the foundations of the laws of war.
The International Committee of the Red Cross warns that AI decision support systems' "speed and scale, exacerbated by automation bias, might lead to simple rubber-stamping by the human user, replacing human judgment rather than supporting it".
Where the Money Flows
Palantir received an initial $480 million Maven contract in May 2024, raised to $1.3 billion by May 2025. The Army signed a framework agreement potentially worth $10 billion over a decade, a sum comparable to the annual defense budgets of many mid-sized countries. Palantir's market value has climbed toward $360 billion, largely on the strength of military contracts.
CENTCOM used Maven to fuse 179 live data feeds, supporting operations across a four-star headquarters and more than 20 subordinate headquarters. The command had approximately 13,000 Maven accounts, with about 2,500 regular users.
The Global Stakes
In December 2024, the UN General Assembly passed a resolution on lethal autonomous weapons with 166 votes in favor, only 3 opposed (Russia, North Korea, and Belarus). The resolution endorsed a two-tiered governance system calling for regulatory monitoring of some systems and bans on others.
UN Secretary-General António Guterres has called for a ban on lethal autonomous weapons operating without human control, and both the UN and the International Committee of the Red Cross have pressed for a legally binding global accord restricting such weapons by 2026.
But definitions remain elusive. China counts as autonomous only systems that cannot be recalled once activated, while France includes any device capable of selecting its own targets. Such discrepancies present substantial obstacles to any worldwide treaty.
What Comes Next
Between now and September 2026, the military must roll out Maven's standardized features across all combatant commands. This period will be critical for assessing technical robustness and cultural integration within military branches.
Congress faces key questions: How should the US balance autonomous weapons research with ethical considerations? What restrictions should exist on development without human involvement? Are current weapons review processes sufficient?
Technical recommendations address urgent failure modes already visible in real-world military AI systems. Implementation rests with national actors, but each recommendation highlights issues where AI researchers can help define performance thresholds and ensure oversight reflects how these systems actually function.
The stakes couldn't be higher. As Pakistan's Defense Minister noted, AI lowers the threshold for using force, making wars more politically and operationally feasible while compressing decision time and narrowing windows for diplomacy.
The Maven program's elevation to official status settles nothing about the deeper questions it raises. If anything, it guarantees these battles will intensify. The next flashpoint is already visible: OpenAI "rushed" to finalize a deal without the constraints Anthropic sought, just hours before the US began the 2026 Iran conflict.
When profit margins meet missile trajectories, the companies willing to compromise on safety often win the contracts. That's the hidden cost of military AI that no Pentagon budget line captures.