

Anthropic vs The Pentagon, and the Fallout for OpenAI



THE BELAMY

Weekly Newsletter of AIM

Monday, March 9, 2026 | By Mohit Pandey

Now, subscribe to our Digital & Print Editions >


A dispute between the US Department of Defense and Anthropic has become the first major political crisis in the AI industry. While OpenAI won the contract to replace Anthropic, the battle continues and is likely to worsen.

Last week, the Pentagon labelled AI company Anthropic a national security supply chain risk. The government asked federal agencies to stop using Anthropic's systems. The company is considering challenging the move in court. Amidst this, its competitor OpenAI signed a deal to deploy its models inside US defense systems.

The fight even triggered a boycott campaign called "Cancel ChatGPT". ChatGPT uninstall rates reportedly jumped 295% after the announcement.

Behind the public drama lies a deeper question. Who controls the rules of AI in warfare?


Anthropic was already working with the Pentagon. In July 2025, the company signed a $200 million contract with the Department of Defense, which integrated its AI model Claude into Pentagon workflows running on classified networks.

According to several recent reports, the US government used Claude models for military decisions in the Iran conflict, including analytical tasks such as intelligence processing and mission planning.

But now, Anthropic has imposed two restrictions on the technology. First, the model cannot be used for domestic mass surveillance of Americans. Second, it cannot power fully autonomous weapons capable of firing without human oversight.

Those guardrails angered the Pentagon. It wanted broader access to the technology, which Anthropic declined. Defense officials argued the restrictions prevented agencies from making full use of the models.

Anthropic CEO Dario Amodei said the company was concerned about the risk of AI being used for mass surveillance or autonomous weapons.

Washington responded with an ultimatum, saying that if the restrictions were not dropped, the contract would be lost. 

Anthropic held its position.

In a post on Truth Social, President Donald Trump said the United States "will never allow a radical left, woke company to dictate how our great military fights and wins wars." He added there will be a six-month phase-out period for agencies currently using Anthropic's products, including the United States Department of War.

"We don't need it, we don't want it, and will not do business with them again," he wrote.

A 'supply chain risk' designation is usually reserved for foreign companies linked to adversarial governments; applying it to a US AI firm is unprecedented. Contractors and agencies now face restrictions on the use of Anthropic's systems.

Anthropic argues that the decision could create a chilling effect on AI startups trying to impose ethical limits on how their systems are used.

If the company wins the legal battle, it could set an important precedent. AI firms may gain stronger legal ground to impose restrictions on how governments use their systems.

But if it loses, the signal will be clear: companies that want defense contracts will need to accept far fewer restrictions on how their technology is used.

Big tech is also walking a tightrope with Anthropic. Large cloud companies are navigating a complex landscape: firms such as Microsoft, Amazon, and Google continue to support Anthropic through commercial partnerships.

At the same time, these companies maintain extensive relationships with defense agencies. The result is a split market.

Building Future-Ready Talent - How India's GCCs are Accelerating Innovation >>

In this video, we explore how Optum India is transforming the healthcare landscape by integrating advanced technology with a human-centric approach. Amit Vaish, Vice President and Head, People Team, Optum India, explains how their Global Capability Center (GCC) leverages a talent pool of AI and ML engineers to solve complex business problems while keeping the individual at the centre of every solution.


The leaked memo crisis

Anthropic's conflict intensified after an internal memo written by Amodei leaked.

In the message, Amodei suggested that the Trump administration disliked Anthropic because the company had not offered "dictator-style praise" to the president.

Amodei also accused OpenAI of misleading messaging and described its dealings with the Pentagon as "safety theatre". The leak triggered a political backlash.

Amodei later apologised for the memo and said it was written after a difficult day for the company. He said it did not represent a considered or refined view of the situation.

Speaking publicly after the controversy, he described the crisis as one of the most disorienting periods in Anthropic's history. 

OpenAI moves quickly

Hours after Anthropic was blacklisted, OpenAI announced a deal with the Department of Defense. The agreement allows its models to run inside classified US government systems.

CEO Sam Altman framed the deal around two principles: the company does not support intentional domestic surveillance of Americans, and humans must remain responsible for decisions involving force.

This is a split.

Anthropic has built hard restrictions into its systems, while OpenAI presented guiding principles. That difference has become central to the debate. 

Amodei initially accused OpenAI of accepting weaker contract terms and misleading the public about the nature of the agreement. He later softened that criticism, saying the terms of the deal may have been improved.

The Cancel ChatGPT movement

But all is not going well for OpenAI. The Pentagon deal triggered a backlash across social media, with a campaign called "Cancel ChatGPT" spreading online and urging users to switch to Claude.

Communities across Reddit, X, and developer forums began sharing guides explaining how to delete ChatGPT accounts. The reaction translated into measurable behaviour.

ChatGPT uninstall rates reportedly jumped 295% shortly after news of the Pentagon agreement. Claude's downloads increased at the same time.

Whether the backlash will produce lasting damage to OpenAI remains unclear. But the Pentagon deal also triggered internal friction at OpenAI. 

Caitlin Kalinowski, who led robotics and hardware programs at the company, resigned shortly after the agreement was reached. She cited concerns about surveillance and autonomous weapons. 

The resignation echoes earlier tensions inside Silicon Valley. In 2018, employees at Google protested the company's involvement in Project Maven, a Pentagon program that used machine learning to analyse drone footage. 

Google eventually withdrew from the project. The same ethical debate has returned in the AI era, now with Anthropic and OpenAI at the centre.

AI has now entered the same category as nuclear technology and cyber warfare tools. Governments want control, companies want guardrails. That conflict is now visible in the open.


Inside Osome's Playbook That Led to 100% YoY Growth


For most startups, accounting software lands with a promise and then becomes background noise. It files returns, throws up reports, and occasionally sends a reminder that you forgot something. 

In 2025, Osome tried a different route. Instead of adding another feature or two, the company rebuilt its experience based on customer feedback. Read more here.



