OpenAI says Pentagon set ‘scary precedent’ binning Anthropic • The Register


OpenAI has signed a deal with the United States Department of War (DoW) that allows use of its advanced AI systems in classified environments, and urged the Pentagon to make the same terms available to its rivals.

The AI upstart revealed the deal in a Saturday post that said it includes the following three “red lines”:

  • No use of OpenAI technology for mass domestic surveillance.
  • No use of OpenAI technology to direct autonomous weapons systems.
  • No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

The post says OpenAI’s agreement allows it to “protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.”

The post also quotes the agreement’s language on how the Pentagon can use OpenAI’s wares in autonomous weapons.

The post says one reason OpenAI agreed to its Pentagon deal was its desire to “de-escalate things between DoW and the US AI labs.”

That’s almost certainly a reference to the dispute between the Department and Anthropic, after the vendor argued it cannot agree to the Pentagon’s terms because doing so would mean removing guardrails that could see US troops and civilians harmed by autonomous weapons.

President Trump ordered the vendor’s banishment from military systems and Secretary of War Pete Hegseth said he will direct the department he leads to designate Anthropic a Supply-Chain Risk to National Security.

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” he wrote. The US government has never previously used that designation for a domestic firm.

Hegseth justified the decision by declaring that “the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives,” and by criticizing ideological positions adopted by Anthropic execs.

OpenAI disagrees with Hegseth’s decision: in a Q&A section of its post, the company answers its own question about whether Anthropic should be designated a supply chain risk with “No, and we have made our position on this clear to the government.”

The company also asked the Pentagon to give all AI companies the same contractual terms it negotiated, and to “try to resolve things with Anthropic; the current state is a very bad way to kick off this next phase of collaboration between the government and AI labs.”

On X, New York Times columnist Ross Douthat asked OpenAI CEO Sam Altman “Does the precedent that the DoW is setting by effectively blacklisting Anthropic make you concerned about what any future dispute with the Pentagon would mean for your own company’s independence and viability?”

Altman replied: “Yes; I think it is an extremely scary precedent and I wish they handled it a different way.”

“I don’t think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution.”

Altman later Xeeted that his company signed its Pentagon deal “in the hopes of de-escalation” because “Enforcing the SCR designation on Anthropic would be very bad for our industry and our country.”

Anthropic appears to have been silent on the matter over the weekend, other than vowing to appeal its designation as a supply chain risk in court. The Trump administration has been busy attacking Iran – reportedly with help from Anthropic’s technology. ®


