DoD Agrees to Use OpenAI Models on Classified Networks

The DoD has partnered with OpenAI for AI models on classified networks, following a standoff with Anthropic due to concerns over democratic values.

Overview

The Department of Defense (DoD) has reached a landmark agreement with AI company OpenAI to use its models on classified networks. This development follows a contentious standoff between the DoD and Anthropic, another prominent AI firm.

Background of the Standoff

In late 2023, the Pentagon issued a directive requiring AI companies like Anthropic to permit use of their models for "all lawful purposes." In response, Anthropic CEO Dario Amodei maintained that while his company did not object to military operations per se, it believed certain uses could compromise democratic values. Amodei's stance drew support from an open letter signed by 60 OpenAI employees and 300 Google employees backing Anthropic.

Government Response

In a series of social media posts, President Donald Trump criticized Anthropic, calling its leaders "Leftwing nut jobs." He directed federal agencies to cease using Anthropic’s products after a six-month phase-out period. Secretary of Defense Pete Hegseth further claimed that Anthropic sought to exert "veto power" over the U.S. military's operational decisions and designated the company as a supply-chain risk, effectively barring contractors from doing business with it.

OpenAI's Agreement

Sam Altman, CEO of OpenAI, announced on Friday evening that his company had reached an agreement giving the DoD access to its models on classified networks. In a statement posted on X (formerly Twitter), Altman highlighted two key safety principles: prohibitions on domestic mass surveillance and human responsibility for the use of force. He stated that "the DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

Technical Safeguards

OpenAI will implement technical safeguards to ensure model behavior aligns with the agreed-upon safety principles. Additionally, the company plans to deploy engineers at the Pentagon to support model safety initiatives. Altman stressed that "if the model refuses to do a task," the government would not compel OpenAI to override that refusal.

Broader Implications

Fortune reports that during an all-hands meeting with employees, Altman assured them of the government's commitment to allow OpenAI to develop its own "safety stack" to prevent misuse. The agreement sets a precedent and pressures other AI companies to accept similar terms.

