OpenAI has revised its agreement with the U.S. Department of Defense to explicitly prohibit the use of its artificial intelligence technologies for domestic surveillance of American citizens. The move comes after widespread criticism of the original deal, which appeared to grant the Pentagon broad access to OpenAI’s AI systems for any lawful purpose.
Initial Controversy and Trump Administration’s Role
The initial partnership, announced Friday, coincided with President Trump’s directive to federal agencies to halt their use of AI developed by OpenAI competitor Anthropic. This timing raised questions about political influence over AI procurement decisions. Under the first iteration of the deal, OpenAI maintained the right to impose “technical guardrails” on its technology to ensure compliance with its safety principles, but the contract’s open-ended nature sparked fears about potential misuse.
Revised Contract Details
The amended agreement now includes clear restrictions against deliberate surveillance of U.S. persons or nationals, as well as the acquisition or use of personal data for tracking or monitoring purposes. OpenAI asserts this aligns with existing federal laws governing privacy and civil liberties. The company stressed its commitment to upholding its stated safety standards while still collaborating with the defense sector.
Pentagon’s Response and Anthropic’s Stance
The Defense Department released a statement suggesting it was receptive to negotiation, unlike Anthropic, which it accused of prioritizing personal disputes over cooperation. The Pentagon's willingness to discuss terms contrasts with Anthropic's refusal to engage in similar talks.
The updated contract is a direct response to public backlash over the original agreement, and its implications are significant: it signals growing pressure on AI developers to balance national security interests with civil liberties concerns. This case highlights the delicate balance between military applications of AI and the need to safeguard individual privacy, and it raises questions about how similar deals will be structured in the future.
Ultimately, OpenAI’s decision to amend its deal demonstrates that even in high-stakes partnerships with government entities, public scrutiny can force companies to prioritize ethical considerations.