WASHINGTON, March 3, (V7N) – OpenAI announced on Friday that it would amend its contract with the Pentagon to ensure its artificial intelligence (AI) models would not be used for "domestic surveillance" of U.S. citizens. This decision follows criticism over the lack of oversight and concerns about the potential misuse of AI by military and intelligence agencies.
In a statement posted on X, OpenAI CEO Sam Altman emphasized that the company's AI system would adhere to applicable laws, including the Fourth Amendment, which protects citizens against unreasonable searches and seizures. "The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," Altman wrote.
The Deal and the Backlash

OpenAI secured the contract with the U.S. Department of Defense late Friday, shortly after its competitor, Anthropic, refused to sign an agreement with the military that would have allowed unconditional use of its Claude AI models. Anthropic raised concerns about the potential use of AI for surveillance and autonomous weapons, prompting backlash from U.S. officials who questioned the company's reluctance.
Altman’s statement clarified that OpenAI’s models would come with "technical safeguards" to prevent misuse in surveillance and autonomous weaponry. However, due to growing skepticism regarding the efficacy of these safeguards, Altman announced on Monday that OpenAI would amend the contract to include additional protections.
Revised Agreement and Safeguards

Altman stated that, in addition to barring domestic surveillance, the new deal would ensure that OpenAI's AI models would not be used by U.S. intelligence agencies, including the National Security Agency (NSA). The CEO acknowledged that the company had acted hastily in securing the deal and admitted the complexity of the issues involved.
"We shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication," Altman said. "There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety."
Anthropic's Legal Challenge

In a related development, Anthropic has vowed to challenge the U.S. government's decision to designate it as a "supply chain risk" following its refusal to cooperate with the military. The Pentagon's designation effectively bars companies that work with the U.S. military from engaging with Anthropic.
Anthropic has expressed concerns that the punishment sets a dangerous precedent for American companies negotiating with the government, warning that it could undermine the principles of corporate autonomy and ethical responsibility in tech.
As the debate over AI's role in national security continues to evolve, both OpenAI and Anthropic are navigating complex legal and ethical issues about the responsible use of emerging technologies.
END/WD/RH/