
Anthropic, the AI company behind the chatbot Claude, is at the center of a growing controversy with the Trump administration over its usage policies, which prohibit the use of its AI technology for surveillance and have left federal agencies frustrated.
Recent reporting indicates that Anthropic is the only major AI firm backing a stringent AI safety bill in California while simultaneously restricting law enforcement applications of its technology, including a ban on using Claude for “Criminal Justice, Censorship, Surveillance, or Prohibited Law Enforcement Purposes.”
According to Semafor, the FBI, the Secret Service, and Immigration and Customs Enforcement have expressed dissatisfaction with these limitations, arguing that they hinder operational capabilities, particularly in domestic surveillance, an especially contentious area in the current political climate.
Anthropic’s policy explicitly bars its AI from making determinations in criminal justice applications or from tracking individuals without their consent. In contrast, competitors such as OpenAI maintain less stringent restrictions that permit some legally authorized monitoring, a gap that has sharpened questions about the ethical role of AI in law enforcement.
A representative from Anthropic emphasized that its AI tools, including Claude Gov, have been authorized for sensitive government workloads under the Federal Risk and Authorization Management Program (FedRAMP). Yet even while offering federal agencies access to its technology for just $1, the company remains firm on its policy, saying it will not be complicit in potential privacy violations.
“Anthropic’s policy makes a moral judgment about how law enforcement agencies do their work,” stated an administration official, highlighting the tensions between ethical considerations and legal practices.
The clash comes as Anthropic actively positions itself around responsible AI. Earlier this month, the company endorsed the California AI safety bill noted above, which would require large AI developers to meet stringent safety and transparency requirements. The bill awaits the signature of Governor Gavin Newsom, who vetoed a similar measure last year.
Meanwhile, Anthropic recently agreed to a $1.5 billion settlement over copyright claims tied to its training data, which included millions of books and papers. The settlement will compensate authors whose works were used without permission in the development of its AI models.
Despite these challenges, Anthropic’s valuation climbed to roughly $183 billion in a recent funding round, prompting questions about whether its ethical stance can hold up alongside such significant financial success.
As the debate continues, all eyes are on whether Anthropic can maintain its position as the AI industry’s “good guy” while navigating pressure from the federal government and law enforcement agencies. This is a developing story, with broader implications for how AI technology is governed in society.