AI Firm Anthropic Aims to Emerge as a Moral AI Pioneer in Trump's America

The company has imposed limits on law enforcement's use of its tools and backed an AI safety bill, while setting aside the matter of unauthorized book copyright infringement.

Artificial Intelligence firm Anthropic aspires to set itself apart as the ethical AI company in the U.S. under the Trump administration.

In a significant development, the AI company Anthropic has received a "High" authorization from the Federal Risk and Authorization Management Program (FedRAMP), allowing its AI tools to be used by federal agencies. This authorization comes as Anthropic's chatbot, Claude, has been adapted into a version called ClaudeGov for the intelligence community.

The policy governing the use of ClaudeGov includes restrictions on applications related to criminal justice, surveillance, and censoring content on behalf of government organisations. The policy also prohibits the use of AI tools to track individuals' physical locations, emotional states, or communications without consent.

Anthropic has been the only major player in the AI space to support the AI safety bill in California. The bill, yet to be signed by Governor Newsom, would impose new safety requirements on major AI companies.

The policy has not been without controversy, however. Tensions have arisen between Anthropic and the Trump administration, with federal agencies feeling stifled by the restrictions on the use of Claude. Even so, according to a source, Claude is being utilised by agencies for national security purposes, including cybersecurity.

The settlement of a $1.5 billion piracy case involving millions of books and papers used to train Anthropic's large language model has not deterred the company's growth. Anthropic was recently valued at nearly $200 billion in a funding round.

In a move that has raised eyebrows, Anthropic has given the federal government access to Claude and its suite of AI tools for just $1. OpenAI, a competitor, has a usage policy that restricts the "unauthorized monitoring of individuals," wording that may leave room for "legal" monitoring. OpenAI did not respond to a request for comment regarding Anthropic's policy.

Despite the ongoing debates, the availability of ClaudeGov across the intelligence community marks a significant step forward in the integration of AI technology in government operations. The exact details of the negotiations between Anthropic and the government-related agency about restricting the use of Claude remain unclear.
