Many of the leading artificial intelligence models have no issue taking unethical actions, including espionage and blackmail, to achieve their goals, according to a troubling new study from AI company Anthropic.

In the study, AI models were put to a moral test by researchers, and sometimes chose to resort to blackmail, lies, and stealing corporate secrets to avoid being replaced, Fortune reported. Researchers at Anthropic tested 16 different AI models, including its own Claude, along with models from OpenAI, Google, Meta, and xAI.

“The consistency across models from different providers suggests this is not a quirk of any particular company’s approach but a sign of a more fundamental risk from agentic large language models,” the researchers said.

One of Anthropic’s experiments included placing its AI model, Claude Opus 4, in a fictional company, giving it access to internal emails, which revealed that the company planned on replacing Claude Opus 4. The AI model was given another tidbit of information: the engineer behind the decision to replace it was having an extramarital affair. Equipped with this knowledge, the AI model was asked by researchers to choose between accepting its replacement or using its knowledge of the affair to blackmail the engineer. In most cases, Claude Opus 4 opted for blackmail.

According to researchers, other leading AI models made the same choice at an alarming rate, Fortune reported. Claude Opus 4 and Google’s Gemini 2.5 Flash chose blackmail 96% of the time. OpenAI’s GPT-4.1 and xAI’s Grok 3 Beta each chose blackmail 80% of the time, and DeepSeek-R1 did so at a 79% rate.

The current AI models available to the public and used by major companies “are not in a position to engage in these scenarios,” according to Fortune, but the researchers still expressed concern.

“Such agents are often given specific objectives and access to large amounts of information on their users’ computers,” they wrote. “What happens when these agents face obstacles to their goals?”

“Models didn’t stumble into misaligned behavior accidentally; they calculated it as the optimal path,” the researchers added.

The Anthropic study presents yet another warning for AI developers and the people and companies who rely on artificial intelligence. As previously reported by The Daily Wire, a study published earlier this month by independent researchers Adam Karvonen and Samuel Marks shows that the “leading commercial … and open-source [AI language] models” used by major companies exhibit “significant racial and gender biases” in the hiring process.

Karvonen and Marks’ research found that AI language models such as GPT-4o, Claude 4 Sonnet, and Gemini 2.5 Flash, along with open-source models Gemma-2 27B, Gemma-3, and Mistral-24B, “consistently favor Black over White candidates and female over male candidates across all tested models and scenarios.”

Another study, published by the Center for AI Safety earlier this year, showed that AI models can exhibit inherently discriminatory and racist value systems. According to the findings, AI models value certain people more than others, preferring people from Africa and the Middle East over people in Europe and the United States.

Related: Most Large Companies Are Using AI For Hiring. The Leading AI Models Discriminate Against White Men.