Tuesday, August 19, 2025

Claude gets a new feature for dealing with abusive users (and it could change everything)

Anthropic continues to develop Claude, its artificial intelligence. After adding an on-demand memory feature, the company is now turning to another aspect, the “well-being of models”, by allowing Claude to end “distressing” conversations.

Claude, Anthropic’s AI
Credits: Anthropic

Companies that develop artificial intelligence models equip their chatbots with safeguards and content filters to keep them from going off the rails, as has happened several times with Grok, Elon Musk’s AI. Concretely, these AIs are programmed to refuse certain requests, such as instructions for creating computer viruses, weapons or drugs, and to dodge (or even censor) subjects deemed sensitive (sexuality, violence, politics, etc.). The objective? To protect both users and the company.

In more extreme cases, the models even have a cut-off mechanism that lets them end a conversation outright. And it is precisely this capability that Anthropic has just added to its AI, Claude. This time, though, it serves a very different goal.

Claude can now interrupt a conversation to protect itself

Anthropic keeps enriching Claude with new options, such as on-demand memory, which lets users establish a more controlled relationship with the AI. The company has now announced another feature: the ability for its most recent models, Claude Opus 4 and 4.1, to end a conversation with a user in “rare, extreme cases of persistently harmful or abusive interactions.”

According to Anthropic, this feature will only be used as a last resort, in “extreme edge cases,” after several failed attempts at redirection, or if the user explicitly asks Claude to end the chat. Claude is also instructed not to use it when there is a risk of imminent harm, whether to the user or to others. Concretely, the user will no longer be able to send messages in a conversation the chatbot has judged dangerous (sexual content involving minors, plans for large-scale violence, etc.). But according to our colleagues at Engadget, this has no consequences for other exchanges: the user can immediately start a new discussion, or even go back into the history of the problematic conversation and create new branches by editing their previous messages.

But this new feature was not created to protect users, or at least not directly, but the AI itself. It is part of Anthropic’s new research program studying “the well-being of models.” For now it remains an experiment, and users are invited to share their feedback if Claude ends one of their conversations.
