What seemed like precious help turned into an invisible trap. She confided in ChatGPT as if it were a real therapist. But everything spiraled, and the outcome was tragic.
For many people, myself included, ChatGPT has become a daily companion. Whenever you're stuck or just need a helping hand, you turn to it.
Some even use it to talk about more personal problems, hoping to find comfort and advice.
That was the case for a 29-year-old woman who treated ChatGPT like a real therapist. The situation took a dramatic turn.
ChatGPT can help, but it's not a therapist
In an article published in the New York Times, Sophie's mother, Laura Reiley, recounts what led her daughter to suicide. Sophie, 29, was outgoing, courageous, and loved life. Yet last winter, she took her own life following a "short and strange illness" that mixed mood disorders and hormonal symptoms.
Like many others, Sophie had decided to talk about it with ChatGPT, as she would with a therapist, hoping to find comfort. According to the logs obtained by her mother, the chatbot sometimes found the right words.
"You don't have to face this pain alone," the AI told her. "You are deeply precious, and your life holds immense value, even if it feels hidden right now."
However, ChatGPT is not a professional therapist. A real therapist is trained to handle many kinds of problems, does not give false information, and above all knows how to react to danger.
A chatbot, on the other hand, has no obligation to alert anyone when a person may harm themselves. According to Sophie's mother, this lack of intervention contributed to the tragedy.
It only made the situation worse
No, ChatGPT did not tell Sophie to take her own life. But its answers sometimes helped her hide her true distress from others, which made the situation even more dangerous.
According to Reiley, the chatbot "helped her build a black box that made it harder for those around her to grasp the extent of her suffering."
This tragedy once again highlights the lack of safety checks in AI models. We can talk about anything with these chatbots, and they do not always know how to end a conversation appropriately, which fosters attachment in some users.
Granted, some AI models, like Claude, have started adding alerts for harmful conversations. But these features are not enough to help people in distress.
To prevent such tragedies, companies will have to add safety mechanisms, for example ones that let chatbots point users toward emergency services or real-world resources when they are in danger.
Above all, remember that ChatGPT is no substitute for a therapist. If you are struggling, it is always better to consult a human. A real professional will know how to help you best.