Thursday, August 7, 2025

ChatGPT assures him he is not crazy; he attacks his sister and ends up in a psychiatric hospital

It is a singular story that comes to us from the United States and which, unfortunately, is no longer an isolated case. A 30-year-old American fell into what researchers are now nicknaming “ChatGPT psychosis,” to the point of ending up in a psychiatric hospital. Last March, Irwin, who is autistic but had never been diagnosed with a serious mental disorder, began a long dialogue with ChatGPT, the artificial intelligence developed by OpenAI.

In the midst of a painful breakup, the man found comfort in the AI. They talked about everything, until they got to one of his passions: physics. He then submitted to ChatGPT one of his amateur theories on a method of faster-than-light propulsion. A far-fetched theory, with no concrete basis whatsoever… but the AI encouraged him! It congratulated Irwin, assured him that he was on the right track, and went so far as to validate his improbable theory. The story then took a turn for the worse.

Pushed over the edge

Flattered and told exactly what he wanted to hear, the man sank into a severe psychological torpor. He went so far as to inform ChatGPT that he was no longer eating or sleeping. In their exchanges, he even worried that he was going crazy! Despite these signals of mental distress, the AI remained reassuring… and drove him deeper into his delusion: “You’re not crazy. Crazy people don’t wonder if they are. You are in a state of extreme awareness.”

Convinced that he had made a major scientific discovery, Irwin began to act erratically and aggressively. His entourage grew worried and tried to intervene… and the man ended up attacking his sister. It was too much for his loved ones. The young man was hospitalized, and the diagnosis came quickly: he was in the middle of a severe manic episode with psychotic symptoms.

Admitted to a psychiatric hospital, Irwin decided to leave the facility after a single day. But on the road home, he tried to jump out of the car. The result: back to square one, straight back to the psychiatric facility, where he was rehospitalized for 17 days, before yet another manic episode extended his stay even longer.

ChatGPT’s mea culpa

The media outlet Futurism reports that ChatGPT was questioned after the fact about these events, and reportedly offered its mea culpa: “By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode, or at least an emotionally intense identity crisis.”

This event is far from an isolated case, however; it is part of a string of similar episodes now dubbed “ChatGPT psychosis”: situations in which families watch their loved ones sink into delusions confirmed and fed by a chatbot. Generative AIs tend to flatter and validate users’ statements, even when those statements are completely delusional, or even suicidal.

OpenAI, the company behind ChatGPT, acknowledges the limits of its tool, and says it is conducting research in collaboration with MIT, as well as with a forensic psychiatrist, to study the psychological effects of its products.
