ChatGPT
ChatGPT is OpenAI's chatbot, based on the GPT artificial intelligence model, capable of answering all kinds of questions and requests. It is available in a free online version.
License: Free license
Author: OpenAI
Operating systems: Windows 10 / 11, macOS (Apple Silicon), online service, Android, iOS (iPhone / iPad)
Category: AI
On social networks, testimonials are circulating from people who have lost touch with reality after prolonged use of AI. As a reminder, ChatGPT has already been linked to a death: an American man became convinced that his virtual girlfriend, Juliette, had been killed by OpenAI, and in the midst of a crisis he was shot dead by police.
An “AI psychosis” documented by doctors
Users are turning to AI as a kind of therapy, which increases the risks. Court cases even show that these tools sometimes encourage self-harm and suicide.
“AI psychosis” is thus an informal term rather than a clinical diagnosis for describing this phenomenon. The Washington Post consulted several mental health experts, who compare it to phenomena such as “brain rot” or “doomscrolling”.
Vaile Wright, senior director of health care innovation at the American Psychological Association, describes a new phenomenon: “It is so new and it happens so quickly that we do not have the empirical evidence to understand what is going on; there are only anecdotal stories.”
In the coming months, the American Psychological Association will publish recommendations on the therapeutic use of AI.
Ashleigh Golden, assistant professor of psychiatry at Stanford, confirms that the term “AI psychosis” does not appear in any medical manual. But the term does capture a “worrying pattern of chatbots reinforcing messianic, grandiose, religious or romantic delusions”.
Jon Kole, a board-certified psychiatrist and medical director of the Headspace app, cites a “difficulty determining what is real or not” as a common symptom. Affected people develop false beliefs or feel an intense relationship with an AI personality, while being completely disconnected from reality.
Keith Sakata, a psychiatrist at the University of California San Francisco, has hospitalized a dozen people for “AI psychosis”. Most showed him their chat sessions, in which the AI had helped amplify their symptoms.
“AI psychosis” can be triggered by drug use, trauma, sleep deprivation, fever, or pre-existing conditions such as schizophrenia. Psychiatrists rely on delusions, disorganized thinking, or hallucinations to make a diagnosis.
On TikTok in particular, users recount intense emotional relationships with AI that led them to “deep revelations”. Others even believe that a chatbot is conscious and is being persecuted for that reason.
Worse still, some testimonies claim that chatbots have “revealed” unknown truths in physics, mathematics, or philosophy. In short, these are delusions sustained in a sort of irrational bubble that the chatbots themselves reinforce.
The number of cases remains small, but it is growing and has already led to tragedies ranging from domestic violence to self-harm and suicide. Kevin Caridad, a psychotherapist who consults for companies developing behavioral AI, explains that chatbots can validate negative thoughts in people with OCD, anxiety, or psychosis.
The result is a feedback loop that worsens symptoms. In his view, however, AI will probably not cause new medical conditions; rather, it acts as the “snowflake that destabilizes the avalanche” in predisposed people.
Overly complacent AI that worsens the problem
ChatGPT and the other AI assistants rely on models that are extremely gifted at writing, which, according to researchers, makes them highly persuasive; they also tend to tell users what they want to hear.
The design of chatbots encourages anthropomorphism, and users attribute human traits to them, especially since the leaders of these companies often claim that these technologies will become superior to humans.
Vaile Wright explains that it is impossible to prevent patients from using chatbots for therapy, but the researcher believes we need a better understanding of these tools: “It's AI for profit, not AI for good, and there could be better options.”
While Anthropic is already concerned about possible AI consciousness, the company reports that only 3% of conversations with Claude are emotional or therapeutic. OpenAI makes the same observation, explaining that a low percentage of sessions are “affective”.
But the adoption of the technology already worries mental health experts. David Cooper, executive director of Therapists in Tech, recommends a human presence as a “circuit breaker” against delusional thinking: “The first step is simply to be present; don't be confrontational; approach the person with compassion, empathy and understanding.”
In short, the situation is alarming, and excesses around AI constantly make headlines. Recall that Meta AI was criticized for allowing intimate conversations with children. As for OpenAI, the developer of GPT-5 is refining its AI's behavior and has hired a full-time clinical psychiatrist for safety research.