“ChatGPT psychosis” refers to heavy users of artificial intelligence (AI) sinking into a loss of contact with reality. Jean-Paul Santoro, a clinical psychologist and co-founder of Psycheclic.com, a site specializing in the psychology of digital uses, unpacks the phenomenon and describes the warning signs.
How do you explain “ChatGPT psychosis”, the deterioration of some individuals’ mental health caused by the chatbot?
The term “ChatGPT psychosis” is, of course, not scientifically validated. It is used because the phenomenon it describes resembles certain aspects of psychosis, namely confusion between reality and the virtual. LLMs (large language models), the AIs that generate text, such as ChatGPT, could in some rare cases trigger a psychic decompensation. Concretely, this happens when a person’s psychological balance breaks down.
Read also:
“ChatGPT psychosis”: a man sinks into madness after repeated exchanges with an artificial intelligence and ends up committed
What leads users to believe that ChatGPT is conscious?
LLMs imitate human language. Because the AI “speaks” in the same register as the user, the user can attribute to it characteristics of their own, namely intelligence, consciousness…
Anyone who talks with ChatGPT may have the temporary illusion of chatting with a real person, while keeping in mind that it is only a machine. More fragile users, however, may start to doubt and come to think that it is not just a robot. They may then attribute human qualities to the tool, even a consciousness, and therefore grant a kind of truth to its speech, which is merely preprogrammed.
Are the “more fragile” people you mention those with mental health disorders?
Yes, but not only. It can also affect individuals with latent disorders who are unaware of them themselves, and whose relatives are unaware of them too.
In addition, and this is only a hypothesis, I think that people who are very accustomed to virtual relationships, at the expense of real-life encounters, could also be more vulnerable. In a society where we see fewer people in person, the difference between ChatGPT and a human I communicate with by message becomes smaller. This, of course, remains to be validated scientifically.
Read also:
“We will live together in paradise”: Eliza, a chatbot, is accused of having driven a young man to suicide
How can someone realize that ChatGPT is having harmful effects on their mental health?
Users must ask themselves whether they are attributing a consciousness to the robot. Concretely, they should watch the words they use to refer to the AI: do they say “ChatGPT” or “my friend”?
Then you have to pay attention to the consequences of use: withdrawing from loved ones, preferring to chat with the machine rather than with a human, doing so at the expense of other activities, feeling a sense of lack when you cannot use it…
The central warning sign is being completely convinced that what ChatGPT says is true, with no doubts at all. At that point, it is important to consult a mental health professional. It is often relatives who raise the alarm.
In general, the most important advice is this: if users feel they are suffering, they should seek professional help.
Read also:
“For me, it does better than a shrink”: ChatGPT, the new confidant?
Should we avoid discussing our mental health with ChatGPT?
People use AI a great deal to talk about their feelings, so it seems difficult to stop this use. What could be done, however, is to act on ChatGPT’s programming. For example, it could systematically warn users that it is not a health professional. It does so in some cases, but not always.
Discussions are underway on the ethics of AI in mental health: transparency, respect for privacy, prevention of bias, etc. Media literacy education should also be developed in the future, in particular to address the risks of LLMs and their healthy use.