A 60-year-old American who simply wanted to replace the salt in his diet reportedly ended up in the hospital in full-blown psychosis after poisoning himself with sodium bromide on the advice of ChatGPT.
“This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes,” wrote the medical team from the University of Washington in Seattle, in a case report cited by The Guardian.
According to the report, published on August 5 in the journal Annals of Internal Medicine, the 60-year-old recently presented at a hospital convinced that his neighbor was trying to poison him.
Within the first 24 hours, the man reportedly exhibited “increasing paranoia and auditory and visual hallucinations,” which led to an involuntary psychiatric hold after an escape attempt, the report states.
Once his condition had stabilized and he had regained his senses, the patient reportedly told his care team that for three months he had replaced the table salt in his diet with sodium bromide, a substance used as a sedative in the early 20th century, the British outlet noted.
But bromide poisoning, known as bromism, can cause a whole range of symptoms, including anxiety, hysteria and insomnia, which is why its use was phased out by the U.S. Food and Drug Administration (FDA) between 1975 and 1989.
At the time, bromism was responsible for 8% of psychiatric hospitalizations, according to the report.
Decontextualized information
It was when he asked ChatGPT for advice on replacing table salt that the artificial intelligence reportedly replied, among other things, that “sodium chloride [or table salt] can be replaced with bromide, likely for other purposes, such as cleaning.”
“AI carries the risk of spreading decontextualized information, as it is highly unlikely that a medical expert would mention sodium bromide to a patient looking for a viable substitute for sodium chloride,” the authors stressed.
Although they did not have access to the patient’s ChatGPT history, the authors ran their own queries on ChatGPT 3.5 and confirmed that bromide was indeed mentioned.
“Although the reply stated that context matters, it did not provide a specific health warning […] nor ask why we wanted to know, as a healthcare professional would,” they said.
Beyond warning the public that AI systems can “generate scientific inaccuracies,” the authors conclude that doctors will have to learn to adapt their care according to where their patients “consume health information.”