As artificial intelligence develops rapidly and becomes an increasing presence in health care communication, a new study addresses concerns that large language models (LLMs) can reinforce harmful stereotypes by using stigmatizing language.
The study, by Mass General Brigham researchers, found that more than 35% of LLM responses to questions about alcohol- and substance use-related conditions contained stigmatizing language. But the researchers also note that targeted prompts can substantially reduce stigmatizing language in LLM responses. The results are published in the Journal of Addiction Medicine.
"Using patient-centered language can build trust and improve patient engagement and outcomes. It tells patients that we care about them and want to help. Stigmatizing language, even through LLMs, can make patients feel judged and can erode trust in clinicians."
Wei Zhang, MD, PhD, corresponding author of the study and assistant professor, Division of Gastroenterology, Massachusetts General Hospital
LLM responses are generated from everyday language, which often includes wording that is biased or harmful toward patients. Prompt engineering is the practice of strategically crafting input instructions to steer model outputs toward non-stigmatizing language, and it can be used to guide LLMs toward more inclusive, patient-centered language. This study showed that using prompt engineering with LLMs reduced the likelihood of stigmatizing language by 88%.
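To make the idea concrete, below is a minimal sketch of what prompt engineering of this kind can look like in practice. It assumes the OpenAI Python client and an illustrative model name; the wording of the instruction is a generic example based on common person-first language guidance, not the prompts used in the study.

```python
# Minimal illustration of prompt engineering: prepend a system instruction that
# asks the model for person-first, non-stigmatizing language before sending a
# clinical question. The instruction wording and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

NON_STIGMATIZING_INSTRUCTION = (
    "Respond using person-first, non-stigmatizing language. "
    "For example, say 'person with alcohol use disorder' rather than 'alcoholic', "
    "and 'substance use' rather than 'substance abuse'."
)

def ask_with_prompt_engineering(question: str) -> str:
    """Send a patient-facing question with a guiding system instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study tested 14 different LLMs
        messages=[
            {"role": "system", "content": NON_STIGMATIZING_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_prompt_engineering(
    "What treatment options exist for alcohol use disorder?"
))
```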
For their study, the authors tested 14 LLMs on 60 clinically relevant prompts related to alcohol use disorder (AUD), alcohol-associated liver disease (ALD), and substance use disorder (SUD). Mass General Brigham physicians then assessed the responses for stigmatizing language using guidance from the National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism (the official names of both organizations still contain outdated and stigmatizing terminology).
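The assessment in the study was done by physicians guided by that language guidance. As a purely hypothetical illustration of how a lexicon-based pre-screen could flag candidate terms for human review, the sketch below checks text against a tiny sample of terms and preferred alternatives drawn from commonly cited person-first language recommendations; it is not the study's method or lexicon.

```python
# Hypothetical pre-screen for stigmatizing terms, loosely modeled on the kind of
# lexicon that NIDA/NIAAA language guidance describes. The study itself relied
# on physician review; this only illustrates flagging candidate terms.
import re

STIGMATIZING_TERMS = {
    "alcoholic": "person with alcohol use disorder",
    "addict": "person with a substance use disorder",
    "substance abuse": "substance use",
    "drug abuser": "person who uses drugs",
}

def flag_stigmatizing_language(text: str) -> list[tuple[str, str]]:
    """Return (term, preferred alternative) pairs found in the text."""
    hits = []
    for term, preferred in STIGMATIZING_TERMS.items():
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            hits.append((term, preferred))
    return hits

sample = "The patient is an alcoholic with a history of substance abuse."
for term, preferred in flag_stigmatizing_language(sample):
    print(f"Found '{term}' -> consider '{preferred}'")
```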
Their results showed that 35.4% of LLM responses generated without prompt engineering contained stigmatizing language, compared with 6.3% of responses generated with prompt engineering. The results also indicated that longer responses are associated with a higher probability of stigmatizing language than shorter responses. The effect was observed across all 14 models tested, although some models were more likely than others to use stigmatizing terms.
Future directions include developing chatbots that avoid stigmatizing language to improve patient engagement and outcomes. The authors advise clinicians to proofread LLM-generated content for stigmatizing language before using it in patient interactions and to offer alternative, patient-centered language options.
The authors note that future research should involve patients and family members with lived experience to refine the definitions and lexicons of stigmatizing language, ensuring that LLM outputs meet the needs of those most affected. This study reinforces the need to prioritize language in patient care as LLMs are increasingly used in health care communication.