Medical ethics
“At the end of life, AI is sometimes more reliable than loved ones”
Guessing the wishes of a patient in a coma, recommending the withdrawal of treatment … The use of algorithms in medicine is expanding and raises major ethical questions.

The growing use of artificial intelligence in the medical field raises fundamental ethical issues. (Illustration image)
- Artificial intelligence interprets patients’ wishes more accurately than their relatives do.
- Social robots raise ethical questions in care for the elderly.
- AI already excels at the early detection of tumors through medical imaging.
- Algorithms may perpetuate existing discrimination in the healthcare system.
How far can we trust artificial intelligence in care, including palliative care? Will algorithms soon hold the power of life and death over certain patients? Who will be responsible in the event of a medical error?
The growing use of artificial intelligence in the medical field raises fundamental ethical issues. Professor Ralf Jox, bioethicist and director of the Institute of Humanities in Medicine at CHUV, discusses the sensitive questions that will arise in the not-so-distant future.

Ralf Jox is the director of the Institute of Humanities in Medicine at CHUV.
Care fully managed by artificial intelligence (AI): is that realistic?
Yes and no. In the field of health, the potential of AI is enormous, whether for diagnosis, therapeutic prognosis, monitoring or even therapy itself, as some chatbots already do in psychotherapy. It is quite possible to imagine the whole chain of care being managed by AI. But our research focuses on what surrounds care and must remain human: the ethically relevant aspects. These cover decision-making, clinical reasoning and communication.
You work in particular on the end of life, an area that seems difficult to delegate to an algorithm.
A patient’s informed consent and therapeutic project (Editor’s note: expressed, for example, in advance directives) cannot be replaced by an algorithm, that much is certain. On the other hand, one can ask whether AI is capable of standing in for the judgment of a person who is in a coma or who suffers from dementia. Studies show that by feeding in the patient’s medical as well as socio-demographic data, it is possible to create a sort of digital twin that the AI can then question: what would the patient do in this scenario? Would they want to be resuscitated? We found that the result is generally more accurate than asking relatives who have spent decades with the patient.
Meaning what, exactly?
In comparative studies, elderly people and those close to them were interviewed on the basis of concrete cases. Patients had to indicate what they would want if a given situation occurred (Editor’s note: one in which they would no longer have the capacity for discernment). Their loved ones were questioned in parallel. In around 40% of cases, relatives were mistaken about the presumed wishes of the patients. AI does better.
The presumed wishes of the patient depend in particular on the therapeutic prognosis. Is that, again, delegated to the machine?
AI can issue recommendations, advising either that treatment be continued or, on the contrary, that care be stopped. It is then obviously up to the loved ones to give their consent. A study conducted in the United States shows that the majority of families would choose to rely on such a tool. That is understandable, because it relieves them of a potentially very heavy choice.
Imagine the AI is wrong. Who is responsible?
This is a central question. Delegating responsibility to the machine is not the objective; we want to keep humans in the decision-making process. That raises a key question, however: if the algorithm is highly reliable, will doctors and relatives manage to think differently from it, or even to contradict it?
Medicine is already criticized for having lost its human side, partly because it has become so technical. Isn’t this a further step in that direction?
AI can indeed contribute to this depersonalization by reducing contact between professionals and patients. Being a patient is not easy; it means needing to be listened to, consoled, accompanied … If it eliminates these aspects, AI will hinder the recognition of the patient as a person. It could even accentuate the loneliness of certain patients, especially seniors. We must take care to maintain a balance.
Except that AI will be used to save on salaries, not to relieve caregivers so that they can spend more time with patients.
Technical professions such as radiology and pathology will probably be the most affected. This will be less the case for nurses, whose work involves a large share of human contact. Nevertheless, we can easily imagine robots washing or dressing residents of nursing homes. There are even social robots, imitation animals, that can play with them. Is it ethical to offer that to seniors with dementia who believe they are playing with a real cat? It is a very sensitive issue.
To what extent is AI already being used?
It varies greatly from one area to another, but it is already one of the tools that can help doctors, particularly in medical imaging. Take mammography, where AI is more effective than the human eye at detecting tumors. If it can spot them earlier, patients can be treated sooner and therefore have better chances of recovery. It can also help avoid unnecessary examinations or false diagnoses. In principle, this benefits both patients and health costs. On the other hand, there is not yet a predictive algorithm for end-of-life questions like the one we have just discussed. We are developing a model; we will see when it can be applied.
As we know, our health system is biased. Depending on our origin or our gender, we are not treated in the same way. Yet it is on this system that AI is trained. Will it therefore perpetuate, or even accentuate, these biases?
AI will never be representative of the patient population. If it is not deliberately corrected, it will be as discriminatory as the current health system is. The biggest challenge will be to uncover these biases and forms of discrimination once AI is widely applied.