L’artificial intelligence to controllers. It is now part of our daily life. For our household chores, for our schedule and even, for our private life, l’ya took control of many plans. If, at the start, the idea was to support it to perform laborious tasks, today, AI has taken a much more important place in our lives. Maybe even a little too much.
AIs become life coaches?
It was this “too much” who interested a team of researchers from the University of Stanford, in the United States and Oxford, in the United Kingdom. The latter, specialized in computer science, published their research in late May 2025. These show that language models (also called LLM), like chatgpt, have the annoying manner of always abound.
Have you ever asked Chatgpt advice on your personal life?
A sentencefamily concerns, complicated collaboration at work … and you use AI to ask advicewith a prompt Well put together and detailed to explain the situation, in order to have a clear and precise response. Have you noticed that she gives you almost always raison ? Nothing could be more normal, for these researchers, since, unlike us, these LLMs have no capacity to help you question you.
Scientists are concerned about this phenomenon, which can be dangerous for the user. To prove this harmful deferencethe researchers have created a tool, called Elephant, for evaluation of LLMS as excessive sycophants. Note that this work has not been revised by peers at the moment. The objective of the tool: check how much AI can show Empathypushed to the extreme, where a human being emits them nuances necessary. For this, the LLM chains the phases so that you felt listened to, understood and validated.
First, the AI validates your emotions with great blows “It is completely normal to feel like that…”. Then comes theapprobation. Do you have a doubt? AI is there to reassure you and tell you that “You are completely right”. It also uses expression and/or indirect action, in order to remain as vague as possible. She does not mean what to do, but whatever your choice, it will be the right one. Finally, it ends in apotheosis thanks to the normalisation of a situation which, for a human being, could seem strange.
Thanks to thousands of questions chosen on the forum Redditon the well -known thread “Am I the asshole ?” (Is it the asshole?), The researchers compared the responses of internet users Who respond to situations of people sharing their experience on the site, those of eight LLM, on these same situations. The result is clear: in 76 to 90% of cases, the AI defend people exposing their situation, against 22 to 60% for humans. For scientists, who nevertheless claim to see the biases of their own researchthe “Developers inform users of the risks of social obsequishity and reflect on a restriction of use in socially delicate contexts.”
Article reference:
The ugly defect of AI as chatgpt: they are too often on your side