Artificial intelligence (AI) has been making headlines since ChatGPT became accessible to the general public in November 2022. The trend did not let up this summer:
These headlines, as catchy as they are worrying, come from the New York Times and the New Yorker respectively. Both articles discuss the results of a recent preprint study conducted by an MIT team.
Opinions about the benefits and risks of AI for our ability to think, memorize and learn are deeply divided. Cognitive psychology and scientific research on the use of AI offer an informed perspective on these mechanisms and can contribute to the debate surrounding the effects of ChatGPT, and of AI more broadly, on humans.
A headline-grabbing MIT study
In this MIT study, 54 people were invited to take part in an essay-writing task on a variety of topics (e.g., "Should we always think before acting?").
Each person was randomly assigned to one of three conditions: 1) with ChatGPT support; 2) with Internet access only; or 3) without any help writing the essays. Each person completed three essays and then, optionally, a fourth, but under a different condition (e.g., moving from ChatGPT support to no help at all). Participants' brain activity was measured throughout.
The reported results suggest that the brains of individuals supported by ChatGPT were less active in several key regions linked to creativity, cognitive effort, attention and memory.
The research team also reports that individuals assigned to this group were less able to quote passages from their essays once the experiment was over, and that their texts had less depth than those produced by members of the other groups.
Finally, people who moved from the ChatGPT group to another condition for the fourth essay showed a pattern of brain activity similar to that observed during their previous trials. That pattern, however, was ill-suited to the new strategy they had been assigned.
A bit of nuance
In light of these results, and despite the absence of peer review, many quickly jumped to the conclusion that this was proof that AI and conversational agents like ChatGPT can harm human learning and creativity.
Such alarmist rhetoric is common whenever new technologies arrive, as illustrated in a segment by the team of the program Les Années lumière on ICI Première on July 13, 2025.
What do the results actually show? They suggest that brain activity associated with functions key to learning, memory and attention was lower. But it is entirely normal for this activity to be lower if those functions are engaged less.
Moreover, a closer look at the results even shows that certain brain functions associated with movement and, in part, with memory and verbal processing were more active in individuals supported by ChatGPT than in those using the Internet.
Finally, the absence of systematic statistical analyses of the ChatGPT-supported group's difficulty in quoting passages from their essays, combined with the fact that only 18 individuals agreed to return for a fourth session, represent important limitations of the study.
The results may therefore be more nuanced (and less terrifying) than presented.
A different approach to studying ChatGPT
Alongside the MIT work, our Université Laval team recently published a study on the effects of conversational agents on learning.
Sixty people carried out an information-search task in order to answer 12 open-ended questions on various general-knowledge topics (e.g., "In 75 to 100 words, explain the main environmental challenges facing sea turtles").
Each person was randomly assigned to one of two conditions: 1) with support from a conversational agent similar to ChatGPT; or 2) with Internet access only. For the sake of realism, people supported by the AI had the opportunity to cross-check on the Internet the information the AI offered. Self-reported measures of mental effort, familiarity with the tool used, and prior knowledge of each topic covered were also collected.
At the end of the experiment, a surprise memory test was administered, in which each person had to recall a specific element for each of the 12 questions covered (e.g., "Name a human activity harmful to sea turtles").
Not-so-alarming results
The results of our study show that performance on the open-ended questions and on the memory test presented at the end of the experiment was similar between the two conditions.
Differences were observed, however, in familiarity with the tool and in perceived mental effort. Individuals using the Internet indeed reported greater familiarity, but at the cost of greater effort. These results support the idea that AI tools like ChatGPT can reduce the effort required to perform certain tasks.
Yet, contrary to what the MIT team reported, this difference in effort during the task led to no difference on the memory measures. Interestingly, the majority of individuals supported by the AI verified at least once the elements provided by the conversational agent, potentially contributing to better engagement with the task and better memorization of the information.
Overall, our results not only support a more nuanced view of the effects of conversational agents on learning, but also provide a more realistic portrait of how this technology is used. In everyday life, individuals are free to use tools such as ChatGPT, but also to verify (or not) the information provided, or to draw on a range of additional strategies.
Such an approach, more representative of reality, should be favoured before drawing hasty conclusions about the potential risks of AI technologies. Not only does it allow a more nuanced analysis, it is also more generalizable to everyday life. These results could even motivate individuals to verify the information that AI provides.
So, will AI make us stupid and homogenize our way of thinking?
An informed look at the MIT study shows us that AI is probably not as harmful as some would have us believe. As for the results of our study, they suggest that people who use AI show learning similar to those who did not, and that they even choose to cross-check the information the AI provides, a sign of intelligence and meaningful engagement.
As with any technology, whether or not it makes us stupid will depend on how we interact with AI and on our willingness to remain critical, curious and engaged.