In early July 2025, a study of 2024 publications caused a stir: according to its authors, one biomedical article in eight shows traces of artificial intelligence. Telltale words, an overly smooth style, a turn of phrase that seems … artificial. Yes, AI has quietly infiltrated the laboratories, discreetly signing a growing share of global research. Most worrying of all: almost nobody notices.
How can we identify scientific texts written (in part) by an AI?
The figure is dizzying: 13.5% of abstracts published on PubMed in 2024 may carry the mark of a tool like ChatGPT. The estimate comes from a team of researchers at Northwestern University and the Hertie Institute for AI applied to health. Their method? A veritable word hunt: "delve", "underscores", "showcasing" … terms once rare, now omnipresent in biomedical abstracts.
These stylistic clues alone are enough to arouse suspicion.
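As a rough illustration (not the study's actual methodology), the word-hunt idea can be sketched in a few lines of Python: count how often known AI-favored marker words appear in an abstract and flag it past a threshold. The marker list and the threshold below are illustrative assumptions, not the researchers' real parameters.

```python
import re

# Illustrative set of AI-favored "marker" words mentioned in the study;
# the researchers' actual word list and statistics are more elaborate.
AI_MARKERS = {"delve", "delves", "underscores", "showcasing", "showcases"}

def marker_hits(abstract: str) -> list[str]:
    """Return the marker words found in an abstract, in order of appearance."""
    tokens = re.findall(r"[a-z]+", abstract.lower())
    return [t for t in tokens if t in AI_MARKERS]

def looks_ai_flavored(abstract: str, threshold: int = 2) -> bool:
    """Crude heuristic: flag an abstract containing at least `threshold`
    marker words. A stylistic clue, not proof of AI authorship."""
    return len(marker_hits(abstract)) >= threshold

text = ("This study delves into tumor signaling, showcasing a pathway "
        "whose role underscores the need for further work.")
print(marker_hits(text))        # ['delves', 'showcasing', 'underscores']
print(looks_ai_flavored(text))  # True
```

As the article notes, such frequency clues can only raise suspicion: a human author who genuinely likes "delve" would trip the same heuristic.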
Is it irrefutable proof? Not really. Stuart Geiger, professor at the University of California, San Diego, reminds us that language evolves naturally, and these words may simply reflect a stylistic trend. Still, when that trend mirrors a chatbot's tics to the letter, doubt sets in.
And that doubt is formidable. What it shakes is not just the form of the writing; it is confidence in science itself.
When confidence wavers: is AI a tool or an impostor?
This is not the first time AI has shaken our bearings. What changes this time is its power of mimicry. It is no longer content to help: it replaces, sometimes without anyone knowing exactly when the machine took over from the human.
Kathleen Perley, of Rice University, nevertheless defends an ethical use of AI: easing access to publication for non-native English speakers or neuroatypical researchers, helping them cross the language barrier, making the invisible visible. Why, then, penalize those who use a tool to better convey their ideas?
But the border between assistance and substitution remains blurry. Worse, the tools meant to detect AI-generated text are themselves unreliable: ZeroGPT rates the United States Declaration of Independence as 97% AI-generated, while GPTZero puts the figure at about 10%. Who to believe?
The real question may lie there: if even the experts can no longer tell a researcher from a chatbot … what remains of scientific rigor?
A science still rigorous … but less and less human?
On the surface, nothing has changed. Articles keep pouring in, laboratories publish, journals review. Yet behind the scenes, artificial intelligence acts as a ghostwriter. It is not yet ready to think for us, but it already knows how to write in our voice.
And that is a silent tipping point. An invisible molt. A brewing existential crisis: in the end, can science remain human when its words no longer are?