"is it photo gaza?" :: This article explores the topic in depth.
Furthermore,
"is it photo gaza?" ::
But Grok is wrong: the photo was taken in Gaza on August 2 by Omar al-Qattaa, a photojournalist for AFP.
The photo shows Mariam Dawwas, 9, in the arms of her mother Modallala in Gaza City, who explained to AFP that her daughter weighed 25 kilos before the war, compared with 9 today. As medical support, she receives "only milk, and it is not always available. It is not enough for her to recover," she said.
Confronted with the inaccuracy of its answer, Grok replied: "I do not spread fake news; I rely on verified sources." It eventually admitted its error, but reproduced it the next day in answers to new questions from X users.
"Black boxes"
This sequence illustrates the limits of artificial intelligence tools, which operate as "black boxes," underlines Louis de Diesbach, a researcher in the ethics of technology and author of Bonjour ChatGPT.
"We do not know precisely why they give this or that answer, nor how they prioritize their sources," said the expert, explaining that these tools have biases linked to their training data but also to the instructions of their designers.
The conversational robot of xAI, Elon Musk's start-up, presents, according to him, "even more pronounced biases, which are very much aligned with the ideology promoted, among others," by the South African billionaire, who is close to the ideas of the American radical right.
Asking a chatbot about the origin of an image takes it outside its role, points out Mr. de Diesbach: "Typically, when you look for the origin of an image, it may say: 'this photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in almost any country where there is a famine.'"
"A language model does not seek to produce accurate statements; that is not the goal," insists the expert.
Recently, another AFP photograph, by the same Omar al-Qattaa, published by the daily Libération and showing a child suffering from malnutrition in Gaza, had been falsely located in Yemen and dated to 2016 by Grok, when it was in fact taken in July 2025 in Gaza.
The AI's error led Internet users to wrongly accuse the newspaper of manipulation.
Not just Grok
AI biases are linked to their training data, which conditions the model's knowledge base, and to the so-called alignment phase, which determines what the model will consider a "good" or "bad" response.
And "it is not because it was explained to it that its answer was false that, overnight, it will change its answer, because its training data has not changed, and neither has its alignment," adds Mr. de Diesbach.
The errors are not specific to Grok: questioned by AFP on the origin of the photo of Mariam Dawwas, the conversational agent of Mistral AI, a start-up which, as an AFP partner, can integrate the agency's dispatches into its chatbot's responses, also wrongly indicated that it had been taken in Yemen.
For Louis de Diesbach, conversational agents should not be used to check facts like a search engine, because "they are not made to tell the truth" but to "generate content, whether true or false."
"You have to see it as a mythomaniac friend: it does not always lie, but it can always lie," concludes the expert.