Tuesday, June 24, 2025

Israel-Iran: a turning point for AI-generated disinformation


Under a thick cloud of dust, gutted skyscrapers appear to collapse in a chaos of broken glass and pulverized concrete. The city is in ruins. “Hello Tel Aviv,” reads the post accompanying the photo, which has been viewed more than 39 million times on the social network X.

While Iranian missiles have indeed caused destruction in Israel over the past week, the image described above is entirely fake. It was generated using artificial intelligence (AI) tools, but at first glance it looks completely real.

On closer inspection, one of the buildings blends into the background and the text on another is distorted, two telltale signs of an AI-generated image. Not to mention that it was originally posted as a video by a TikTok account that publishes AI-generated content exclusively.

But many internet users, who probably saw it for only a few seconds while scrolling on a small phone screen, believed it.


This tweet has been viewed more than 39 million times.

Photo: Screenshot / X

This is only the tip of the iceberg, and it is not even the most realistic artificial image to go viral on social networks since the escalation of hostilities between Israel and Iran.

A spectacular but convincing fake photo showing missiles falling on Tel Aviv at night was, for example, viewed 27 million times on X. It was first published on Facebook by a user who indicated that it had been generated by AI. The image also carries an invisible watermark revealing that it was made with Google's AI tools, which shows up when a reverse image search is run on the search engine.

One thing is certain: the past few days mark a turning point for AI-generated disinformation, both in the quantity of fake images circulating and in their realism, even if they remain imperfect.

“I think last week is the first time Full Fact has fact-checked more AI-generated content than real media taken out of context,” said Andrew Dudfield, head of AI at the fact-checking outlet Full Fact, in a post on LinkedIn. “That strikes me. Out-of-context content has dominated the landscape for so long.”

Hyperrealistic videos

Since the advent of social networks, the first hours after a major breaking news event have been marked by an information vacuum that fills with rumors and decontextualized images. Afterwards, the record is gradually set straight by internet users, journalists and authorities.

But since generative artificial intelligence tools have become more accessible and more hyperrealistic, “there has been a steady increase in AI-generated content during breaking news events,” says Emmanuelle Saliba, a former visual investigation journalist who now heads investigations at GetReal Labs, a technology company specializing in detecting and fighting malicious synthetic content.

“I wouldn't say it's new, but it's the first time we've seen it used this much in the context of a war.”

According to Ms. Saliba, other recent news events marked by AI-generated images include Hurricane Milton in Florida in 2024 and the January 2025 wildfires in Los Angeles. Since then, not only has artificial content become more realistic, but a video generator more capable than its competitors has arrived: Google's Veo 3, launched at the end of May, which can also produce sound synchronized with the image.

A fake video of a missile striking a building in Israel, created with Veo 3, was first shared by an Iranian state media outlet, the Tehran Times, at the start of the week. While the Tehran Times' original post carried a watermark in the lower right corner of the video clearly indicating that it had been generated by Veo, several accounts reposted it with the image cropped so that the watermark disappears.

A missile is seen striking a building.

This AI-generated video was initially posted by the Iranian state media outlet Tehran Times.

Photo: Screenshot / X

GetReal Labs was able to quickly identify the many Veo-generated videos circulating on social networks in recent days because the lab collaborated with Google on the development of SynthID, an invisible watermarking technology that marks content generated by Veo. However, SynthID detection tools are not yet available to the general public, and no release date has been announced.

“For the average person, I would advise simply being skeptical of everything we see. Some content contains visible watermarks, but otherwise we really need more awareness campaigns so people understand how much the technology is improving,” warns Emmanuelle Saliba.

Another caveat: not all generative AI technologies include watermarks, or even safeguards. While Veo has made it possible to generate realistic images of missiles striking buildings, it cannot, for example, produce violent and bloody images.

“When public or open-source models are able to produce such realistic content, it will be almost impossible to maintain safeguards strong enough to keep people from producing problematic content,” predicts Aengus Bridgman, director of the Media Ecosystem Observatory at McGill University and the University of Toronto.

“The technology is advancing quickly, but it is not yet perfect,” adds Mr. Bridgman. “But I'm afraid to see what Veo 4 will look like.”
