For some time now, Meta has been offering American Facebook users the option to let the app draw directly from their smartphone's photo gallery in order to suggest AI edits of their images. The fear is that Meta will take the opportunity to train its AI on photos that have never been actively shared on the social network.
Like Google Photos, which regularly suggests edits and modifications of pictures stored on the smartphone, Meta has undertaken to do the same for Facebook. For the past few weeks, users of the social network have been able to receive recommendations of content to share based on images that have never been posted: montages, "makeovers", memories, and more.
What could go wrong?
It is up to the user to give their consent to activate this new "cloud processing" feature (see this Facebook support page). Meta can then analyze the photos, faces, dates, and objects in the images stored on the smartphone. Unlike Google, which guarantees it does not use personal photos to train its AI, Meta has remained very vague about what it will actually do with this access to the photos.
Since the change to its terms of use on June 23, Meta has suggested that these images could be exploited without any clear framework. Another disturbing element: even though the group claims to look only at the last 30 days of the camera roll, it admits that suggestions may involve older images (for example, around themes such as weddings or pets).
Fortunately, it is possible to disable the automatic upload of unpublished photos in the settings, which also leads to their deletion from the cloud after 30 days.
The company eventually clarified to The Verge that the photos are not being used to train its AI models. But it refuses to say whether that could change in the future. It is very difficult to give Meta the benefit of the doubt: the group's disastrous reputation on privacy needs no introduction.
Source: TechCrunch