Facial recognition
Teaching the AI behind the cameras to say “forget that”
In Neuchâtel, CSEM is training artificial intelligence models so that video surveillance does not retain the faces it captures.
Surveillance camera photo (left) and the reconstructed versions compared with it: one produced by a conventional AI model, which retains certain identifying features in spite of itself (center); the other processed with the CSEM method, which prevents any faithful image reconstruction.
In brief
- CSEM trains the artificial intelligence models used in facial recognition to make them forget personal data.
- “We ask it to forget the information that is not relevant to its task,” says Nadim Maamari, group leader at CSEM.
- “The problem remains as much legislative as it is technical,” tempers Christophe Remillet, specialist at the Lausanne-based company Onevisage.
Counter-intuitive: forcing an algorithm designed to learn to recognize a face to ignore its details. Yet this is what the Swiss Center for Electronics and Microtechnology (CSEM) announced in the middle of the week, with artificial intelligence models that “forget faces”.
A presentation that comes as the general public has never been so worried about the generalized surveillance to which it is subject, online as in public space, and as the United Nations conference AI for Good, which seeks to put this technological revolution at the service of the common good, wraps up this Friday in Geneva.
Revolt against Chinese-style mass surveillance
Already omnipresent – to unlock your phone as well as in airports and hospitals – facial recognition raises major concerns about privacy and consent.
Four years ago, calls were made in Switzerland for a national ban on its use for surveillance purposes. Geneva banned all mass biometric surveillance more than a year ago, and in early May the municipal council of Lausanne rejected its use as well. As in the European Union, which is finalizing its AI Act, continuous identification systems may be deployed in public space only in the context of police operations, and remain prohibited in the absence of legal justification.
Under the hood of AI
The AI model training method developed by CSEM's Edge AI and Vision Systems group aims to “support this regulatory framework” by ensuring that the AI retains no sensitive personal data. This re-education of artificial intelligence rests on a so-called “adversarial learning” strategy, which detects when the system tries to keep information it should not.
“For an AI model to be really useful in sensitive and very busy environments such as stations or hospitals, it must focus only on the elements necessary to its mission – for example, detecting abnormal behaviors – without taking into account personal information such as skin color, gender or age,” explains Nadim Maamari, group leader at CSEM. “Our innovative approach guides the AI to make it forget this sensitive data from the learning phase onward – which not only helps preserve privacy, but also to develop systems that are not biased by the data on which they were trained,” he continues.
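CSEM has not published code for this approach. Purely as an illustration of what an adversarial training loop of this kind can look like, here is a minimal PyTorch sketch in which an adversary tries to recover identity from the model's features while a gradient-reversal layer pushes the encoder to defeat it. All module names, sizes and the `lambd` weighting are assumptions for the sketch, not CSEM's actual method.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# Hypothetical dimensions: 64x64 grayscale frames, 128-d features, 100 identities.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
task_head = nn.Linear(128, 2)    # the useful task, e.g. "abnormal behavior?"
adversary = nn.Linear(128, 100)  # tries to recover who is in the frame

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters())
    + list(adversary.parameters()), lr=1e-3)

def training_step(images, task_labels, identity_labels, lambd=1.0):
    feats = encoder(images)
    task_loss = nn.functional.cross_entropy(task_head(feats), task_labels)
    # The adversary sees the features through the reversal layer: it learns
    # to identify faces, while the encoder receives the opposite gradient
    # and learns features from which identity cannot be recovered.
    adv_logits = adversary(GradReverse.apply(feats, lambd))
    adv_loss = nn.functional.cross_entropy(adv_logits, identity_labels)
    loss = task_loss + adv_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return task_loss.item(), adv_loss.item()
```

In this setup, the better the adversary gets at naming faces, the more strongly the encoder is pushed to discard the identifying detail, which matches the idea of detecting and suppressing information the system "should not" keep.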
In his tests, Nadim Maamari compared two versions of the same AI model: one trained in the conventional way, which retains certain identifying features in spite of itself, and another trained with his method, which prevents any faithful image reconstruction.
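The article does not detail how this comparison was run. One common way to probe what a model still leaks, assuming encoders like the one sketched above, is to freeze the trained encoder, fit a decoder that tries to rebuild the input image from the features, and compare the residual error of the two models:

```python
import torch
import torch.nn as nn

def reconstruction_probe(encoder, images, epochs=100):
    """Train a decoder on frozen features; high final error means the
    features no longer carry enough detail for a faithful reconstruction."""
    decoder = nn.Sequential(nn.Linear(128, 64 * 64), nn.Unflatten(1, (1, 64, 64)))
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    with torch.no_grad():
        feats = encoder(images)  # the encoder itself is never updated
    for _ in range(epochs):
        loss = nn.functional.mse_loss(decoder(feats), images)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Comparing the two models from the article would then look like:
# err_classic = reconstruction_probe(classic_encoder, test_faces)
# err_csem    = reconstruction_probe(csem_encoder, test_faces)
# A much larger err_csem indicates the identifying detail has been "forgotten".
```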
Verification versus identification
“Systems of the kind put forward by CSEM respond to the population's fear of being traced and constantly watched – even if, in reality, the problem remains as much legislative as it is technical,” tempers Christophe Remillet, facial verification specialist at the head of Onevisage, in Lausanne. His company uses biometrics to teach a machine to verify the recorded face of a person presenting themselves for authentication, for example at the entrance to a secure area, and contrasts this verification with large-scale identification.
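In code terms, the distinction Remillet draws can be summarized roughly as follows. This is a hypothetical sketch using cosine similarity over face embeddings; the 0.6 threshold and the dictionary gallery are arbitrary illustrations, not Onevisage's product:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding, enrolled_template, threshold=0.6):
    """1:1 verification: is this the person who enrolled?
    Only one consenting person's template is needed."""
    return cosine(live_embedding, enrolled_template) >= threshold

def identify(live_embedding, gallery):
    """1:N identification: who is this, out of everyone on file?
    Requires storing a whole database of people's biometrics."""
    name, _ = max(gallery.items(), key=lambda kv: cosine(live_embedding, kv[1]))
    return name
```

Verification can work against a single locally stored template, whereas identification only works if a gallery of faces is stored and searched, which is exactly the database accumulation Remillet criticizes below.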
“Identifying faces one by one in a crowd is an aberration – in terms of preserving anonymity, obviously, but also in terms of security, given the error rates and the scams on facial payment systems,” points out this veteran of the sector, who recalls that “surveillance by facial recognition necessarily means storing photos in a database – which the Chinese authorities do, as do the GAFAM (editor's note: the American internet giants), not to mention the New York company Clearview, which collects billions of photos from the networks in order to resell its facial recognition services to police forces around the world, with impunity,” says Christophe Remillet.