Recognizing a pan whose outline is half erased, or a hat sketched in just a few lines, comes naturally to us humans. This ability is called "contour integration": our brain completes the shapes to give them meaning. But at this game, even the most powerful AIs cannot (yet) compete with us, as EPFL has just demonstrated.
EPFL's NeuroAI Laboratory, led by Martin Schrimpf, in collaboration with Michael Herzog's Laboratory of Psychophysics, compared the performance of 50 humans against that of more than 1,000 artificial neural networks. Their task: recognize everyday objects whose contours had been partially erased, sometimes by up to 65%. The result: humans guessed correctly in 50% of cases, even with few visual cues, while the AIs often fell back on random guesses. "Only models trained on billions of images came close to human performance, and even then they had to be specifically adapted to the images used in the study," EPFL reported.
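To give a concrete sense of the stimuli, here is a minimal Python sketch of how such contour-deletion images could be generated. The function name, the point-based representation, and the circle "object" are illustrative assumptions, not the study's actual stimulus pipeline.

```python
import numpy as np

def erase_contour(points: np.ndarray, fraction: float,
                  rng: np.random.Generator) -> np.ndarray:
    """Randomly drop a given fraction of contour points (e.g. 0.65 = 65% erased).
    Hypothetical helper for illustration only."""
    keep = rng.random(len(points)) >= fraction
    return points[keep]

# Toy "object": a circle sampled as 200 contour points (illustrative only).
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Erase 65% of the contour, matching the hardest condition described above.
degraded = erase_contour(contour, fraction=0.65, rng=rng)
print(f"{len(degraded)} of {len(contour)} contour points remain")
```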
Digging deeper, the researchers also identified a natural "integration bias" in humans: in plain terms, we instinctively group together fragments that point in the same direction. By building this bias into their models, the researchers improved the AIs' accuracy (see the sketch below).
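As a rough illustration of what such a bias might look like computationally, the toy sketch below scores pairs of contour fragments by how well their orientations align and binds those that point in the same direction. This is a simplified stand-in under stated assumptions, not the actual model modification used by the researchers.

```python
import numpy as np

def alignment_score(angle_a: float, angle_b: float) -> float:
    """Score in [0, 1]: 1 when two fragments share the same orientation,
    0 when perpendicular. cos**2 makes orientations 180-degree periodic."""
    return float(np.cos(angle_a - angle_b) ** 2)

def group_aligned_fragments(angles: np.ndarray,
                            threshold: float = 0.9) -> list[tuple[int, int]]:
    """Return index pairs an 'integration bias' would bind together
    (hypothetical grouping rule, for illustration only)."""
    pairs = []
    for i in range(len(angles)):
        for j in range(i + 1, len(angles)):
            if alignment_score(angles[i], angles[j]) >= threshold:
                pairs.append((i, j))
    return pairs

# Fragment orientations in radians: three near-horizontal pieces, one vertical.
fragments = np.array([0.0, 0.1, np.pi, np.pi / 2])
print(group_aligned_fragments(fragments))  # binds only the near-horizontal ones
```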
These results, presented at the 2025 International Conference on Machine Learning (ICML), suggest that contour integration is not an innate trait but can be learned from experience. For fields such as autonomous vehicles and medical imaging, building AI with more "human-like" vision could lead to safer, more reliable technology.