New advances in the detection of bias in face recognition algorithms

A team from the Computer Vision Center (CVC) and the University of Barcelona has published the results of a study that evaluates the accuracy of automatic face recognition algorithms, and their bias with respect to gender and skin color, tested with real-world data.

Although the top solutions exceed 99.9% accuracy, the researchers detected some demographic groups that show higher false positive or false negative rates.

Face recognition is routinely used by both private and governmental organizations worldwide. Automatic face recognition can serve legitimate and beneficial purposes (e.g. improving security), but at the same time its power and ubiquity increase the potential negative impact that unfair methods can have on society (e.g. discrimination against ethnic minorities). A necessary, albeit not sufficient, condition for a legitimate deployment of face recognition algorithms is equal accuracy across all demographic groups.
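The fairness condition above can be made concrete by comparing error rates per demographic group. The sketch below uses invented numbers (not data from the study) to show how a false positive rate (FPR) and false negative rate (FNR) gap between two hypothetical groups would be measured in a face verification setting:

```python
# Illustrative sketch with invented data: per-group error rates for a
# face verification task. Each outcome is (predicted_match, true_match).

def error_rates(pairs):
    """Return (FPR, FNR) for a list of (predicted, actual) booleans."""
    fp = sum(1 for pred, actual in pairs if pred and not actual)
    fn = sum(1 for pred, actual in pairs if not pred and actual)
    negatives = sum(1 for _, actual in pairs if not actual)
    positives = sum(1 for _, actual in pairs if actual)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical verification outcomes, grouped by a demographic attribute.
results = {
    "group_a": [(True, True)] * 98 + [(False, True)] * 2
             + [(False, False)] * 99 + [(True, False)] * 1,
    "group_b": [(True, True)] * 94 + [(False, True)] * 6
             + [(False, False)] * 97 + [(True, False)] * 3,
}

for group, pairs in results.items():
    fpr, fnr = error_rates(pairs)
    print(f"{group}: FPR={fpr:.2%}, FNR={fnr:.2%}")
```

Even when overall accuracy is high for every group, a systematic gap in FPR or FNR between groups is exactly the kind of disparity the study looks for.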

With this purpose in mind, researchers from the Human Pose Recovery and Behavior Analysis Group at the Computer Vision Center (CVC) – University of Barcelona (UB), led by Sergio Escalera, organized a challenge within the European Conference on Computer Vision (ECCV) 2020. The results, recently published in the journal Computer Vision – ECCV 2020 Workshops, evaluated the accuracy of the algorithms submitted by the participants on the face verification task in the presence of other confounding attributes.

“[The study] attracted 151 participants, who made more than 1,800 submissions in total, exceeding our expectations regarding the number of participants and submissions,” explained Sergio Escalera, also member of the Institute of Mathematics of the UB.


Atomium-EISMD