By Lisa Marshall (Jour, PolSci’94)
With a brief glance at a single face, emerging facial analysis software can now categorize the gender, race, emotional state and even identity of individuals, often with remarkable accuracy.
Such technologies enable us to do everything from logging in to our smartphones to finding matches on dating apps. And increasingly, airport security and law enforcement agencies deploy them for surveillance.
But along with their burgeoning use have come some troubling questions: How often do these systems get it wrong? Why? And what’s the potential human toll?
Information science PhD student Morgan Klaus Scheuerman set out to find answers and discovered what he views as a dangerously subtle form of systemic bias.
“The fear that a lot of people have had over this technology is being realized,” Scheuerman says.
In one June incident, Detroit police arrested a Black man named Robert Williams in front of his children after a facial recognition service matched his driver’s license photo to a still image from a security video of a shoplifting incident. When police asked whether the image was him, he responded: “No. You think all Black men look alike?” All charges were eventually dropped.
“We have labor laws and advertising laws and housing laws against racial and gender discrimination, but there are no laws specific to embedding discrimination into algorithms,” Scheuerman says. “The only way we discover discrimination is if it happens to us.”
Previous research shows that, while facial recognition software is remarkably accurate at assessing the gender of white men, it misidentifies the gender of women of color about one-third of the time.
In a gender-focused follow-up study published in Proceedings of the ACM on Human-Computer Interaction, Scheuerman found that although the systems are adept at identifying cisgender women (those assigned female at birth who identify as such) and cisgender men, they falter when faced with people who don’t fit those categories.
If a face belongs to a trans man, the systems misidentify him as a woman 38% of the time. And because the systems recognize only binary categories, when presented with a person who identifies as neither male nor female, they get it wrong 100% of the time.
“While there are many different types of people out there, these systems have an extremely limited view of what gender looks like,” Scheuerman says.
In another paper, Scheuerman pored over 92 image databases to learn how algorithms are “trained” to know the difference between, for instance, a Black face and a white face. In most cases, he explains, computer scientists or volunteers make the call, bringing their own biases to the table as they label images.
“Databases are supposed to be these useful, transparent tools for creating facial analysis models. But right now, the decisions being made around race and gender are so opaque and so inconsistent they are unreliable at best and dangerous at worst,” Scheuerman says.
The now-widespread use of face masks will likely pose additional challenges to the accuracy of facial recognition, an effect future studies will need to address.