How airport scanners discriminate against passengers of color

Full-body scanners often have trouble reading thick hair and certain head coverings — contributing to racist profiling.

A Transportation Security Administration official works at the automated screening lanes at Miami International Airport on October 24, 2017. Joe Raedle/Getty Images

For years, black passengers at airports across the United States have said Transportation Security Administration agents often pull them aside for so-called “hair pat-downs,” in which agents run their fingers through passengers’ hair, allegedly to look for explosives or other potential security threats. Some say this amounts to racial discrimination by the TSA and that black passengers who wear their hair naturally — or who wear it in styles typically associated with black culture, like braids or dreadlocks — seem to be disproportionately targeted.

A new report by ProPublica suggests that the full-body scanners that have become commonplace at airports across the country are at least partly to blame. These scanners have trouble identifying thick hair and certain head coverings, according to the report. In other words, the very machines that are supposed to determine whether a passenger poses a potential security threat to an airport weren’t designed with people of color in mind.

Full-body scanners, also known as millimeter wave machines, are capable of detecting non-metallic objects, according to the ProPublica report, but aren’t advanced enough to determine what those objects are. That’s where TSA agent pat-downs come in. The result: scanners read thick hair as an unidentifiable, potentially dangerous object, and passengers are forced to endure inconvenient, often embarrassing hair pat-downs.

In a statement to ProPublica, TSA said it was “reviewing additional options for the screening of hair” but emphasized that screenings are “conducted without regard to a person’s race, color, sex, gender identity, national origin, religion, or disability.” TSA policy gives agents the discretion to pull passengers aside for pat-downs if “an individual’s hair looks like it could contain a prohibited item or is styled in a way an officer cannot visually clear it.” (TSA did not immediately respond to Vox’s request for comment.)

But even if the screening process wasn’t designed with discriminatory intent, many black passengers, as well as those who wear certain religious head coverings like hijabs or turbans, feel profiled. The scanners simply weren’t designed to take certain forms of self-presentation into account — and that oversight is resulting in racist profiling, which is a problem regardless of whether it was intentional.

“When that discretion comes into play, unless there is explicit- and implicit-bias training, that can play out in a way that harms people of color, black people,” Abre’ Conner, a lawyer with the ACLU of Northern California, told ProPublica.

This isn’t the first time seemingly “objective” technologies have turned out to discriminate against people of color.

In 2015, two guests at an Atlanta Marriott filmed a now-viral video showing that the automatic soap dispensers in the hotel’s bathroom failed to dispense for black customers. “I wasn’t offended, but it was so intriguing, like, ‘Why is it not recognizing me,’” T.J. Fitzpatrick, the guest who filmed the video, told Mic at the time.

More recently, and more troublingly, a study found that self-driving cars may have trouble detecting people with darker skin tones, meaning the cars could fail to stop when a dark-skinned pedestrian crosses their path. Facial recognition technology has repeatedly been shown to have higher error rates when attempting to identify darker-skinned and female faces. The problem is so widespread that earlier this month, a group of AI researchers from Google, Facebook, Microsoft, and several top universities signed an open letter urging Amazon to stop selling its facial recognition software to law enforcement agencies, claiming it could put women and people of color at risk.

The issue isn’t that the technology itself is racist. Instead, as Morgan Klaus Scheuerman, a PhD student at the University of Colorado Boulder and one of the signatories of the letter, told The Verge at the time, these technologies “are reinforcing human biases” and perpetuating inequality as a result.

In the case of the TSA, biased technologies can lead to longer lines, invasive pat-downs, and missed flights — and most troublingly, to discrimination against passengers of color.
