Published: July 24, 2019

LISA: Welcome to Brainwaves, a podcast about big ideas, produced at the University of Colorado Boulder.
I’m Lisa Marshall.
This week: who’s watching you?
In the digital age, there’s more information about you out there in the world than ever before, and it’s getting easier for companies and governments to get their hands on it.
What do they really have on us?
What are they doing with it?
And how are new technologies like facial recognition software playing into all of this?
Last week, it seemed like the whole internet turned old for a minute.
A mobile app called ‘FaceApp’ went viral.
FaceApp uses artificial intelligence to change the way you look in a photo, and a whole lot of people were using it to make themselves look like a great-grandparent decades before their time.
Then privacy advocates started raising red flags, warning that the Russian-made app could be harvesting photos and other sensitive data from users for the government there to use with ill intent.
Around the same time, news organizations began to report that U.S. Immigration and Customs Enforcement-- or ‘ICE’-- was using facial recognition software here to comb driver’s license databases in search of deportation targets.
That got us thinking-- how accurate and reliable is facial recognition? Is it ethical to use it for law enforcement?
And could other governments be using our faces for their own purposes?
Brainwaves’ Paul Beique sat down with Jed Brubaker, a facial recognition researcher and assistant professor of information science at CU Boulder. 

[interview]
PAUL: Jed, welcome to Brainwaves.
JED: Thanks for having me.
PAUL: How widely is facial recognition software actually being used?
JED: It's really hard to give a number, but what we can see when we look at the technical infrastructure that all of our apps and systems are being developed on these days, is that facial recognition is now a central service that's provided on platforms like Google, Amazon and others who really provide the kind of technical infrastructure for the apps you probably have open right now. 
PAUL: So it's widely used, and becoming more widely used. 
JED: We don't have a good way of knowing exactly how widely used it is, but it's becoming more available to developers, so it's able to become a feature or a component of technology in a way that it just wasn't ten years ago.
PAUL: What legitimate uses do you see for this kind of technology?
JED: There are many ways that facial recognition is showing up that we find convenient and helpful in our lives. When we think about facial recognition, we're often thinking of closed-circuit television or surveillance scenarios, but anyone who has the latest iPhone is using facial recognition to unlock their phone with their face. Even when you upload a photo onto Facebook, it suggests who it thinks the right people to tag might be. Or maybe you're looking for that photo from that wedding a few years ago and you search for a friend by name. These are all examples of facial recognition that, at least in my life, I find beneficial.
PAUL: You're working on some research on facial recognition and bias. Can you talk to us a little bit about that?
JED: So in the Identity Lab we look at how our identities are captured and represented in technology and data. One of the areas we've been focusing on over the last year has been facial recognition; this has been led by a really talented PhD student named Morgan Klaus Scheuerman. We've been looking at how these technologies are made available on our cloud-based infrastructures and what forms of bias might exist inside them. There's been a lot of work that has looked into the structural problems in how facial recognition operates, and a lot of this has to do with what's called training data. Facial recognition, in its most simple form, is taught to recognize something by being shown it over and over again, but what happens far too often is that we only show these systems a certain kind of person. So there's been a lot of talk about racial bias, and this creates problems when we think about the uses of facial recognition in, say, policing and surveillance. We've been looking at it from the perspective of gender. Gender, in 2019, is something more complicated than just male or female. What we found is that, sure, things were great when you were talking about male or female, but these systems didn't do so well when it came to people with trans identities, and there are certain kinds of gender identities that actually just aren't even visible. All computer vision can do is see; all facial recognition does is recognize things it can see. If you can't see the thing, if the gender isn't visible, well then you can't detect it. So we found these kinds of fundamental, subtle biases in how facial recognition itself is even conceptualized.
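Jed's point about training data can be illustrated with a toy sketch. This is not his lab's code; the feature vectors and labels below are invented for illustration. The key behavior it demonstrates is real, though: a classifier can only ever output the categories present in its training data, so anyone outside those categories gets forced into one of them.

```python
from collections import Counter

def train(examples):
    """'Train' a toy nearest-centroid classifier: average the
    feature vectors seen for each label in the training data."""
    sums, counts = {}, Counter()
    for features, label in examples:
        counts[label] += 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the nearest trained centroid. The model can only
    ever output labels it was shown during training."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Training data containing only two gender labels -- that choice
# is itself the bias Jed describes.
data = [([1.0, 0.0], "male"), ([0.9, 0.1], "male"),
        ([0.0, 1.0], "female"), ([0.1, 0.9], "female")]
model = train(data)

# A face that fits neither trained category is still forced into one.
print(predict(model, [0.5, 0.5]))
```

However confident or unconfident the model is, every input lands in one of the trained bins; identities the training set never represented are simply not expressible in the output.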
PAUL: So when this software is being used on someone's face, what is it saying about who this person is? In what ways are they being identified, into what categories?
JED: Yeah, I love that you asked this question, because this is exactly what our research looked into. Typically there's a variety of different approaches. When you're using your iPhone, going back to that example, it's looking to see if the person looking at the camera is the person it thinks it should be; it's trying to tell if it's you. But many of these services on these cloud-based platforms instead produce attributes: a probabilistic age range, sometimes ethnicity and race, and, importantly for our research, gender. And something we found really interesting when we looked at four of the five major competitors out there: gender was defined as male or female, and that's it. Google ended up looking really good in our study. We actually found out after the fact that in some meeting they couldn't think of really good reasons to include gender in their facial recognition algorithms, and so they opted to leave it out.
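The attribute-style output Jed describes looks roughly like the sketch below. This is a hypothetical response shape modeled on the general pattern of cloud vision services; the field names are invented, not any vendor's actual API.

```python
# Hypothetical face-attribute response from a cloud vision service.
# All field names here are illustrative, not a real API's schema.
response = {
    "faces": [{
        "bounding_box": {"x": 34, "y": 12, "w": 120, "h": 120},
        "attributes": {
            "age_range": {"low": 25, "high": 32},
            "gender": {"value": "female", "confidence": 0.93},
        },
    }]
}

for face in response["faces"]:
    gender = face["attributes"]["gender"]
    # The study's point: on most platforms "value" can only ever be
    # "male" or "female", regardless of the confidence attached.
    print(gender["value"], gender["confidence"])
```

Note that the service is not verifying who you are, as Face ID does; it is emitting guessed demographic attributes, each with a confidence score, from a fixed menu of categories.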
PAUL: When you're thinking about facial recognition software, what keeps you up at night? What is a little bit on the frightening side of this technology?
JED: I love my Black Mirror episodes, but I'm not left awake. I am really encouraged by the fact that this is a topic that's getting so much attention, because it means that really high-caliber research can be performed to help us figure out not only how this technology should work, but how it should be used. There are some disturbing scenarios. It recently came to light, for example, that China has been using large-scale facial recognition on its populace at large. What's hard to think through in these scenarios is that it's always a couple steps removed, right? So they were tracking people's faces; so then what, right? And the "so what" can be as frightening or not as you like. I think it requires that we stay really diligent and think very carefully about how we feel about facial recognition. And appropriately, right at this moment, at both the state level and the federal level, people are starting to consider: what kind of rights to privacy, what kinds of rights to our very faces do we have, and what kind of regulation is needed? Because there are good qualities to this technology, but that doesn't mean it shouldn't be regulated. Clearly, if we have a dystopian China scenario happening, then we should have some guidelines, some best practices, that we can all agree are the right ways to use this technology.

LISA: 
Of course, there’s a whole lot more information out there that a lot of us give up freely.
We wanted to hear from you on how you try to keep yourself safe in the new digital privacy landscape.
So, we sent Brainwaves’ Cole Hemstreet to find out.
[MOS PKG]
I think overall it's probably a lot less secure than we'd like to think.
I don't think anything's very private online and safe, so I don't put anything on there. I play games on my phone; that's about it.
 I don't give out my information as far as credit cards or any other information.
I trust that when I make a password I'm the only person who knows it, but I'm sure there's ways for other people to find out.
I don't even log out of my email or anything like that, or Facebook; it's always left open.
 I use different passwords for everything I use; I write them down, hard copy. 
Yeah, I mean, they make it really easy to just accept, so I just don't want to read it, so I don't.
I’m trying to, man, I’m trying to get into it, but at the end of the day I can read it and I still don't really understand what it says, because it's in... lawyer language.

LISA: 
With all that information of ours out there, how much does the government have? And how do they use it?
Should we be worried about it?
Our next guest is Lloyd Thrall, a scholar in residence at CU Boulder and associate director of the Technology, Cybersecurity and Policy Program.
He used to work in the Pentagon as a deputy assistant secretary of defense.
He’s been in some of those conversations about how and when to use private information from private citizens.
Brainwaves’ Andrew Sorensen sat down with him.
ANDREW: Lloyd Thrall, thank you so much for coming on the show.
LLOYD: Thanks for having me.
ANDREW: As we get into this topic of digital security: there's a lot of information that the government has about U.S. citizens. I presume we all vote and file our taxes; that's a lot of information right there, and I'm sure they have more than that. But should Americans be concerned about the types of information, the amount of information, that the U.S. government has about them?
LLOYD: If you'll forgive me for being a little bit philosophical: we're a country founded in liberty, and so Americans should always be vigilant about what they share and what the government is doing. This is not a new question in the American experiment. So the short answer is: should Americans be concerned? Always. That said, that concern should be based on real information, factual data, and an understanding of the real moral dilemmas at stake. In the conversations I have about this, I often find that people would like to maximize privacy and maximize security, and unfortunately that's not possible. So the real question, with neither one being optional, is how to hold these two concepts at the maximum value that we can, in a way that's transparent and in a way that's effective.
ANDREW: So to that point: what kinds of safeguards are in place within the U.S. government to make sure that our information is safe, that any government agency, you know, the NSA or ICE or whoever it may be, isn't overstepping the line, where we're getting into a question of, hold on, that's my private information?
LLOYD: Let me back up one half step before I answer that and set some context. We would love a world in which we didn't face the kinds of threats that we face, but that's not the world we're in, and the problem is getting worse in two directions. One, I think the threat of terrorism is on the rise: the number of actors, the frequency of potential attacks, the kinds of people who have the capability and intent to commit these kinds of attacks, all are on the rise. And two, they're operationally more effective and more lethal, particularly in the case of suicide terrorism, which is difficult to stop operationally and difficult to deter politically, for obvious reasons. So while the threat is higher, I think, contrary to conventional wisdom, collection of intelligence in the digital age is actually more difficult. This used to be a game of centralized telephone operators, with pretty good attribution, where you could tap unencrypted lines and have fairly precise targeting of the intel you were collecting. In a distributed digital internet, much of which is encrypted, where anonymity is rife, wrongful attribution is easy, and it's much easier to generate noise than signal, intelligence collection is actually more difficult. So we're in a world where the threat of terrorism is up and the ease of intel collection is down, and that's going to have an effect on the American question, and again, it's not a new one, of how we navigate the relationship between privacy and security. The only other thing I would add is that you'd perhaps also like a world where we could play defense against terrorism, but you can't protect every subway, every train station, every plane, every sporting event, every venue, every cafe. And if you tried to do that, one, the cost would be exorbitant, and two, you would militarize society and be far more intrusive. I see this when I travel in Europe; look at the gendarmerie in France. You'd have a much less open American society. So we don't have the option of playing defense. If you want to address threats, you have to predict them; if you want to predict threats, you have to penetrate terrorist networks and understand what they're doing. Again: to maximize security and minimize intrusion and minimize cost.
ANDREW: Now, when you say we're not playing defense (we talked about this a little bit before this interview), kind of what you're talking about is that no one's watching you walk down the aisle in Home Depot, picking up your garden hose or whatever you have going on there. There's not some Big Brother apparatus that's truly watching the American people.
LLOYD: There are so many threats, domestic and foreign, that could be tracked; intelligence is a hard game. People who want to do these kinds of operations are often either individual one-offs that are difficult to track and find, the guy in his garage who wants to make anthrax and put it in a shopping mall, or they are, as in the case of al-Qaeda or ISIS or actors like AQAP, very subtle, complex, and operationally crafty organizations. So with that context, and the sheer proliferation of digital data, connecting the dots isn't easy. And one other note: any sophisticated terrorist organization puts plenty of false traffic into the network. So there's not enough time to just go after a random American's email because you're sitting around at the desk with nothing to do; the intelligence community is quite busy tracking real threats. So one is the operational reality. The second is that I think if Americans could watch the operations of their intelligence community for a day, they'd probably be more upset at the amount of risk we accept than at the amount of intrusion into American lives. That's a matter of both structure, in terms of the laws regulating intelligence collection, and the professional ethic of the intelligence community. These are professional people who take this quite seriously. In training to be an intelligence analyst on the counterterrorism side of the intelligence community, there's great training on the Church Committee excesses; there's a deep cultural indoctrination into what we're meant to do. And the third is the formal oversight structure, which begins at the agency level, goes up through levels of the executive at the Office of the Director of National Intelligence, management and budget at the White House, and the national security staff, has a level of reporting to the HPSCI and SSCI on the Hill, and then has the FISA courts on top of that. So it's a very long-winded way of saying: no one wants to, has time to, or really could watch you wander down the aisle at Home Depot rather than tracking serious threats to the public good.
ANDREW: Now let's flip the script a little bit from where we started this interview, which was: should people be concerned about what the government has on them? Should the government be concerned about what people are doing with their own data? We saw the story last week about people signing away quite a bit of their information to this company, FaceApp, that's not even based in the United States.
LLOYD: To my mind, at least, I think this is far closer to the real issue affecting the security of American citizens. I would be far less concerned about what the intelligence community is looking at in your life, and far more concerned about petty criminals, organized criminal networks, and to some extent foreign intelligence agencies, although foreign intelligence agencies are not interested in the daily comings and goings of Americans either; they have real targets to track. But criminal networks are quite rife, and to a remarkable degree people freely submit this data. It's not stolen; it's given away through apps like this. I wish I could say two things: one, that people were making an informed choice when they give that away; I don't think that's true. And the second is that it's easy to distinguish between the sort of shady operations where caution would be advised and, candidly, major operations. The fact is that data is shared, and the best protection is not a sort of trust, candidly, in a different corporate body; it's being judicious in what you share in any digital domain.
ANDREW: Lloyd Thrall, thank you so much.
LLOYD: All right thanks a lot, cheers!

LISA: 
Join us next week, when we look back at the 45th anniversary of a moment that changed America.
[***nats***]
“as of noon tomorrow…”
LISA: 
I’m Lisa Marshall.
Today’s episode was produced by Paul Beique… Cole Hemstreet and Andrew Sorensen.
Andrew is our executive producer.
Sam Linnerooth is our digital producer.
See you next time, on Brainwaves.