Published: Feb. 18, 2020

Paul Beique: 
Welcome to Brainwaves, a podcast about big ideas produced at the University of Colorado Boulder. I'm your host this week, Paul Beique. There's no shortage of news coverage of artificial intelligence. From Amazon, to China, to robotic relationships.

Announcer: 
I have an appointment with Harmony, the world's first sex robot. “I am already taking over the world one bedroom at a time.” 

Paul: 
But if we've learned anything from the movies, artificial intelligence might have a few downsides. 

From “2001: A Space Odyssey”: 
“Open the pod bay doors, HAL.” 

HAL: 
“I'm sorry, Dave, I'm afraid I can't do that.” 

Dave: 
“What's the problem?” 

HAL: 
“I think you know what the problem is just as well as I do.”

Dave: 
 “What are you talking about, HAL?” 

Paul: 
That's HAL, the robotic antagonist from “2001: A Space Odyssey.” What HAL's talking about are the unforeseen limitations of AI. Let’s start there this week. Executive Producer Andrew Sorensen talked with research scientist Janelle Shane about AI’s shortcomings.

Andrew Sorensen: 
Janelle Shane, author of the book “You Look Like a Thing and I Love You.” This book is about artificial intelligence and specifically about some of the gaps in what artificial intelligence can do. What can artificial intelligence not do? Where are we limited right now?

Janelle Shane: 
We are limited to really simple, well-defined problems, because if you look at the raw mental power of the algorithms we're dealing with, it's more along the lines of what an earthworm can do. So trying to understand the broader world, the context like that, is a really hard thing for today's algorithms to do.

Andrew: 
Tell us about the title of that book, “You Look Like a Thing and I Love You.” Where did that title come from?

Janelle Shane: 
That was an experiment when I was trying to get a text-generating neural network to generate pickup lines, and this was the best it did. 

Andrew: 
That was the best pickup line it came up with. So, in your mind, what does that show us about where we're at with artificial intelligence? And what do we need to keep in mind? Because there are a lot of people out there who are very worried about what artificial intelligence can do, and on the other side we're seeing a lot of companies sell artificial intelligence solutions to a lot of the problems that we face. 

Janelle Shane: 
Yeah, so, I think a lot of people tend to think of AI as a kind of science-fiction-level AI, like Skynet and so forth. And what we have today is a lot less complicated than that. It's unlikely to get that complicated anytime soon. In fact, what we have to worry about a little bit more are algorithms that don't really understand what we're trying to get them to do and accidentally solve the wrong problem, copy human bias when they shouldn't, or, you know, goof up and not recognize a pedestrian when they should.

Andrew: 
How far off do you think we are with artificial intelligence from something that can be maybe a little more reliable? 

Janelle Shane: 
It depends. I mean, you can build something that's fairly reliable now at certain tasks. Like we've got them tagging photos in our cell phones, we’ve got them delivering search results, doing autocomplete. And so they work well for a lot of different tasks, but what we're running into is there are some things that we don't realize are very difficult like, you know, flexibly answering a customer's questions, until we try to build a machine to do it and realize, oh, there's a lot of complex stuff that we humans are doing without even thinking about it. 

Andrew: 
So to that end, what is your advice to people as they think about AI and as they are being hawked a lot of products that have artificial intelligence solutions behind them?

Janelle Shane: 
Yeah, I think it’s to remember that these algorithms can't make moral decisions by themselves and that they copy human behavior. So if the human behavior is flawed, these algorithms will copy it unknowingly.

Andrew: 
OK, thank you so much.

Janelle Shane: 
 Thank you so much. 

Paul: 
Janelle Shane is a research scientist at Boulder Nonlinear Systems, and she's the author of a book on artificial intelligence called “You Look Like a Thing and I Love You.” 
It's true that machines can only do what people want them to do, and they can't decide what is ethical and what is not. When that machine is designed to kill, the prospects become very frightening very quickly. The New York Times recently published a story and a documentary on AI’s increasing role in the military. Our next guest is Jonah Kessel, the Times’ director of cinematography. He produced the documentary. It starts with a shot of Kessel sitting in an orange leather chair against a stark black background. He's looking at his phone. 

Jonah Kessel narration: 
I love that I can unlock my phone with my face. And that Google can predict what I'm thinking. And that Amazon knows exactly what I need. It's great that I don’t have to hail a cab or go to the grocery store. Actually, I hope I never have to drive again. Or navigate, or use cash, or clean, or cook, or work or learn. 

But what if all this technology was … trying to kill me? 

Paul: 
Jonah, welcome to Brainwaves. 

Jonah Kessel: 
Thanks for having me. 

Paul: 
And, full disclosure, Jonah was a student of mine years ago at Saint Michael's College outside of Burlington, Vermont. In the documentary, you travel to a Russian arms expo where some AI-equipped weapons are on display. Did anything surprise you about what you found there? 

Jonah Kessel: 
I think the thing that surprised me most was that they were kind of, first, showing them at all. Some of these weapons are considered in a morally gray area and specifically at the Kalashnikov booth. Kalashnikov, as you know, is like a world-famous icon of killing. You know, their guns are — the AK-47 is probably one of the most infamous guns in the world. And when we saw this gun that they had there, it was a turret hooked up to facial recognition software. And it took me a minute to understand it. I was like, huh. It was on display and I saw the turret, I saw the machine gun on it. And I saw what it was hooked up to, and it was pointing at me. And after it kind of registered what was going on, I was like, wow, this is amazing. And so I immediately went, you know, to the sales people and to their PR people. I was like, hey, can we ask you about this weapon? We’re really curious how it works. 

And, you know, they chatted for a second and eventually they were like, “Absolutely not. Go away.” And we’re like, well, you know, we have press passes. We’re here you know as legitimate members of the press. You know, we’d really like to speak to somebody about this. And they said, well, you know, come back in an hour. And we came back in an hour, and they said come back tomorrow. And we came back tomorrow, and then it was gone. 

So as soon as we started asking about this weapon, which clearly drew our interest, not only did they not feel comfortable talking about it, they felt the need to put it away entirely. And I think that speaks volumes to its perceived threat and its perceived use. 

Paul: 
You make the point in the documentary that we've been here before with military technology. The Gatling gun was originally designed to save lives. Nuclear, chemical and biological weapons were supposed to be a deterrent, but they've all been used to kill people. Do you see this piece as a kind of warning not to go down the same path again with artificial intelligence? 

Jonah Kessel: 
Yeah, absolutely. I think, you know, one of my passions in journalism is to raise red flags. You know, this piece is more on the analysis side than anything else. But it's not up to us as journalists to say this is good or bad. But it certainly is up to us to say, “Hey, this needs more attention.” And if we look at lessons from the past, you know, such as the Gatling gun, clearly our inventions don't always have the intentions we predict. And in the case of autonomous weapons, you know, the AI scientists are screaming don't do this, and yet we are not listening to them. 

Paul: 
The United Nations does not come off particularly well in this documentary. They’re portrayed as talking in circles about definitions and rules while tech companies and nations are rapidly developing autonomous weapons. What struck you about that dichotomy? 

Jonah Kessel: 
Yeah, so I went to, I think it was five days of meetings at the United Nations in Geneva. But when I was in there I started noticing it was the pleasantries which first started to really get to me. That, you know, whenever someone started talking, the first 30 seconds were, you know, left for pleasantries: Thank you for having us, thank you for letting us speak, Your Excellency, you know. And the same thing would happen in return when the chair would talk. And the amount of time that was being wasted, kind of, you know, shaking each other's hands, if you will, really started to stand in juxtaposition to what I had seen the previous couple of days at that weapons fair. I'd been talking to technologists and developers about all this stuff and all the things they're working toward, and all of a sudden you show up at, you know, the highest level of international governance, and people are just thanking each other. And the scene started to build in my head while I was there in the meetings. I could see what I wanted to do with it and how to juxtapose these things to show we're not acting fast enough, certainly not at a bureaucratic level. 

Paul: 
Children actually play a pretty significant role in this documentary. You show several scenes of children examining, almost playing with, these weapons systems. For me, those were some of the most poignant scenes. What was the thought behind including children in a story about AI in the military? 

Jonah Kessel: 
Certainly, when thinking about the future, there's probably no more potent symbol than children. They were also intended to act as a symbol of cultural differences. So, this is in Russia, and I think this is an important part of the story, which is a little bit subtle: we just don't all have the same values. And that can be pretty tricky if, you know, in the United States, say, we're having these conversations about ethics as it relates to weapons, but those same conversations aren't necessarily happening in other places. And if our value systems are so different, perhaps one country will make these weapons, whereas another won't because of, you know, its own values. And that creates a kind of unevenness to warfare, which could potentially be dangerous. It's actually one reason why people don't want to stop making these things: they're afraid that if they stop making them, their competitor or their adversary might continue to make them, giving them an advantage should they go to war. 

Paul: 
One of your subjects makes the point that we really don't have to wait for this technology; it's already being created by commercial tech companies. He also says that we can teach military machines to be legally right, but getting it morally right is a lot more difficult. Can you tell us about the example he used? 

Jonah Kessel: 
Yeah, so, Paul Scharre is a former Army Ranger who became a policy guy in the end. He's in a think tank in D.C. And in the story, Paul describes a young girl — she could have been 4 or 5 years old — who was spying on him and his teammates. And you know, by the rules of law, by the rules of war, she was a valid enemy combatant. The rules of war don't have an age limit on who's a combatant, so she was a valid target. And the point he makes with this young girl he sees in Afghanistan is that, had he been a machine programmed by algorithms to follow the rules of war, that machine would have shot this little girl. Now, he knew that was wrong, and he didn't shoot her. But could you program a machine to know the difference between right and wrong, even if that means breaking the law? And I think there are a couple of really interesting points here. One is certainly that what's right and wrong is not always clear. Another is that sometimes doing the right thing means breaking the law. And a third is just the uncertainty that is required for judgment. Paul once told me: The entire time I was in Afghanistan, when someone came up to talk to me, I could never be totally sure if this person was just a civilian who wanted to say hi, or maybe they didn't understand me, or maybe it was actually someone who wanted to kill me. And I was never quite certain. 

And this is the reality of modern warfare today. You know, we're not living in World War I or World War II times, when you could identify your enemy by their helmet or their uniform. War is much different now, and the battleground is not as clear. So these are real challenges for AI, if we think about making machines that are going to carry out warfare and follow rules, because they're all going to be governed by rules which we give them. 

Paul: 
Jonah, thank you very much for your work and thanks a lot for joining us today on Brainwaves. 

Jonah Kessel: 
Great, thanks for having me. 

Paul: 
Jonah Kessel is the director of cinematography at The New York Times. You can find links to the documentary in the podcast description. 

Paul: 
Facial recognition as part of a weapons system might sound frightening, but even the facial recognition in phones and on Facebook can have a hard time figuring out who we are. Executive Producer Andrew Sorensen discussed the weaknesses of facial recognition — particularly around gender identity — with Morgan Klaus Scheuerman, a PhD student in information science at CU Boulder. 

Andrew: 
Morgan Klaus Scheuerman, thank you and welcome to the show.

Morgan Klaus Scheuerman: 
Yeah, thanks for having me.

Andrew: 
So we're talking about artificial intelligence. You've done some research into artificial intelligence and facial recognition. How commonly is artificial intelligence used in facial recognition software? 

Morgan Klaus Scheuerman: 
Well, I would say that facial recognition and facial analysis more broadly is just an instance of AI, so all facial analysis is AI, I guess I would say. 

Andrew: 
Where is this technology currently? In your research you found some pretty serious shortcomings.

Morgan Klaus Scheuerman: 
Well, I guess I can say for some listeners that are maybe not as familiar with this technology that maybe facial recognition is the most familiar use case people know. So, how you unlock your phone, how you tag your friends on Facebook. We're all kind of familiar with that instance of facial recognition. But in my research I looked at facial classification. So that’s when a system will analyze aspects of an image, aspects of a face, and then try to classify certain characteristics of that face, including things like gender, ethnicity, age. Those sorts of features.

Andrew: 
And previous research showed that there are a lot of issues around minority groups, particularly women with darker skin. Is that right?

Morgan Klaus Scheuerman: 
Yes, so, previous research has been done to show that women with darker skin tend to be misclassified as male more often than people with lighter skin types in general. 

Andrew: 
And then what you found in your research, can you explain a little bit of that? 

Morgan Klaus Scheuerman: 
So, in my research, I looked at gender across different gender identities. So I looked at cisgender men, cisgender women, transgender men, transgender women, and nonbinary genders such as genderqueer, agender. And so we found that facial classification broadly misclassifies trans people far more than it misclassifies cisgender people. And then these systems aren't built to recognize anything beyond male or female, so it's actually not possible for them to accurately classify anything outside of the gender binary.

Andrew: 
So the population that identifies as trans is still, I think, pretty small, somewhere in the single digits as a percentage of the whole population. Why should the average person really take stock of this and be concerned?

Morgan Klaus Scheuerman: 
Yeah, so, if we're talking about why any person on the street should be concerned: on one hand, these systems are really encoding what a normal woman and a normal man should look like. So it's very limited in the way that it views gender for every person that it comes into contact with. So if you fall outside of that, like maybe you like to wear your hair short more often, or you just have what a computer would see as a more masculine appearance as a woman, you may be misclassified. So I find it very interesting. This is something that I also tested on myself. When I interact with people in real life, when I talk, generally people will say “he” or “sir,” but these different systems actually classified me differently. So, like, Amazon classified me as female, and Microsoft classified me as male. So you can see that, depending on the system you're interacting with, it might see you as a totally different gender. So it could affect any person, really.

Andrew: 
I know our audience can't see Morgan right now. He does have long hair, but he's wearing flannel and a NASA shirt.

Morgan Klaus Scheuerman: 
It really depends on the day. I’m definitely one of those people, too, who’s more maybe genderfluid, as you would say. Like, I wouldn't consider myself as maybe falling into the norm like short hair, wearing jean jackets every day or something. 

Andrew: 
But once someone interacts with you, it's not hard to tell that you're a man.

Morgan Klaus Scheuerman: 
Well, I think it's very interesting, too. These systems, they're really trying to be as good as humans are at this, but they don't have as many context clues, and then there's kind of many notions around gender now that your appearance doesn't necessarily map to your gender identity. So me and you, we could have a conversation about that, right, and I could say, well, this, the way you perceive me is not the way that I feel, right? But you can't really do that with a computer, and there are no opportunities right now for you to even intervene in most of these systems. A lot of us don't even know it’s happening.

Andrew: 
And what are some of the problematic use cases where facial recognition is being used, and where the limitations you found become an issue?

Morgan Klaus Scheuerman: 
Yeah, so, I would say that the majority of how it's being used is in either media or marketing. So, in that case it's more about who you are misrepresenting, and who you are erasing from the reality of who's interacting with products, or who's on screen or whatever. In other cases, facial recognition or facial classification is being used in, like, security scenarios or policing. And usually that's more on the recognition side, the one-to-one individual matching use case. But it is interesting to consider that maybe your documentation, your ID, or what's been recorded in a database by police doesn't match your current gender identity. So I could see that being very problematic and very dangerous for people who already face higher levels of violence; trans people face higher levels of violence than the general population. 

Andrew: 
So we were talking a little bit before this interview, and you’ve had some pretty big companies who were involved in this space reach out to you to learn a little bit more about your research. Who has been reaching out to you and what have they been asking?

Morgan Klaus Scheuerman: 
So, I don't know if I want to say which companies have been reaching out to me on a podcast, but basically some bigger tech companies that are using gender classification in their facial analysis software have reached out to understand what direction they think they should be moving in, in terms of gender classification, and in many ways to talk to us about the use cases their clients are currently using it for. But I think the companies, and the people in the companies, actually are thinking a lot about this problem. As trans rights and different views of gender become more visible in society, I don't think these companies have been unaware of this issue.

Andrew: 
So to that end, that they are reaching out, they are looking at the problem, does that give you some hope for the future of facial recognition that it might be a little more accurate and not create some of these problematic scenarios where, you know, maybe you're being marketed women's products when you identify as a man?

Morgan Klaus Scheuerman: 
Yeah, I think that's actually an interesting question, because I'm not necessarily a huge proponent of facial recognition anyway. So in terms of making it more accurate, there is a lot of concern from different marginalized groups, not just trans people but also people of color, that the more accurate it is, the more dangerous it may be to those groups, especially in terms of policing and surveillance and things like that. I do think that there are some use cases that are promising. So, if we're looking at representation or bias mitigation using these kinds of tools and seeing, like, oh, how many people of a certain gender are shown in television shows, or how can we mitigate bias against certain people of color, I think that is useful. I personally think that the best step forward is actually in policy and less about diversifying data. So I would like to see more discussion around how these systems should be used and which use cases should be regulated.

Andrew: 
OK, Morgan Klaus Scheuerman, thank you so much.

Morgan Klaus Scheuerman: 
Thank you so much for having me. 

Paul: 
Thanks for joining us this week on Brainwaves. I'm Paul Beique. If you liked what you heard or have an idea for a topic we should cover, we want to know. You can now email us at brainwaves@colorado.edu. Executive Producer Andrew Sorensen and I produced this episode. Join us next week when the topic is music, from the Beatles to Gen Z. 

Dave from “2001: A Space Odyssey”: 
Hello, HAL, do you read me? 

Hello, HAL, do you read me? 

Do you read me, HAL? 

Do you read me, HAL? 

Hello, HAL, do you read me?