
Exploring the ethics of AI: Can we use tools like ChatGPT consciously?

As adoption of AI tools speeds up on campuses worldwide, students, faculty, and staff may be tempted to simply adopt-and-go. But it pays to consider the ethical implications of how we approach such technologies.

[Photo: Nikolaus Klassen, profile portrait]

Nikolaus Klassen, a business analyst at Google, teaches Applied AI Ethics to undergraduate and graduate students at the ATLAS Institute. With a PhD in classics and a background in data processing and reporting, Klassen’s career has zigzagged between the humanities and the tech world.

We discussed the ethical implications of AI tools and how students are thinking about them. This conversation has been lightly edited for length and clarity.

If you were to distill the concept of AI ethics to a few major themes in our current moment, what would they be?

I think AI ethics specifically—and tech ethics more generally speaking—is often presented as a trade-off: You can use this tool for free, but we'll invade your privacy. For me that's the core of the problem, because very often it's not easy to break out of this trade-off. 

Do you look at utilitarianism and weigh the consequences, or do you set up unbreakable rules, as in deontology? Again, it’s almost like a trade-off.

So my core approach to AI ethics and tech ethics in general is: How can we ask better questions and find better frameworks that will bring us beyond this simple trade-off between the good and the bad? 

Is there a way to offer people better choices and to offer choices in a way that [helps us] make good decisions? Instead of letting our privacy be invaded all the time and giving away our data because the defaults are set up in a certain way, how can we dig deeper and find more root causes of bias in the data? 

For me, ethics is more about asking: How can I use these frameworks to expose structural problems and maybe make them better? Alleviate the problems, or solve them where possible, rather than just accepting that they're part of this bad trade-off.

Key ethics concepts
  • Utilitarianism - The theory that the most moral action is the one that maximizes good and minimizes suffering for the greatest number of people.
  • Deontology - The theory that there are absolute moral obligations that must be followed regardless of consequences, exceptions, or potential benefits.
  • Moral licensing - A phenomenon in which people justify an immoral action after having previously done something good.
  • Law of the instrument - A cognitive bias toward over-reliance on a familiar tool for solving problems, regardless of suitability.
  • Choice architecture - A deliberate design of a tool or environment that influences how people make decisions without directly restricting choice.

Why do you think your AI Ethics class is so popular among ATLAS and non-ATLAS students?

I think students are pretty concerned about AI. Is it going to take away all the jobs? It seems to be doing so for entry-level jobs, so there is a direct impact. And I see students honestly grappling with how they should use AI in their own studies.

People frame it as: Is AI my crutch or is it a good tool that I'm using? 

It's not an abstract academic phenomenon. If you look at your surroundings with open eyes, you can see the negative impacts of unethical AI use, so I think this is very concrete and applicable for students.

What do you hope students take away from spending a semester considering the ethical implications of AI technology?

For me it's really all about the questions. I want students to have a toolbox of questions they can ask, and to learn not to take a phenomenon at face value, be it a technology, an app, a use case, or whatever their friends are using. I want them to say, “Hold on a minute, let me ask some questions here,” and to have good questions to ask; to say, “How can I dive deeper into this problem?”, understand the root cause or the assumptions hidden there, and sharpen the analytical tools that cut through the noise.

How do you think about AI in general? A tool? A platform? A way of life?

As humans, we experience these gateway transitions where we change something and then open up a new world. Agriculture enabled cities and civilizations and the division of labor, with all the bad and all the good [associated with that]. We could suddenly support full-time poets and musicians and spend more resources on meaning-making and culture.

Then you have the mechanical engine and the revolution that came with it. We have a lot more mobility today. We don't have to work so hard. Our life expectancy has basically doubled since then. It has enabled all kinds of different ways of living.

[Photo: Nikolaus Klassen in front of a screen that reads Purpose (How), Goal (What), Means (How)]

I think AI is probably going to be the same. The amount of information that we have in the world today is far beyond what humans can process. Because there's so much information around, it's hard to cut through it. For better or worse, we need technology to help us process it. We cannot do so on our own anymore. I think this will be the next gateway. 

Most likely we will go through a valley, as we did after the agricultural revolution and the mechanical revolution, with unemployment rising or people becoming more and more hooked on digital technology. I feel like this is happening whether we want it or not.

The speed of change feels unprecedented. How does ethics apply to a phenomenon that is evolving so quickly?

I don't think it's ever going to be too late to make AI more ethical. If you think about the industrial revolution, the lives of workers got so much worse when they started to work in the factories than when they were working in the fields. It took 50 or 100 years or so to rectify that. And after that comparatively short time span, the life of workers was better than the life of farmers had been. We probably have stronger social ethics today than we had in the 18th century, so I don't think it's impossible to do the same for AI. I would expect it to happen.