Generating a New Curriculum
As AI enters the classroom, Leeds faculty balance ethics with innovation.
From Google search results to social media, artificial intelligence (AI) is everywhere—and it is here to stay. AI is the Wild West of technology, with few regulations and seemingly limitless applications.
“Since its release in November 2022, ChatGPT and similar language models have transformed nearly every aspect of business,” said Shannon Blankenship (Econ’98), Leeds Advisory Board member and principal at Deloitte Tax. “The technology has dramatically accelerated interest and investment in digital transformation from nearly every industry, the likes of which we haven’t seen since the internet became mainstream in the late 1990s.”
Like any new technology, AI has the potential to revolutionize the way we teach students in higher education, but there are also legitimate concerns about its accuracy and ethical use. Still, preparing students for a world with AI means equipping them to use it, and at Leeds, faculty and staff are forging ahead to make that happen experientially and ethically.
“It’s going to change things dramatically,” said Dan Zhang, associate dean for research and academics. “I think people, once they realize the potential, are going to be so excited about it.”
Meghan Van Portfliet, teaching assistant professor in the Social Responsibility and Sustainability Division, plans to let students in her fall 2024 World of Business courses use ChatGPT for an in-class debate. Split into two teams, students will craft prompts for ChatGPT. Then, once the students feel they have created a prompt strong enough to generate a robust argument, ChatGPT will debate itself using the responses generated by the students’ prompts.
“Because it’s hands-on in the classroom, it’s really transparent about what they’re doing, and it gets them to engage with it in a way that’s not risky from a standpoint of ‘Are we assessing what we want to assess?’” Van Portfliet said.
Jeremiah Contreras, teaching assistant professor in accounting, received the 2024 David B. Balkin, Rosalind, and Chester Barnow Endowed Innovative Teaching Award in part due to his adoption of AI technology in the classroom. In his Ethics in Accounting class, he uses ChatGPT to create a custom chatbot based on the Sarbanes-Oxley Act of 2002, which students can then use to learn about the law. He has also used ChatGPT to help students create team contracts. The result is a more immersive approach to learning.
Zhang and David Kohnke, senior IT director at Leeds, are collaborating on an initiative to incorporate AI into not only the Leeds curriculum but also operations and research. One major aspect of this initiative is an AI grant proposal process, which will award faculty stipends for implementing AI tools in their courses. On the research side, Zhang plans to organize training workshops and information exchange sessions so faculty can learn how others are using AI in research. He stressed that the goal of the initiative is to coordinate efforts across all areas of Leeds.
“The trick here is not to do this just in one class or as one person, but rather the charge is really to mobilize our faculty and staff,” Zhang said.
As head of the initiative’s education committee, Contreras is working to incorporate AI into the Business Core curriculum. He is also developing training seminars to help other faculty integrate the technology into their lessons. The goal, he said, is for all Business Core classes to teach students how AI is being used in industry and how to use it ethically.
“It’d be irresponsible not to address it,” Contreras said. “It would be like not showing students about Word and Excel when those first came into play.”
There are reasons for caution when using generative AI in education. As a spring 2023 report from Cornell University explains, introducing generative AI tools without setting guidelines can prevent students from developing foundational skills. A student who asks ChatGPT to draft their essays is not practicing necessary critical thinking skills. To Contreras, this is one reason AI skills should be taught. He emphasized that “generative AI is most effective as a partner” and that students should learn that even when leveraging AI, final decisions and outcomes remain their responsibility.
An April report from the National Institute of Standards and Technology also notes that AI models can reproduce systemic and individual biases, particularly when the datasets used to train these models are themselves biased, lacking data from marginalized groups, for example.
But Van Portfliet, whose research centers on ethical business practices, notes that bias is not an issue limited to AI. “The issue of bias within AI is not any more dangerous or risky than the bias in what material we select,” she said. “Bias is everywhere, and it’s something we have to be conscious of and try to overcome.”
She added that questioning AI output and crafting prompts more intentionally can help counteract bias in the technology.
When it comes to other ethical issues surrounding AI use in the classroom, such as students using ChatGPT to plagiarize, Van Portfliet believes students should be encouraged to use AI as a tool rather than a replacement for critical thinking. In some cases, instructors might need to rethink methods of assessment, such as essays, which can be fully completed by generative AI.
Overall, Contreras and Van Portfliet believe AI should be discussed openly with students, not demonized.
“There are right ways to use AI, and there are wrong ways to use AI,” Van Portfliet said. “It’s important to acknowledge that they both exist, but it’s not a black and white issue. It’s not right or wrong to use AI on assignments full stop. It’s right or wrong to use it on a specific assignment or in a specific way.”