On March 21, Google announced that it will begin a slow rollout of its chatbot, dubbed Bard, making the artificial intelligence platform available to a small number of users. The announcement follows last year’s release of a similar platform, ChatGPT, developed by OpenAI. These advanced chatbots can turn simple text prompts into tweets, poetry, songs, term papers and even books.

Some critics have raised concerns that students could use these platforms to plagiarize essays or college applications. Others worry they could put certain professionals out of work. 

CU Boulder experts are available to discuss what the future holds for these AI platforms and what it might mean for people. 

Learn more about ChatGPT and AI large language models (LLMs).

AI ethics & higher education

Casey Fiesler is an associate professor in the Department of Information Science who studies technology ethics and online communities. She’s posted several videos about ChatGPT to her popular TikTok channel and has explored how educators might use AI platforms to help students learn. 

AI strengths and weaknesses

Daniel Acuña is an associate professor in the Department of Computer Science and develops AI tools to mine scientific knowledge from vast, unstructured datasets. He recently discussed the strengths and limitations of ChatGPT in an article for The Conversation.

AI in business

Kai Larsen is an associate professor in the Leeds School of Business and co-author of the book Automated Machine Learning for Business. He used ChatGPT to write a children’s book about his pet bird, Dewey the Unhappy Cockatoo, and can discuss the future of AI in business, education and more.

AI and the future of K-12 education

Peter Foltz is a research professor in the Institute of Cognitive Science who has decades of experience designing “large language models”––a class of AI tools that includes ChatGPT and Bard. He is executive director of a $20-million National Science Foundation institute, which explores how AI might transform K-12 classrooms around the world.

AI and the law

Harry Surden, professor of law, is a former software engineer who focuses on the intersection of law and technology, including artificial intelligence and legal automation. He can explain why ChatGPT’s launch has surprised even AI experts and why it’s taken Google’s Bard so long to catch up. He can also discuss how LLMs have the potential to change society, from both addressing and contributing to discrimination to improving access to justice. Surden has written extensively on topics ranging from the ethics of AI in law to the values embedded in legal artificial intelligence.