Ethical Considerations of Generative AI

The ethical considerations associated with generative AI are diverse and far-reaching, and research on AI ethics is emerging at a rapid pace. The goal of this resource is therefore to provide a succinct snapshot of the most prevalent ethical issues in an educational context that may directly affect your teaching philosophy and the student experience. Whether you reject the notion of using AI in the classroom, are an AI enthusiast, or are somewhere in between, we invite you to consider student experiences and the ethical implications of generative AI in teaching and learning.
For a deeper dive into the Ethical Considerations of Generative AI, consider completing the AI Literacy Foundations Canvas course developed collaboratively by the Office of Information Technology and the Center for Teaching & Learning.
Understanding and Navigating AI Use in the Classroom
As generative AI tools become increasingly common in higher education, it is important to create clear, thoughtful, and student-centered approaches to their use in the classroom. This resource explores key considerations for fostering transparency, building trust, and supporting student autonomy, particularly in large-enrollment courses. Each section below offers guidance on setting expectations, avoiding reliance on AI detectors, respecting student choice, and addressing broader impacts such as language homogenization and predatory marketing, along with actionable steps for your teaching.
Setting Expectations and Building Trust
The variability of course-level AI policies makes it challenging for students to distinguish acceptable uses across their classes. It is therefore crucial to communicate your AI use policy frequently and in multiple places. In a 2025 study investigating undergraduate perspectives on AI at CU Boulder (unpublished), 73% of respondents reported wanting guidance from their professors about the acceptable use of AI in their class. Being transparent about why, when, and how students are allowed to use AI for class activities or assessments is particularly important.

The University of Sydney developed a Two-Lane Approach to balance AI-immune and AI-incorporated assignments in the classroom. For ideas on making assignments AI-resistant, explore the suggestions on the CTL’s AI and Assessment web resources. Including an AI use statement in your syllabus is an essential component of communicating your expectations for AI use to students in your course. Members of the Boulder Faculty Assembly created an AI Use Syllabus statement guide as a starting point for crafting your own statement, and the CTL’s AI Assessment Scale page offers more detail on how to meaningfully integrate the different suggested levels of AI use into your assessments.
Having a dialogue with students about your expectations for AI use (or non-use) in your classroom not only opens up communication but also builds trust. In addition to discussing your expectations of students’ AI use, disclosing your own use of AI fosters transparency in the classroom. As a New York Times article found, students may perceive you as hypocritical, however unintentionally, if you use AI without disclosing it while limiting or prohibiting their use of these tools in your class. Moreover, using AI to assign a grade or generate feedback can give students the impression that faculty are not actually reading their work; as such, we do not recommend relying solely on AI to assign grades. To avoid such situations, adopt an open disclosure approach in your teaching. In the unpublished study referenced above, 78% of CU Boulder student respondents reported that they wanted faculty to disclose their use of AI. Disclosing your use of AI starts the reciprocal process of building trust, and modeling the behaviors you would like to see in the classroom sets a standard of mutual respect between you and your students.
Pro tip: Create transparency in the classroom by co-creating an AI policy with your students (see the Teaching Tips on AI Ethics section below for how to approach this).
Respecting Student Choice
While students use generative AI in varying ways, we also know that some students abstain from using these tools altogether. The campus-wide survey of undergraduate students referenced above revealed that almost 9% of respondents do not use AI at all, and nearly 12% report not using AI on academic assignments. When asked why, 44% of these students cited ethical concerns. If you integrate AI use into your courses, recognize that not all students will feel comfortable using AI tools for academic purposes. Just as we recognize and accommodate religious beliefs and observances, we should allow students to complete tasks without relying on AI or provide alternative methods for completing academic assignments. This respect also extends to refraining from running students’ work through an LLM, AI detector, or any other AI-enhanced tool without their explicit permission.
Avoiding Reliance on AI Detectors
As real or perceived use of generative AI by students grows on college campuses everywhere, it can be tempting for educators to use AI detectors to check the authenticity of student work. “Student cheating” and the use of AI detectors to police student use is one of the more divisive generative AI issues on our campus. CU Boulder educators hold varying sentiments about AI detectors: some use them as tools to spark dialogue with students about unauthorized generative AI use, some check work with detectors as a standard practice, and others question the fairness of using detectors and the erosion of student trust their use can engender.
Currently, CU Boulder does not support any AI detection tools because of concerns about both inaccuracy and privacy. For example, Turnitin’s AI detector was disabled by OIT after it failed evaluation due to inaccurate results. Some educators have consequently resorted to other online AI detection tools, which are subject to similar limitations. Although the research literature on the accuracy, reliability, and bias of AI detectors is dynamic and will inevitably expand, initial findings indicate that AI detectors suffer from non-zero false positive rates (incorrectly classifying human writing as AI-generated), often much higher false negative rates (incorrectly classifying AI-generated writing as human-written), are relatively easy to “fool” (writers can evade detection through AI humanizers, AI paraphrasers, or manual editing), and appear to be biased against non-native English writers (Weber-Wulff et al., 2023; Liang et al., 2023; Perkins et al., 2024).
Regarding false positives specifically, it is important to acknowledge that even very low false positive rates can compound across multiple assessments. For example, assume an AI detector has just a 1% false positive rate. For a student who has written their own final paper, there is only a 1% chance of their writing being incorrectly flagged as AI-generated (though even a 1% chance may be considered too high, given the potentially devastating consequences for a student wrongly accused of cheating). But what if, in this writing course, the student is asked to write four papers: two essays, a midterm paper, and a final paper? In that case, there is a 3.9% chance that their writing on at least one of the four papers will be incorrectly flagged as AI-generated. Over multiple courses every semester, a student’s probability of having their writing incorrectly flagged becomes relatively high. Indeed, according to the campus-wide survey mentioned above, 10% of CU Boulder undergraduate respondents reported having had an assignment identified as AI-generated even though they did not submit any exact AI output in their assignment.
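As a minimal worked illustration of this compounding effect (assuming each paper is screened independently and a hypothetical false positive rate of $p = 0.01$ per paper, the figures used above), the chance of at least one wrongful flag across $n$ papers is:

$$
P(\text{at least one false flag}) = 1 - (1 - p)^{n}, \qquad 1 - (1 - 0.01)^{4} \approx 0.039 \;(\text{about } 3.9\%)
$$

Under the same assumptions, a student whose work is screened on 20 assignments over a semester (a hypothetical figure) would face $1 - (0.99)^{20} \approx 0.18$, nearly a one-in-five chance of at least one wrongful flag. Actual detector error rates vary by tool and text type, so these numbers are illustrative only.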
An additional concern with using any AI detection tool is the potential for data security and privacy violations. Running student-generated content through AI detectors can create security risks and potentially violate the student data privacy protections mandated by FERPA. Students may also be uncomfortable with their intellectual property being released into the “black box” of LLMs, with no control over how that data is used in the future (Mathewson, 2025).
To avoid the temptation to use an AI detector, we suggest proactive approaches with your students, such as restructuring assessments to be AI-resistant. Above all, if you are concerned that a student used AI in a way that shortcuts their learning, invite them in for a conversation so that you can better understand their perspective and their process, rather than rushing to accuse them of academic dishonesty or, worse, failing them without due process.
Language Homogenization
It is well known that the large language models powering the most widely used generative AI tools, such as ChatGPT, are trained predominantly on standard English rather than other dialects and languages. Because large language models use a predictive algorithm to string words together, their output tends to lack variety and complex sentence structure. Emerging studies show that frequent use of such tools to generate, augment, or edit human-created content can have a homogenizing effect on AI outputs and, worse, on the humans behind the outputs. Information scientists warn that adopting language and ideas suggested by AI trained on a biased dataset has the potential to subtly change and constrain what is deemed normal and appropriate in public discourse (Chayka, 2025; Noble, 2018). The homogenizing effect of generative AI is reductive and can lead to a demeaning erasure of dialectal and cultural nuance.
Predatory Marketing
Profit-driven corporations, including those peddling generative AI-based writing tools and “homework helpers,” aggressively market their products and services to college-aged students. Onboarding young people at this critical stage of identity development has the potential to capture not one but three consumers at once: the current-day student, the student’s parents or guardians, and the student as a future consumer. This is one reason campuses like CU Boulder are often overwhelmed by educational technology product pitches, and why we have the Office of Information Technology to vet those products for relevance, safety, and security. Social media influencers amplify this marketing effect by touting homework hacks and shortcuts that college-age students can “leverage” using widely available AI tools.

Popular contract-cheating and AI platforms (e.g., Chegg, ChatGPT) frequently co-opt the language of “helping” students and providing study support. These services are typically marketed to students as free subscriptions through verified university email addresses, particularly during critical periods such as midterms or finals week, which can lend them a veneer of officiality and legitimate support. According to Gallant and Rettinger, because instructors often encourage students to ask for help when needed, students may resort to these tools when struggling without realizing they are violating course policy. Alternatively, despite a clear understanding of the ethics of using such tools, students may justify their use as a means of seeking help, thereby neutralizing any ethical dilemmas. Being cognizant of the powerful impact of these predatory marketing strategies can help us develop empathy for students who use AI in their learning, and it can motivate the proactive integration of discussions on AI ethics into your classes.
Teaching Tips on AI Ethics
Co-create an AI-use policy wherein students discuss their values around fairness, due process, and learning, and identify behaviors and uses of AI that would uphold those values versus those that would undermine them.
- For first-year students, this may be the first time they’ve done this kind of work, so expect them to be reticent or to think the exercise is ‘weird.’ Having students participate in the policy-making process gives them agency and responsibility and promotes ownership of their actions.
- Regularly revisit these norms through discussions, especially before or after major assessments, and consider linking the policy to honor pledges students may be asked to sign.
- Have students read a paper or article, watch a documentary, or converse with an AI chatbot about the ethical concerns listed above, and then reflect on the implications for their own use of AI.
Integrate conversations on the ethics of using AI, either specific to your disciplinary context (e.g., scientific publishing, copyright, legal professions, film-making, climate action) or through more general case studies, to deepen student understanding of the value of ethical conduct.
Resources
Artze-Vega, I., Darby, F., Dewsbury, B., & Imad, M. (2023). The Norton Guide to Equity-Minded Teaching. W. W. Norton.
Broussard, M. (2023). More than a glitch: Confronting race, gender, and ability bias in tech. MIT Press.
Center for Teaching and Learning. (n.d.). Educational Technology Research Assistants. University of Colorado Boulder. https://www.colorado.edu/center/teaching-learning/programs-services/undergraduate-student-programs/educational-technology-research-assistants-etra-0
Debelius, M., Chehak, M., Le, K., Oh, S., Lyons, S., Kim, G., Nutting, H., & Holtschlag, J. (2025). Designing an AI policy. International Journal for Students as Partners, 9(1), 151–160. https://doi.org/10.15173/ijsap.v9i1.5836
Killian, K. (2023, July 21). 5 steps to update assignments to foster critical thinking and authentic learning in an AI age. Faculty Focus. https://www.facultyfocus.com/articles/effective-teaching-strategies/5-steps-to-update-assignments-to-foster-critical-thinking-and-authentic-learning-in-an-
Mills, A. (n.d.). Micro-lessons with AI [Google Slides presentation]. https://docs.google.com/presentation/d/1RP3OpnIk2elJnAu21bM1E--u5QTkxUXb/edit?slide=id.g28cd987cbf7_0_1093#slide=id.g28cd987cbf7_0_1093
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Timmerman, E. (2024, April 29). AI is forcing universities to rethink plagiarism. Vox. https://www.vox.com/the-gray-area/418793/chatgpt-claude-ai-higher-education-cheating
UNESCO. (n.d.). Ethics of artificial intelligence. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
University of Colorado Boulder Libraries. (n.d.). Ethics, policies & copyright – Generative AI CU Boulder. https://libguides.colorado.edu/c.php?g=1343731&p=10074192
Want to Stay Up-to-Date on the Center for Teaching & Learning’s AI Events & Resources?
Email Blair Young to be added to our Teaching & AI listserv.
If you prefer to self-enroll, log in to Google Groups and search for Teaching & Learning with AI Community of Practice.