Supporting Student Learning while Reducing Overuse of Gen AI: Checklist of Evidence-based Strategies


Since the rapid proliferation of ChatGPT and other generative AI tools, campus educators have expressed concerns about students overusing generative AI–with potential negative impacts on student learning outcomes and growing numbers of Honor Code referrals. 

At the CTL, we empathize with these concerns. Further, we acknowledge that this is a complex problem without a simple solution. We recommend a multi-pronged approach–leveraging multiple evidence-based practices to increase student motivation, reduce uncertainty and anxiety, support student learning and self-efficacy, and otherwise create conditions that encourage students to do the hard work of learning for themselves, rather than shortcut their learning with AI.

The following checklist is designed to help you identify key intervention points in your courses where adjustments to course design, pedagogy, or assessment may better support student learning, while reducing their likelihood of overusing gen AI. Many of these practices were inspired by The Norton Guide to Equity-Minded Teaching, which is available for free to all educators.

As always, small changes can have big impacts. Rather than attempt to implement all of the strategies listed below, we encourage you to focus on just one or two changes you can make this semester. We will continue updating this checklist as we become aware of additional strategies and resources.

 

Emphasize Relevance to Increase Student Motivation


Why Relevance Matters

Relevance refers to the extent to which students can identify themselves–their goals, interests, communities, etc.–in whatever it is they are being asked to learn or do in your course (see The Norton Guide to Equity-Minded Teaching). Relevance is essential to motivating students to attempt new tasks and to sustain their efforts in the face of challenges. On the flipside, research has found that when students do not perceive a task as relevant to their own interests, they are more likely to cheat (e.g., see Pulvers & Diekhoff, 1999; Schraw et al., 2007; reviewed by Murdock & Anderman, 2006).

A concept closely related to relevance is authenticity–as in authentic assessment. Conventional assessments are often rote, instructing students to use a particular concept, method, or tool to solve a straightforward problem. Students are then evaluated based on whether or not they produce the correct solution. In contrast, authentic assessments challenge students to decide for themselves how to combine various skills from class to solve a problem; to grapple with “messy” problems and constraints they are likely to encounter in their future academic, professional, or personal lives; and to explicate and justify their problem-solving process.

  Example of Traditional vs. Authentic Assessments:

As an example, a traditional assessment in an introductory statistics or research methods class might require students to perform an independent t test on sample data and then report the t and p values. In contrast, an authentic assessment may instead present students with a realistic research question and dataset–for example, “On average, do second-year students take more credits than first-year students at CU? You have the following data for every undergraduate student enrolled at CU in Spring 2025: year at CU (first-year, second-year, third-year, etc.) and number of credits completed that semester.” The assessment could then ask students to identify which statistical test(s) they would use, explain their reasoning for choosing that test, and describe at least one way in which their finding would be limited given the data to which they had access.
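For instructors who teach statistics with code, the traditional, rote version of this task can be sketched in a few lines of Python. The function and the sample numbers below are purely illustrative (not drawn from any real CU data); in practice, students would typically use a library routine such as scipy.stats.ttest_ind, which also reports the p value.

```python
import math

def independent_t_test(sample_a, sample_b):
    """Two-sample t test with pooled variance. Returns the t statistic and
    degrees of freedom; a p value would come from a t table or a library."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Sample variances (dividing by n - 1)
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    # Pool the variances, weighted by each sample's degrees of freedom
    pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
    return t, n_a + n_b - 2

# Hypothetical credit loads for two small samples of students
first_years = [12, 15, 13, 14, 12, 16]
second_years = [15, 16, 14, 17, 15, 18]
t, df = independent_t_test(first_years, second_years)
print(f"t = {t:.2f}, df = {df}")
```

Note that the rote version stops at “report t and p,” whereas the authentic version above asks students to question whether a t test is even appropriate when they have data for the entire population rather than a sample.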

In sum, learning experiences should help students see the relevance of the course content to their own lives. This may include designing authentic assessments that help students transfer knowledge and skills from class to “real-world” problems and situations. Emphasizing relevance (and authenticity) should increase students’ motivation to do the work themselves, thereby reducing their likelihood of overusing AI.

Strategies for Emphasizing Relevance

  • Design assessments to be more authentic–open-ended, relevant to “messy” problems with realistic constraints, and focused on the process over the product. Learn more about designing authentic assessments from the Center for Innovative Teaching & Learning at IU Bloomington.

  • Create a student-centered syllabus statement in which you present the course learning outcomes to students in a way that helps them understand what specific skills, knowledge, behaviors, and experiences they will acquire through this course–as well as how these experiences will help prepare them for their future personal, civic, and professional lives. See our Inclusive Syllabus Checklist for more detailed recommendations. Consider incorporating this syllabus statement into an icebreaker activity in the first week of class (e.g., give students time to read it and then discuss what they are most excited to learn).

  • Administer a start-of-semester survey to learn more about your students’ goals, hobbies, communities, real-world issues of concern, and lives. Throughout the semester, highlight connections between learning activities and students’ own interests. Learn more about creating and administering a start-of-semester survey. (Note: This can be modified to instead take the form of an in-class icebreaker or asynchronous discussion board.)

  • Incorporate brief (2- to 5-minute) reflective writing activities that encourage students to reflect on what they are gaining through this course and discover novel connections to their own interests. This should be done on a regular basis, such as at the end of each class session, week, project, or unit.

  • Sample prompts:
    • “What is the most valuable concept or skill you learned through this [class session, unit, project]? How will it help prepare you for a long-term goal you have?”
    • “How does what we discussed in class [today, this week, this unit] relate to a real-world issue you care about or to a challenge you’re navigating in your own life?”

Enhance Transparency to Reduce Student Uncertainty, Anxiety, and Potential Procrastination


Why Transparency Matters 

Transparency refers to the extent to which information has been made clear, explicit, and accessible to all students (see The Norton Guide to Equity-Minded Teaching). Enhanced transparency has been shown to reduce student uncertainty and anxiety, both of which may otherwise contribute to procrastination. Simply put, when students know what is expected of them, they can dive into new tasks without hesitation and with a sense of confidence. On the flipside, research has found that, when transparency is lacking, students are more likely to cheat (Beasley, 2014; Bretag et al., 2019). In sum, enhancing transparency should reduce student uncertainty, anxiety, and procrastination–as well as their likelihood of overusing AI.

Strategies for Enhancing Transparency

  • Before asking students to get to work on a new activity, assignment, or assessment, use the TILT framework to make clear and explicit the purpose, task, and criteria (TILT stands for Transparency in Learning and Teaching; Winkelmes et al., 2016). Learn more about TILTing learning activities.

    • Clarify the purpose:

      • What knowledge, skills, and experiences will students practice or gain through completing this activity, assignment, or assessment?

      • How will these knowledge, skills, and experiences help prepare them for their own longer-term goals, future courses in this academic program, graduate or professional programs, etc.? (Note: This also reinforces relevance.)

    • Clarify the task:

      • What precisely is the task at hand?

      • How should students get started?

      • For more complex projects, what are students expected to do step-by-step, and when should they have completed each step to stay on track?

      • What are some common mistakes and stumbling blocks, and what can students do to support their success if they get stuck?

      • How, when, and where can students get feedback and access other relevant resources?

      • What are the relevant policies, including policies regarding generative AI use and documentation?

    • Clarify the criteria: On what specific standards or criteria will students’ work be evaluated?

      • Share a rubric that explicates precisely how student work will ultimately be evaluated, or consider co-creating the rubric with the students. This rubric should be shared when the work is first assigned, not after students have submitted their work for evaluation. Importantly, there are many different types of rubrics that can be tailored for different learning outcomes and course contexts!

      • In addition to (or instead of) a rubric, consider providing samples of A-level, B-level, and C-level student work (or samples that “exceed expectations,” “meet expectations,” or “need revision”–whatever levels are aligned with your rubric or grading approach). Importantly, only share samples of real student work if you have obtained their written permission in accordance with FERPA.

      • Especially for more complex projects, incorporate student self-assessments and/or peer assessments to provide opportunities for students to give and receive constructive feedback on their work before final submissions are due. Self- and peer assessments are also helpful for identifying “muddy” evaluation criteria that need to be further clarified to the students. Importantly, self- and peer-assessments should be aligned with the rubric (or whatever you are using to explicate the grading criteria), and provide structured guidance to students on how to provide themselves or their peers with specific and actionable feedback. (This recommendation is also aligned with support, addressed in more detail below.)

    • Clarify your expectations and policies around AI use and documentation specifically:

      • Consider using the CTL’s guide to The AI Assessment Scale (Perkins et al., 2024) to create an even more granular AI policy for this particular activity, assignment, or assessment. This resource also includes examples and tips for communicating with students about permissible uses of AI.

  • Consider adopting the Canvas template, which was designed with extensive educator and student input to make it easier for students to find the course syllabus, grading-related information, and other resources and information critical to supporting their own success.

  • Implement the Grade for Student Success practices to ensure your grading practices, grading policies, and students’ own grades and feedback are clear and easy to find in Canvas.

Prioritize the Learning Process by Providing Opportunities for Students to Get Practice and Make Mistakes


Why Practice Matters 

A wealth of evidence from neuroscience and psychology has shown that practice is essential to strengthening neural connections and consolidating new information in one’s long-term memory. In other words, providing students with opportunities to practice new knowledge and skills is essential to supporting their learning! Furthermore, allowing students to take risks and make mistakes without penalty while they practice affords opportunities for students to receive corrective feedback and address misconceptions before building knowledge on these misconceptions. It may also help them develop a growth mindset–the belief that one’s ability is not fixed from birth but, rather, can be grown through practice, effort, persistence, and seeking out support when needed (Dweck, 2006). Developing a growth mindset should, in turn, improve students’ self-confidence and resilience in the face of setbacks and failure. In sum, providing opportunities for students to get practice and make mistakes prioritizes the learning process over “getting it perfect” on the first try. Additionally, it removes many of the incentives for overusing gen AI.

Strategies for Providing Opportunities for Students to Get Practice and Make Mistakes

  • Offer multiple “low-stakes” or “no-stakes” formative assessments (worth few or no points) instead of, or in preparation for, high-stakes summative assessments. Examples of formative assessments include Classroom Assessment Techniques (CATs) (e.g., in-class clickers), short quizzes, problem sets, and mini-essays intended to check in on student learning while learning is in progress; examples of summative assessments include exams, papers, and projects intended to evaluate what students have learned by the end of a period of instruction.

    • If formative assessments are intended as preparation for summative assessments, they should be aligned; that is, formative assessments should offer students opportunities to practice the same knowledge and skills they will be asked to demonstrate on those later summative assessments. Students should not be asked to demonstrate knowledge or skills for the first time on a summative assessment.

  • Pair formative assessments with regular opportunities for students to receive timely constructive feedback, including guidance on how to improve their work and better support their own learning. Learn about best practices for effective feedback. If individualized instructor feedback isn’t feasible (e.g., in large-enrollment courses), consider instead utilizing structured peer assessments or self-assessments.

  • If possible, set aside time in class for students to work through particularly challenging question types with your guidance. Share common mistakes, as well as tips for avoiding them.

  • To help students prepare for exams and other summative assessments, provide practice exams, question banks, or problem sets that students can complete just for practice–not for points or credit. For example, you may share “retired” exams, questions, or problems from past semesters. Keeping practice materials as similar as possible to what students will experience on the real exam helps simulate the exam experience, while also allowing students to accurately gauge their preparedness and adjust their study methods accordingly.

  • As appropriate to the learning outcomes and course context, replace more complex analytic rubrics with simpler checklist-style rubrics, such as those used in specifications grading (e.g., see Nilson, 2023). Such rubrics allow for quick credit/no credit grading and detailed feedback. This style of rubric is usually just two columns. The first column explicates the specifications–a list of basic criteria that must be met for the student to demonstrate what would usually constitute B-level proficiency on the learning outcomes. The second column includes boxes to check off whether the student met that specification. There is also a section at the bottom to provide constructive feedback. If the student meets all specifications, they earn full credit. If they miss one or more specifications, they earn no credit; however, they are typically given the opportunity to reattempt the assessment for full credit (see the next bullet for options). Because grading is quick, this buys you back time to give students detailed feedback on how to improve their work and point them to resources for supporting their own learning and success. Credit/no credit grading (as opposed to points-based grading) with reattempts also shifts students’ attention away from why they lost certain points and toward specific actions they can take to improve their work in the future. Learn more about different types of rubrics from the CTL.

  • Consider allowing students to reattempt major summative assessments for partial or full credit. Allowing students to retake exams, revise and resubmit papers, or otherwise reattempt major summative assessments for credit provides students with an opportunity to reflect on and learn from their mistakes, while also promoting a growth mindset and signaling your commitment to prioritizing student learning. There are many options for how to implement reattempts. For example, in small courses, it may be feasible to offer unlimited reattempts on all major summative assessments. In most medium or large courses, it will likely be more feasible to offer just a single reattempt for certain pre-selected summative assessments or to give each student a set number of tokens for the entire semester that they can exchange for opportunities to reattempt whatever summative assessments they choose. If you are concerned about a “pile-up” of grading from reattempts at the end of the semester, provide assessment-specific deadlines for reattempts that are spread out throughout the semester. Learn more about growth-oriented grading practices from the Grading for Growth Blog by David Clark, Robert Talbert, and colleagues.

  • Reflect on the overall assessment design of your course, and make modifications as needed.

    • Sample questions to help guide your reflection:
      • How many and what types of assessments are students asked to complete throughout the semester?
      • How much does each of these assessments contribute to students’ final grades?
      • Are there sufficient opportunities for students to practice core knowledge and skills via formative assessments prior to summative assessments (if applicable)?
      • Are formative and summative assessments aligned?
      • Can students “afford” to make mistakes and still recover their final grade?
    • In general, if final grades are determined by just a few high-stakes summative assessments (e.g., an individual assessment is worth 25% or more of the final grade), there are insufficient opportunities for practice, and/or students cannot “afford” to make mistakes, consider modifying your overall assessment design based on the recommendations above.

Build in Additional Forms of Support to Promote Student Success


Why Support Matters

Beyond the many strategies already discussed above, support can come in a variety of additional forms–from providing instructor or peer feedback to raising awareness around campus resources. When students feel supported, this promotes their learning, well-being, and success (reviewed in The Norton Guide to Equity-Minded Teaching). On the flipside, research has found that students are more likely to cheat when they perceive that they don’t have enough time or support, particularly when attempting high-stakes summative assessments (e.g., see Beasley, 2014; Lang, 2013). In sum, providing students with additional forms of support should reduce their risk of overusing AI.

Additional Ways to Provide Support

  • If students are asked to complete a larger or more complex project, utilize scaffolding to break it down into smaller, more manageable steps. Provide more support at the start and less support as students gain confidence. Additionally, build in opportunities for instructor feedback, peer assessment, and/or self-assessment along the way.

  • Set aside time to familiarize your students with evidence-based learning strategies, such as retrieval practice, elaboration, and interleaving (e.g., see Make it stick: The science of successful learning).

  • Incorporate exam wrappers or other self-assessments following major summative assessments to encourage students to reflect on how they studied or otherwise prepared for the assessment, identify which strategies worked well and which did not, and identify strategies or resources they could better utilize to support their learning in the future (ideally including the evidence-based learning strategies mentioned above). Learn more about exam wrappers from U. Iowa and this article on when and how to use exam wrappers from IU Bloomington’s Center for Innovative Teaching & Learning.

  • To help students reflect on how their use of AI may be helping or harming their learning, consider incorporating metacognitive prompts such as those from Marc Watkins’s AI Assisted Learning Template.

    • Sample prompt: “Did bouncing ideas off AI spark your creativity? Were there any new exciting directions it led you toward, or did you wind up preferring your own insights independent of using AI?”

  • Especially in larger courses where it may not be feasible to give students as much individualized attention, consider pairing each of your students with a peer “study buddy” or “accountability partner.” Provide opportunities in class (if possible) for students to get to know their buddies, and normalize working with a buddy to find answers to their questions and stay on track.

  • Add a module or page to your Canvas course to help raise students’ awareness of departmental and campus resources (e.g., tutoring, mental health and wellness services). Note that the Canvas template includes a campus resources page.

In general, remember that students are often juggling many competing roles, responsibilities, and demands on their time. In addition to school, this may include working full- or part-time, raising children, caring for aging family members, long commutes, etc. Reminding yourself to think of your students as whole people may help with communication, empathy, and rapport building.