Many units on the CU Boulder campus have been working in partnership with the Teaching Quality Framework (TQF) Initiative to better align their teaching evaluation practices with the scholarship on teaching evaluation by: a) examining their current teaching evaluation practices; b) identifying or creating tools that better assess teaching quality and fill gaps in their current evaluation practices, aligning multiple measures from three key voices (peers, self, and students); and c) implementing these tools along with procedures for their use. These efforts are designed to fulfill university requirements; see "CU Policies Related to Measuring Teaching Effectiveness" for an annotated list of CU policies that support academic units in defining their own measures of teaching quality and in using multiple measures and multiple voices in evaluation.

The following tools are designed to align with an overarching rubric (2 page version) that categorizes seven dimensions of teaching and draws from three voices (peer review, self-evaluation, and student evaluation). Working with the TQF team, departmental teams utilizing the Departmental Action Team (DAT) model (Corbo et al. 2016) have developed tools and processes for measuring teaching quality for the purpose of merit and/or reappointment, tenure, and promotion. Below we provide links to generic templates and department-specific examples of these materials, which currently fit into four broad categories: Peer Observation, Self-Evaluation, Student Evaluation, and Applying the Overarching Rubric to an Evaluation Form. Please note that many of the examples are still in development (notations below: “in use” = currently being used by the department in their evaluation process(es); “in review” = being reviewed and/or piloted by members of the department external to the TQF DAT team; “in development” = a near-final draft is approaching review). As additional “in use” versions become available we will update this list.

Peer Observation

Unstructured peer classroom observations, i.e., those that are not based on a set of core criteria, can result in inconsistency and do not always address teaching practices that are valued by a department (AAAS 2012). For this reason, the scholarly literature on teaching evaluation recommends that academic units articulate the best teaching practices for their field and define core criteria to use in the observation process (AAAS 2012). Feedback on teaching is more effective in promoting growth and improvement when it focuses on specific issues, contains concrete information, and is based on specific data rather than general impressions (Brinko 1993). Toward these ends, departments working with the TQF have developed standardized peer observation protocols, peer evaluation plans, and associated materials.

Peer Observation Protocols

These peer observation protocols are intended to provide structure and consistency to peer classroom observations. The templates and examples below draw heavily from the UTeach Observation Protocol (UTOP: https://utop.uteach.utexas.edu/) and the Oregon Teacher Observation Protocol (OTOP; Wainwright et al. 2003).


Active Learning Addenda

Many departments have chosen to explicitly include active learning as a category for peer observation; in departments where faculty may be less familiar with active learning, a more detailed addendum may be added to the peer observation protocol to provide additional guidance.


Peer Evaluation Plans

Peer evaluation plans document a department's processes for how peer observation fits into its overall teaching evaluation, including how the peer observation protocols will be used.

  • Template
  • Department examples
    • GSLL (in use)
    • HIST (in use)
    • MCEN (includes observation protocol) (in use)
    • MATH (in development)
    • MCDB (in use)


Letter Writing Guides for Peer Observation

In some departments, the standardized peer observation protocol is not used directly; instead, it is designed to guide peer reviewers on what should go into a review letter. Like peer observation protocols, these guides should include structured, consistent categories for review.

  • Example from MATH (in review)


Peer Observation Cover Letters

Because peer observation materials compiled for reappointment, tenure, and promotion dossiers may differ, sometimes significantly, from what has been submitted in the past, a cover letter attached to the peer observation materials can explain the rationale for the change.


 

Self Evaluation

Self-reflection is a key element of a complete teaching evaluation, yet the instructor’s voice is often lacking in the evaluation process, particularly for annual merit. Many departments are working on incorporating self-reflection into their merit processes and/or developing guidelines for writing teaching statements for reappointment, tenure, and promotion.

Self-Reflection in Merit

One way of incorporating the voice of the faculty member being evaluated is to allow space for them to reflect on their own teaching practices in the annual merit process. Typically, departments have created a list of guiding questions that align with their values and the TQF assessment rubric, from which faculty can select those most relevant to their own practices.


Teaching Statement Guidelines

While a teaching statement is required for reappointment, tenure, and promotion dossiers, typically there is very little guidance on what should be included in these statements. Several departments are developing such guidance.


 

Student Evaluation

Many units on campus rely heavily on end-of-semester student evaluations of teaching (SETs, known as FCQs at CU), particularly for annual merit. While students are one of the three key voices in the evaluation of teaching, over-reliance on SETs/FCQs as the primary or sole measure of student voice can be problematic (see “Role of Students in Evaluation: Student Voice” in our FAQ). The Boulder Faculty Assembly (BFA) recently recommended that FCQs (SETs) be used primarily as formative feedback rather than summative assessment, that evaluators be made aware of potential bias in FCQs, and that the omnibus questions (rate the instructor and course overall) be removed (BFA-R-2-102918.4). In addition to rethinking how FCQs/SETs are used, many departments working with the TQF are developing or improving other forms of student voice, including classroom interviews and student letters.

Recommendations on the use of SETs (FCQs)


Classroom Interviews

As departments explore better ways of incorporating student voice, many have begun working on guidelines for conducting classroom interviews. The Faculty Teaching Excellence Program (FTEP) at CU Boulder currently offers a Classroom Learning Interview Process (CLIP), in which students are placed into small groups to discuss their responses to questions about their learning experiences in the course. The confidential feedback from this formative process is then shared with the instructor.

While many departments allow faculty to submit this confidential feedback as part of their merit and/or reappointment, tenure, and promotion processes, several departments have developed modified versions that can be incorporated into their peer observation processes (see Peer Observation Protocol examples above).


Student Letters

As departments explore better ways of incorporating student voice, several have turned to student letters. Many departments already solicit letters from current and past students to include in a dossier for reappointment, promotion, and tenure, but these solicitations often give students no guidance on what to include in their letters or how to write them.

In collaboration with the TQF, the Mechanical Engineering Department and other departments/units have developed new solicitation letters that include specific guidelines for what students might reflect on and include in their letters. The expectation is not that students write about every suggestion, but rather that they pick those suggestions most relevant to their experiences.


Mentoring

Working with the MCEN department, we have compiled external resources related to mentoring and assessing mentoring.


Applying the Overarching Rubric to an Evaluation Form

All TQF departments have worked with the TQF assessment rubric when designing their assessment materials (i.e., ensuring that the measures they develop align with the seven dimensions in the rubric), and some have gone on to adapt the rubric as needed to align with their discipline and use it in their evaluation processes.

While materials produced by departments are aligned with the overall rubric, departments are also developing compact representations that summarize the multiple data sources in a single form that can be used as an assessment rubric. For example, Germanic & Slavic Languages & Literatures has customized the rubric to pull evidence from self-reflection, peer observation protocols, review of syllabi/course materials, and FCQs (SETs) for annual merit evaluation of teaching.

Merit Evaluation Rubric Forms


 

References

American Association for the Advancement of Science. 2012. Describing and Measuring Undergraduate STEM Practices. A Report from a National Meeting on the Measurement of Undergraduate Science, Technology, Engineering, and Mathematics (STEM) Teaching. https://live-ccliconference.pantheonsite.io/wp-content/uploads/2013/11/Measuring-STEM-Teaching-Practices.pdf

Brinko, K. T. 1993. The practice of giving feedback to improve teaching: What is effective? The Journal of Higher Education 64(5): 574-593. https://www.jstor.org/stable/2959994

Corbo, J. C., Reinholz, D. L., Dancy, M. H., Deetz, S., & Finkelstein, N. 2016. Framework for transforming departmental culture to support educational innovation. Physical Review Physics Education Research, 12(1), 010113. https://doi.org/10.1103/PhysRevPhysEducRes.12.010113

Wainwright, C. L., Flick, L. B., & Morrell, P. D. 2003. Development of instruments for assessment of instructional practices in standards-based teaching. Journal of Mathematics and Science: Collaborative Explorations, 6(1), 21-46. https://doi.org/10.25891/NTKY-AX16