Andrew Martin

Ecology and Evolutionary Biology

Target Course:

I taught Evolutionary Biology, a core course taken mostly by biology majors, with an enrollment of 60 students. It is a general survey course, although the emphasis is on developing analytical skills for interpreting data and testing alternative hypotheses.

My goals:
I had three assessment goals. The first was to evaluate how well I was communicating the main concepts of specific lectures and how well students were learning what I was trying to teach. In short, I was interested in whether students could effectively identify the important bits in the morass of information that constitutes a lecture. Second, I was interested in assessing whether students gained a more sophisticated understanding of the main question of the course, namely: how does evolution happen? Finally, I was interested in developing clicker questions that facilitated peer learning and discussion.

What I did:
For the short-term (lecture-based) assessment, I used the 1-minute written summary approach. I handed out 3 x 5 note cards and asked students to write the three main points of the lecture on one side and one thing that they did not understand on the reverse side. Sometimes I implemented this strategy in the middle of class and sometimes at the end. I would tally the results and present them at the beginning of the next lecture, having identified what I thought were the main points, and explain why the topics some students included on their lists were not necessarily the most important. For example, in a lecture on the evolution of cognition, students listed 15 different topics that they thought were the main points of the lecture and discussion; however, only 4 of the identified topics were what I would consider the main issues (Figure 1). I showed the list, went through what I thought were the main points, revisited the important concepts, and also revisited some of the issues I thought were peripheral, explaining why they were peripheral.

Figure 1. Histogram showing the frequency with which a particular topic was described as something important that the students learned. Asterisks indicate the topics that I consider the main points.

I used this strategy several times during the semester, each time following the same format. What I learned from this strategy is that it is important to repeatedly visit the main points of the lecture and to LIMIT my learning goals for the course.

For the long-term, course-level strategy, I asked the same question—how does evolution happen?—three times: once at the beginning of the course, once in the middle, and once at the end. All students submitted their responses online, and the data were analyzed using counts of word frequency, under the assumption that it is possible to gauge the collective understanding of the topic from the bits and pieces of the language students use to answer the question. While students' answers provide more information than simple word frequencies, and it would be worth going through the text with a more sophisticated analytical approach, the word clouds (see Figure 2) did reveal some important information.


Figure 2. Two examples of word clouds based on surveys from the midpoint (left) and end (right) of the course. Note that while many of the words are the same, their relative frequencies change. Also evident is that the kinds of words among the rarely used words change.
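The word-frequency tallies behind such clouds can be sketched in a few lines of Python. This is a minimal illustration, not the analysis actually used for the course; the example responses and the abbreviated stop-word list are assumptions:

```python
from collections import Counter
import re

# Hypothetical free responses to "How does evolution happen?"
responses = [
    "Evolution happens when a population changes over generations.",
    "Selection and genetic drift change allele frequencies in a population.",
    "A population evolves when heritable variation affects survival.",
]

# A short, assumed stop list of common words to ignore
STOP_WORDS = {"a", "an", "and", "the", "in", "when", "over", "to", "of"}

def word_counts(texts):
    """Tally how often each word appears across all responses."""
    tally = Counter()
    for text in texts:
        # Lowercase and keep only alphabetic tokens
        tally.update(w for w in re.findall(r"[a-z]+", text.lower())
                     if w not in STOP_WORDS)
    return tally

counts = word_counts(responses)
print(counts.most_common(3))  # "population" tops the list with 3 uses
```

The same tallies feed directly into any word-cloud tool, which simply scales each word's display size by its count.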

First, there were changes in the frequency of particular words that suggest students gained a better understanding of how evolution happens. For example, the relative frequency of the word population (that is, its frequency relative to the most frequent word) rose from just under 0.5 at the start to 0.86 at the midpoint and 1.0 at the end (Figure 3). This indicated that students learned that evolution is a change in the characteristics of populations rather than of individuals. Similarly, the frequency of genetic drift changed from zero (not used) to about 0.6 and then dropped (Figure 3). This hump-shaped trajectory can be explained by the fact that immediately prior to the second survey we had focused on genetic drift as a mechanism of evolution, but by the end the students had a more balanced view and so fewer of them included this term. Second, what was probably the most interesting result (to me) is that the types of words unique to a particular sampling episode (beginning, middle, end) changed. For example, at the beginning students used words like able, adapt, best, suited, and favor; by the end they had abandoned these words and were using words like phenotype, deleterious, sexual, pleiotropy, constraints, correlation, and effect. Combined with some of the changes in the frequency of words used throughout the course, this pointed to a clear increase in students' sophistication. Overall, though, the approach was not particularly informative, both because of the limitation imposed by the question posed and because word frequency alone is a crude measure of understanding.

Figure 3. The relative frequency of two different words: population and drift.
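The relative frequencies plotted in Figure 3 are simply each word's count divided by the count of the most frequent word in that survey, so the top word always scores 1.0. A minimal sketch of that scaling, using made-up counts chosen only for illustration (not the actual class data):

```python
def relative_frequencies(counts):
    """Scale raw word counts by the most frequent word's count,
    so the most frequent word has relative frequency 1.0."""
    top = max(counts.values())
    return {word: n / top for word, n in counts.items()}

# Hypothetical raw counts from one survey round
survey_mid = {"evolution": 50, "population": 43, "drift": 30, "selection": 25}

rel = relative_frequencies(survey_mid)
print(rel["population"])  # 0.86
print(rel["drift"])       # 0.6
```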

Finally, I developed clicker questions that did an excellent job of stimulating peer learning and discussion. In the past my clicker questions typically had a single right answer. I have replaced most of these with questions that either have more than one correct answer or for which the correct answer requires some serious thought. My lectures were typically punctuated by such questions every 8 to 15 minutes; I would allow students to discuss each question for two minutes and then either show them the histogram of the results as a means of discussing the subject or have students defend their answers without showing the results. Although I did not formally assess this strategy, judging from the comments during the discussion and the level of engagement during the 2-minute discussion period, I would say that I am on the right track.

What I learned
My take-home messages:

1) assessment is a good thing to do, both for student learning and for my understanding of what students are learning; it should be used frequently and treated as being as important as the substance and delivery of the course's content;
2) assessment is time consuming;
3) assessment provides a means of breaking up lectures so that students do not experience cognitive overload and enables students and the professor to reflect on what is being learned and how the learning happens;

What is next
I have several goals:

1) continue to improve clicker questions to achieve a high level of peer learning and discussion in large classes;
2) use the 1-minute response approach more frequently as a means of getting students to reflect on their immediate learning experience and as a means for me to gauge what they consider important and what they think is not important;
3) use the overall course assessment strategy in a more meaningful way, using on-line assessment strategies that enable quantitative analysis;
4) include a pre-assessment where I focus on gaining information about their motives, knowledge, experience, and study habits—the key is to ask the right questions;
5) cast pre and post assessments in terms of core or essential concepts.

Least buck for most bang

1) Implement strategies that can be done on-line and the data analyzed using an on-line analysis tool (word clouds are helpful);
2) Use students in the class to assist in the evaluation;
3) Do as much as you can with creative use of clickers (surveys, misconceptions, pre and post surveys, etc.).

University of Colorado at Boulder