“Clickers”: Electronic Audience Feedback in the Classroom

Michael Dubson, Physics Dept

(last updated Sept 17, 2003)

Since the Spring of ’02, the Physics and Astronomy Departments at CU have been using an electronic audience feedback system in several large freshman classes.  During lecture, students answer multiple-choice questions (“Concept Tests”) with personal electronic transmitters.  The system records how each individual student voted, so individual students’ scores are maintained. The system we use is manufactured by an Arkansas-based company called H-ITT (Hyper-Interactive Teaching Technology); their web site is http://www.h-itt.com/

Here’s how the system works:  Students purchase a small infrared transmitter, called a “clicker”, at the CU bookstore for $29 each.  The bookstore will buy back clickers for half-price at the end of the semester. Clickers, which are the size of a magic marker, take two AA batteries.  Each clicker has a unique ID number stamped on it, and students register their clicker ID along with their name and student ID at a local web site, which is maintained by the Physics Dept: http://capa.colorado.edu/cgi‑bin/RegisterAFS

This web site produces a list of students and their clicker IDs, in a format suitable for use with the H-ITT software. 

The lecture hall is wired up with receivers arrayed around the room.  For votes to be recorded quickly, you need one receiver for every 25 students.  Each receiver costs $180, so, for instance, a 300-seat lecture hall requires 12 receivers at a total cost of $2160. All software is free and can be downloaded at any time from www.h-itt.com.  If you have a small class of 50 students or fewer, you might consider a portable system: bring one or two receivers into the classroom, set them up on tripods, and hook them to a laptop.
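The receiver sizing above is simple arithmetic. As a rough sketch (the function name and the round-up rule are our own, not H-ITT's), the per-hall receiver budget can be estimated as:

```python
import math

RECEIVER_COST = 180          # dollars per receiver, as quoted above
STUDENTS_PER_RECEIVER = 25   # one receiver per 25 students for fast voting

def receiver_budget(seats: int) -> tuple:
    """Estimate (receivers needed, total receiver cost in dollars) for a hall."""
    n = math.ceil(seats / STUDENTS_PER_RECEIVER)
    return n, n * RECEIVER_COST

print(receiver_budget(300))  # the 300-seat example above: (12, 2160)
```

A 50-seat portable setup works out the same way: two receivers, $360.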


Other costs include:

1)   PC and cart (about $2000).  The receivers are daisy-chained and the last receiver in the chain is plugged into a PC, which controls vote-collecting, maintains records, etc.

2)   LCD projector ($1500).  We strongly recommend that you use separate projectors for the clicker status display and for course content.  As explained below, you need to display a clicker status screen continuously while questions are displayed.

3)   Receiver installation, cables, etc. (maybe $300 worth of cables, connectors, etc. for large lecture hall -- labor not included).

4)   Your time! Plan on an extra 5 minutes before the start of each lecture to set up and test the system.  The software is very intuitive, but it is powerful and flexible and takes a few days to get thoroughly used to. Another grade column in your spreadsheet (class participation) requires organization and planning, and you will occasionally deal with student complaints that the system is faulty.  All this takes time -- not a lot, but it is non-negligible.  Some instructors have assigned one of their TAs to be fully responsible for operation and maintenance of the system.

            During a typical 50-minute lecture, students are asked 4 to 6 multiple-choice questions (see Concept Test examples at the end) and are given about 2 minutes to discuss and answer each question.  (We usually encourage students to discuss the question with their neighbors before answering.) During the voting period, a status screen is displayed which indicates which students’ votes have been received so far.  The screen shows a matrix in which each cell corresponds to an individual student.  The cells contain the last few digits of the clicker ID (the system can also be set to display the student’s initials). Each cell keeps the same color and position in the matrix, so a student quickly learns that she is the “green cell in the lower right: 050”. When a student sees her cell appear, she knows for certain that her vote has been recorded.
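The fixed-cell status screen amounts to a stable mapping from clicker ID to grid position and short label. Here is a minimal sketch of that idea; the function, the column count, and the sort order are hypothetical, not H-ITT's actual code:

```python
def status_layout(clicker_ids, columns=10):
    """Assign each registered clicker a fixed grid cell and a short label
    (the last three digits of its ID), so each student's cell never moves."""
    cells = {}
    for i, cid in enumerate(sorted(clicker_ids)):
        row, col = divmod(i, columns)           # fill the grid row by row
        cells[cid] = (row, col, cid[-3:])       # fixed position + short label
    return cells
```

During a vote, the software would then highlight only the cells whose clickers have reported in, which is what lets a student confirm her vote at a glance.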

            When everyone has voted, the instructor closes the vote and has the option of showing the class a histogram of the results. 

            After class, a versatile grade-keeping program allows the instructor to maintain grades and export the results to Excel.  My default grading scheme is this:  3 points for a correct answer; 1 point for a wrong answer; zero points for no answer (so attendance is rewarded).  A student’s clicker points are used to replace up to 10% of the exam score if his average clicker score exceeds his exam average -- hence clickers are not mandatory in my class, and a student who does very well on exams is not penalized for failing to use the clicker.
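The scheme above is easy to express in code. In this sketch the function names and the exact blending formula are our assumptions; the text says only that clicker points "replace up to 10% of the exam score" when the clicker average is the higher of the two:

```python
POINTS = {"correct": 3, "wrong": 1, "none": 0}   # scheme quoted above

def clicker_score(answers):
    """Total clicker points for a list of per-question outcomes."""
    return sum(POINTS[a] for a in answers)

def adjusted_exam(exam_avg, clicker_avg, cap=0.10):
    """One plausible reading of 'replace up to 10% of the exam score':
    blend in the clicker average only when it exceeds the exam average,
    so a student who never uses the clicker is not penalized."""
    if clicker_avg <= exam_avg:
        return exam_avg
    return (1 - cap) * exam_avg + cap * clicker_avg
```

Under this reading, a student with an 80% exam average and a perfect clicker record would end up with 0.9 × 80 + 0.1 × 100 = 82.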

            Are clickers a good thing?  Use of clickers has a strong positive effect on attendance.  In Phys2020, the first course in which clickers were used, average lecture attendance went from 65% (pre-clicker) to 92% (post-clicker).  Student sentiment is largely positive: in anonymous surveys, students complain about the cost of clickers but overwhelmingly approve of their use and want the clicker points to count for a larger fraction of the course grade.  Most important, in my view, clickers have had a strong effect on the level of student engagement. 

Prior to the use of clickers, I used colored cards to get audience feedback.  Every student had 5 different colored cards, and would vote on Concept Tests by holding up the correct card.  This would give me a quick, semi-quantitative idea of the level of student understanding, but nothing was recorded.  At the beginning of every semester, students were intrigued by this novel system and would vote enthusiastically.  However, within a few weeks, I always encountered “the fade”: the novelty wore off and students would stop voting unless I constantly cajoled them to participate.  With clickers, there is no vote fade; students vote all the time.

Miscellaneous comments:

·        At present, 9 lecture halls at CU have clicker receivers installed: three rooms in Duane Physics (G1B20, G1B30, and G125), Ramaley C250, Cristol Chemistry 140 and 142, Hale 270, and two rooms in Engineering (ECCR105 and ITLL1B50).  These installations have been funded by a combination of student fees, FTEP funds, and ITS support.

·        Unlike colored cards, clickers cannot be used to get an “instant” vote.  With 300 students, it takes a minimum of 45 seconds to get all the votes in.  The problem is that rapid voting causes a traffic jam: if two clickers are fired at the same receiver simultaneously (within 0.1 seconds), neither vote is recorded.  Hence the importance of a status screen showing students when their vote has been recorded.  Traffic jams don’t occur if the student needs a minute to think about the question before answering.  But you can’t ask a question like “Who saw Star Trek last night?”.

·        The clickers are cheap and not very robust, but they are now much more reliable than they were when we first started in Spring of ’02.  At present, perhaps 1 in 100 clickers will be defective.  When a student reports a dead clicker, it is usually just dead batteries.  Fresh batteries easily last one semester, but not two.
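To get a feel for why rapid voting jams, here is a small Monte Carlo sketch under simplifying assumptions of our own (votes uniformly spread over the voting window, one shared receiver, and the 0.1-second dead time quoted above); it is not a model of H-ITT's actual protocol:

```python
import random

def collision_chance(n_votes=25, window=60.0, dead_time=0.1, trials=2000):
    """Fraction of simulated votes in which at least two transmissions
    at one receiver land within `dead_time` seconds of each other."""
    hits = 0
    for _ in range(trials):
        times = sorted(random.uniform(0, window) for _ in range(n_votes))
        if any(b - a < dead_time for a, b in zip(times, times[1:])):
            hits += 1
    return hits / trials

random.seed(0)
# 25 students all firing within a few seconds jam far more often than
# the same 25 votes spread over a minute of thinking time.
print(collision_chance(window=5.0), collision_chance(window=60.0))
```

The simulation makes the qualitative point in the text: a snap vote almost guarantees lost transmissions, while a minute of thinking time thins the traffic, and the status screen catches whatever collisions remain.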

Some Physics Concept Tests:


Test your transmitter with this free question (full credit for any answer).


What is the cost to put a one-pound (1 lb) payload into orbit with the space shuttle?


A) $150

B) $500

C) $2000

D) $8000

E) $25,000


CT20-4.  Brass has a positive coefficient of thermal expansion. A ring (annulus) of brass is heated.  Does the hole in the middle of the ring get larger or smaller?



A: larger                       B: smaller                     C: stays the same



Use of Clickers in Physics 1020, Physics of Everyday Life.

Carl Wieman and Kathy Perkins - CU Dept of Physics


Phys1020 is the 2nd semester of a 2-semester introductory physics sequence, aimed at non-science majors. It is a “physics for poets” course. We used clickers in this course in the spring of 2003.


Types of questions.

In this class the clickers are used for a variety of purposes, some of which are somewhat different from those discussed by Dubson in the note above. 

1. Reading quizzes.  Students are assigned to read the text before lecture and are given three very easy questions on the material at the start of class.

2.  Confronting known misconceptions.  Students are asked questions where it is known that most of them have some basic misconception and will give the incorrect answer.  There is then follow-up discussion or experiments and questions exploring the correct answer and the source of their misconception.

3. Predicting results of in-class experiments/demonstrations.  Before any demonstration is carried out, students are asked to predict what will happen. Then they see the results and we discuss why it came out that way, and, where appropriate, why their predictions were wrong.

4. Concept questions of the sort discussed by Mazur, although our questions are nearly always less abstract and more closely connected to real world situations. 

5. Testing understanding of material that has been covered.  After a topic or subtopic has been covered, questions are asked that check the student’s level of understanding before proceeding.  This can include simple quantitative questions. 


Types of clicker-enhanced student collaboration.

We found that the interaction between students was considerably enhanced when we assigned them seats and made each student a member of a three-person group seated together.  After many of the questions, particularly those where there was a large spread in the answers submitted, the students would be told to talk to their group members and come up with a consensus vote.  This assignment of seats and groups, and the requirement of consensus answers, substantially increased the level of interaction over standard unstructured peer instruction, particularly among the less interactive students. 


How clickers were included in grading.

Most clicker questions are not graded.  Students who attend class and use clickers to answer questions receive 3 points per class.  Occasionally one of those points depends on whether they correctly answered a specific question, usually one late in the class that anyone paying attention gets right -- which is usually nearly all of the students.   Approximately once every 1.5 weeks there is a 3-question graded quiz on the assigned reading that counts for 3 points.  Over the course of the term, the clicker points make up a bit over 20% of the total points possible. 
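As a back-of-the-envelope check on that "bit over 20%" figure, here is a sketch using illustrative numbers of our own (45 class meetings, 10 reading quizzes, and a placeholder total for all non-clicker points; none of these specifics come from the course itself):

```python
def clicker_fraction(n_classes, n_quizzes, other_points,
                     points_per_class=3, points_per_quiz=3):
    """Fraction of total course points coming from clickers under the
    scheme above: 3 participation points/class plus 3-point reading quizzes."""
    clicker = n_classes * points_per_class + n_quizzes * points_per_quiz
    return clicker / (clicker + other_points)

# e.g. 45 meetings and 10 quizzes against 620 exam/homework points
# gives roughly 0.21, i.e. "a bit over 20%".
print(round(clicker_fraction(45, 10, 620), 2))
```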


Indications of success.

1. The class is dramatically more interactive: a large fraction of the class (10-15 out of 45 students) asks questions during a class period, and the level of the questions is dramatically higher.  For example, the questions often concern extensions of the material to applications not covered in class, anticipate aspects of the topic to be covered later, or ask about the outcomes of, or suggest, additional experiments to be done with the interactive lecture demonstrations.  A large fraction (20-40 percent) of the typical class period is spent discussing such student questions. 

2. Attendance is typically about 85%. This is clearly much higher than before, although precise pre-clicker numbers are not known.

3. The students rate the “lecture” as making a large contribution to their learning.  It is now among the highest-rated elements in the class in terms of their assessment of its contribution to their learning, whereas with more traditional lectures it ranked lowest.  

4. Overall learning by a variety of measures is higher than before clickers were used, but several changes in addition to the use of clickers were made, so it is impossible to distinguish what fraction of this gain was due solely to the clickers.

5. There was dramatically higher retention of information that was presented in the form of a clicker question and its answer.  As a baseline, clicker questions were asked on non-obvious material previously stated verbally in class.  Typically, 10% of the students would answer such questions correctly in spite of having been directly told the answer approximately 15 minutes earlier.  In contrast, several times clicker questions on particularly difficult topics (where most of the students had at first answered incorrectly) were repeated in subsequent classes two days later.  Virtually everyone remembered that the question had been asked previously, and approximately 90% (which could well have been 100% of the students who had attended the previous class) then answered the question correctly.