Department of Political Science
Knowledge and skill goals for this undergraduate degree program are recorded in the most recent CU-Boulder catalog.
In some summaries of assessment activity, goals are referred to by number (e.g., K-2 is knowledge goal 2).
Assessment Activities in 2000-01
This report summarizes the Department's ongoing process of evaluating undergraduate degree program performance. During the 2000-01 period, the Department of Political Science looked at departmental successes and failures primarily by reassessing the careers of graduating seniors, but also by looking briefly at the Honors Program and taking stock of recent surveys of graduates. Based on these assessments as well as extensive discussion, the Undergraduate Curriculum Committee concludes this report with some recommendations about how we teach our undergraduate curriculum.
I. Portfolio Assessment
In 2001, the UCC performed two tasks: it reviewed a sample of portfolios of graduating seniors, and it continued collecting portfolio materials for continuing majors, both gathering tests and essays from the new cohort of freshman majors begun last year and adding information on new and continuing majors.
These portfolios contain a history of student performance as indicated by final exams and papers completed between Fall 1996 and Fall 2000. Identifying information was deleted from this material, leaving only the course number, assignment, and semester in which the work was done; the professor's name and the grade assigned, if originally indicated, were also removed. Three class tests and papers constituted a minimum portfolio; most portfolios had four to six assignments. Six portfolios, totaling 27 exams and essays, were evaluated. This was a randomly drawn sample, with GPAs varying from 2.1 to 3.5, and it included students who showed consistent academic performance as well as those who showed improvement.
Each student's written work was evaluated on four characteristics plus an estimated GPA. Those characteristics were: (1) understanding of issues and factual knowledge, (2) ability to evaluate conflicting arguments, (3) use of appropriate evidence to reach reasoned conclusions, and (4) overall writing ability.
Students were rated both on achievement, as evidenced by examinations and papers at the end of their fourth year, and on improvement over the entire period. Achievement was rated on a scale running from "Excellent" to "Good," "Fair," and "Poor," with "Not Applicable" used when there was insufficient evidence, e.g., when an assignment did not require a particular intellectual task such as making comparative judgments. "Improvement" reflected a global impression of a student's performance across all four characteristics: raters were asked simply to indicate whether they perceived any improvement between students' freshman/sophomore and junior/senior level course work.
The complete set of scores for these portfolios is shown in the table below. The mean rating of student performance in all categories of evaluation falls in the "Fair to Good" range. The scores also reveal that students did somewhat better on knowing the facts than on making comparative judgments, which is to be expected for at least three reasons: 1) comparative judgments are among the hardest critical thinking skills to develop; 2) some of the exams failed to ask for such judgments; and 3) students tend to "duck" such judgments when they can, as factual judgments are much easier for students to give and to hide behind. A closer look at the ratings of particular students also reveals two encouraging features.
First, there is clear improvement in the academic performance of three students, with some disagreement about the improvement of a fourth. Of the two students who failed to show improvement, one started out with a very good record and maintained it, while the other started low and remained low. Examination of grade histories suggests that most students' career trajectories in college are flat. Given that consistency, the fact that only one student in this sample started low and remained there is a positive indication that the Department is succeeding in teaching critical thinking skills to its majors.
Second, a closer look at the ratings of particular students shows little variation across raters, suggesting that the raters applied consistent standards of comparison and improvement. This indicates that portfolio assessment can give a reliable picture of critical thinking skills and improvement.
II. Honors Program
In 2001, the Department of Political Science graduated 14 students with honors degrees (2 Cum Laude, 7 Magna Cum Laude, and 7 Summa Cum Laude). This total is only slightly below the largest number to graduate with honors in Political Science in a single year (18 in 1999) and still represents a 39% increase over the number of 1997-98 honors graduates.
Building on the outcomes assessment of 1999, we can put these results into a broader historical picture. The number of honors graduates doubled immediately following the initiation of the outcomes process in 1989 and doubled again over the last decade, and the 2001 record shows that we remain at virtually the same high level. Yet during roughly the same decade, we have seen a significant decline in both Political Science majors and faculty FTEs. The number of majors has declined from a high of 873 in 1989 to 674 this year, a drop of about 23%, and only 16 faculty members of the department were on campus throughout the year (down from a high of 28). Even so, from 1991 to 2001 the number of honors graduates rose by 64%, and the number of faculty working with honors students has grown significantly. Directors of the departmental honors program during the last ten years deserve credit for this success.
This increase is important because the Honors experience is a demanding one which teaches students critical thinking skills. Students must show strong class performance in the first three years, and honors thesis research and writing involve a tough set of requirements. Students who go through the honors process tend to look back on it as the defining part of their college career. The honors experience also requires of students a unique oral defense that we think should become a larger part of undergraduate education. We are glad to be offering more students this kind of educational opportunity.
III. Senior Surveys

Senior surveys conducted by the Department are not available for this year, but will be done this coming Fall. In addition, although the CU Office of Planning, Budget, and Analysis (PBA) surveys graduating seniors each year, those surveys are not specific to particular majors. Fortunately, this year CU also participated in the National Survey of Student Engagement (NSSE), which asked seniors and freshmen at 276 U.S. colleges and universities about experiences and skills acquired during the course of their education and about their academic and non-academic activities. In spring 2000, 700 freshmen and 1,500 seniors at CU were invited to complete the NSSE, and 346 freshmen and 572 seniors completed surveys on the web. In addition to the 800 seniors included in the base sample, an additional 700 CU-Boulder seniors in the twenty largest majors were over-sampled to allow those majors to be characterized separately.
Full results are shown graphically at the PBA web-site. However, in the "Highlights for Arts and Sciences Majors" section, the Department of Political Science is singled out. The relevant part of these "Highlights" reads as follows:
Political Science majors rank highest among all majors on general satisfaction. Political Science majors are most likely of all A&S majors to say they would attend UCB again if given the chance. . . Political Science majors also rank advising, course availability, and acquisition of a broad general education quite high, and they report the greatest use of open-ended exams. Political science majors believe the university has contributed to their non-academic life as well by helping them learn on their own, fulfill their civic duty by voting, and contribute to the welfare of the community.
IV. Recommendations

A. Prerequisites and Course Numbering

Progress has been made this year in adopting a more standardized numbering scheme to inform and constrain student enrollments in an orderly fashion. While International Relations, Political Theory, and Comparative Politics have done some rethinking of their course schemes and prerequisites, American Politics and Public Policy have been waiting until some important hiring decisions are made. This task needs to be completed, however, as does the dropping of old, untaught courses and the adding of new ones. Restricted access to some courses also needs to be reconsidered.
B. Restructuring Examinations and Large Classes
In reading the portfolios, committee members noticed that some exams and papers focus too much on factual questions or requests for a summary of class materials. If the Department wishes to focus on teaching critical thinking, the faculty needs to design exams and other assignments that demand this sort of thinking from students. As part of trying to improve the design of these assignments, the committee offers two recommendations.
First, it is suggested that large lecture courses more explicitly incorporate a critical thinking component in their syllabi and that the Department employ graduate teaching assistants in teaching this component. Graduate teaching assistants might be used, for example, in working one on one with students to write and rewrite short papers directed at making critical, comparative evaluations of conflicting arguments.
Second, it is suggested that the Department devote more time to discussing the teaching of critical thinking in all classes, whether in regular Departmental meetings or Departmental retreats. Such discussion would allow faculty to help each other refine how critical thinking can be taught in ways that are appropriate to different kinds of classes. More importantly, this sort of discussion helps strengthen the Department's commitment to critical thinking. The Department should keep moving away from testing for basic descriptive facts alone. Instead, it should ask students for critical comparisons of different ways of interpreting empirical evidence, as well as for critical judgments about its meaning and value.
Ratings of Individual Students, 2000-01
|Student|Faculty Rater|Understands Issues/ Factual Knowledge|Evaluates Conflicting Arguments|Uses Appropriate Evidence/ Reaches Reasoned Conclusions|Overall Writing Ability|Shows Improvement Over Time|Est. GPA|Actual GPA|
|---|---|---|---|---|---|---|---|---|
|Total Average| |2.7|2.4|2.4|2.4| | | |

Improvement scale: 3 = Yes, 2 = No, 1 = Some.
Assessment Activities in 1998-99
This report summarizes the Department's ongoing process of gauging undergraduate degree program performance. During the 1998-99 period, the Department of Political Science examined departmental successes and failures in three ways: by considering the historic development of its honors program, by reassessing the careers of graduating seniors, and by taking stock of recent surveys of graduates. Based on these assessments, the Undergraduate Curriculum Committee (UCC) proposes a series of changes, both in the outcomes assessment process and in the ways we teach our undergraduate curriculum.
In 1999, the Department of Political Science graduated 18 students with honors degrees (Cum Laude, Magna Cum Laude, Summa Cum Laude). This total represents the greatest number of students to graduate with honors in Political Science in a single year and represents a 50% increase over the number of 1997-98 honors graduates.
Looking at the broader historical picture, the number of students graduating with Honors in Political Science has increased significantly during the past decade (Figure 1). The number of honors graduates doubled immediately following the initiation of the outcomes assessment process in 1989 and then doubled again over the past decade.
These increases are particularly impressive because they come despite a significant decline in Political Science majors and declining faculty FTEs. From 1986 to 1999, the number of honors graduates rose 260%. Over the same period the number of majors decreased by 35% (from a high of 873 primary majors in 1989), and the tenure-track faculty were cut from 28 to fewer than 18 on campus in 1998-99. The consequence of all of these trends is that the number of honors students per faculty member has grown dramatically.
This dramatic shift is important for more than the impressive increase in the number of students graduating with honors. The Honors experience reflects a degree of dedication unlike that associated with any single course. While embarking on the course toward Honors requires acceptable class performance in the first three years, the thesis research and writing require much more and offer a far more intense attack on a question of importance and intellectual interest. Students who engage themselves in the thesis finish the project believing it is the defining accomplishment of their college careers. Further, as will be discussed below, the honors examination places the student in a unique oral defense and debate that is unlike anything else in their undergraduate education. These signature characteristics of the honors experience have made this program an increasingly important part of the education of our best students. Happily, we are offering more students the opportunity, and more are accepting the challenge.
Portfolio Assessment, 1999
Rating the Students
Two tasks were completed during the 1999 assessment process. First, the Undergraduate Curriculum Committee reviewed the portfolios of a sample of graduating seniors. Second, the UCC continued gathering portfolio materials for continuing majors -- both adding a new cohort of freshman majors begun last year and continuing to collect information on new and continuing majors.
These portfolios contain a history of student performance as indicated by final examinations and written papers completed between the Fall 1996 and Spring 1999 semesters. Work contained in the portfolios is stripped of identifying information other than course number, assignment description, and the semester in which the work was completed. The professor's name and the grade assigned -- if originally indicated on the paper or essay examination -- were removed prior to UCC evaluation. Three class tests and papers collected over three years constituted a minimal portfolio for assessment. Most portfolios contained materials from six or seven classes; one included papers and tests from ten different political science classes.
Portfolios of a sample of 14 graduating seniors were examined. (Transfer students who had not completed at least 3 full years at CU were excluded from the population of graduating seniors under consideration.) This sample represented a full range of graduating seniors from students graduating with honors to those who were only minimally successful. Graduating GPAs varied from 2.19 to 3.88. The sample also included students who demonstrated consistent academic performance throughout their career as well as those who showed significant improvement or decline following their first full year at the University of Colorado. While for many students the freshman year performance is predictive of their later track record, some students improved as much as a whole letter grade in their last three years.
Each student's written work was evaluated twice on four characteristics.
Students were rated on achievement at the end of their fourth year of examinations and papers as well as on improvement over the four-year period. The categories of achievement rated were (1) Understands issues/Factual knowledge, (2) Able to evaluate conflicting arguments when asked to do so, (3) Uses appropriate evidence/Reaches reasoned conclusions, and (4) Overall writing ability. Achievement was rated on a scale running from "Excellent" to "Good," "Fair," and "Poor." "Not Applicable" judgments were made where insufficient evidence was available, most typically when examinations or paper assignments failed to require particular intellectual tasks such as comparative judgments. Raters were also asked to evaluate "Strengths/Weaknesses" in written work. Improvement proved to be a difficult characteristic to identify; raters were asked simply to indicate whether they perceived any improvement in student performance between their freshman/sophomore and junior/senior level coursework.
The aggregate picture established by the evaluation of the set of portfolios is presented in Table 1.
Table 1. Outcomes Assessment/Senior Portfolios, Spring 1999
The mean rating of student performance falls in the "Good" range for both understanding issues and having a firm grasp of the factual information necessary to make judgments about political issues. Similarly, the ability to make reasoned judgments is rated in the "Good" range. Lower scores ("Fair-Good") were given by faculty raters for the ability to make comparative judgments and for overall writing ability. The UCC believes that comparative judgments are among the hardest critical thinking skills to develop, but we also agreed that two additional factors make this a difficult object for assessment. First, many of our examinations and papers fail to ask students for such judgments. Second, students display a tendency to "duck" required judgments in answering examination questions: factual answers are much easier to compose, and many students hide behind them when called on to make difficult judgments.
Examination of grade histories suggests that career trajectories are by and large relatively flat. That is, most students are quite consistent during their college careers. "A" students are visible early in their college careers. Likewise, students who perform poorly as freshmen rarely show significant improvement. Nonetheless, there are exceptions to this rule. Some students are "late bloomers" who do better and better as they adjust to the college experience. Unfortunately, the opposite trend can also be found: a few students start strongly but perform less well over time. UCC raters found these general patterns held up in the sample of portfolios we examined. For the majority of students reviewed, two of the three raters did not discern significant change in their writing and critical thinking abilities, although clearly many significantly expanded their information base over the course of their college careers. Depending on the performance characteristic assessed, between 20% and 50% of the students were judged to show significant improvement over time.
Rating the Raters
All historical information on student grades was hidden from the members of the UCC until their assessments of the portfolios were completed. Afterward, classroom grade performance and portfolio assessments were correlated to see whether these evaluative schemes identified common performance. Two questions motivated this comparison. Do portfolio assessments differ from the classroom assessments of individual faculty who saw the students on a day-to-day basis? If they do, what does that tell us about the classroom assessment or the portfolio assessment? The ability to discriminate improving students from stable students offers some face validity for the portfolio process. If the portfolio assessment and GPA picture are not consistent, we need to ask whether this is a problem and, if so, whether it is a function of failures in the portfolio evaluation process.
There is good news and bad news in our evaluation of the portfolio assessment process. First the good news: the mean faculty ratings of student performance on factual knowledge and understanding of issues, critical thinking, use of evidence in argument, and overall writing ability correlated both with the raters' estimates of the students' GPA (.66) and, more importantly, with the students' actual GPA (.49). These characteristics matter for student success. Moreover, the best predictor of academic success is critical thinking ability, a finding that justifies our emphasis on critical thinking as a key departmental goal. (The correlation between faculty assessments of critical thinking ability and actual GPA is .79.)
The bad news: in assessing the performance of these 14 students, faculty raters often disagreed. There is little evidence of strong inter-rater reliability; mean bivariate correlations between faculty raters range from .31 to .42 overall, and mean correlations between faculty assessments of factual knowledge, critical thinking, use of evidence, and writing ability range from .18 to .44. Surprisingly, student understanding of issues and level of factual knowledge proved the most difficult items on which to find agreement among the raters. Further, for a number of students, estimating the quality of critical thinking proved impossible because either the assignments did not ask for such judgments or the student ignored the instructions to provide them. Additional evidence of the weakness of portfolio assessment is the near-zero correlation between raters' estimates of student grade point average: faculty raters simply disagreed over who was an A, B, or C student (r = .16).
More bad news: the second assessment made of each portfolio was whether the student appeared to make progress over his or her four-year CU undergraduate career. Committee judgments were compared with each other and with the students' GPA changes between their first year at CU and later academic performance. The results were somewhat frustrating. The committee members did not agree on which files showed evidence of progress over the four-year period: two raters identified a minority of the files as showing improvement, but in only one case did they agree, while the third rater judged improvement to be present in almost every instance. Furthermore, the committee assessments of improvement do not coincide with student GPA improvements or declines.
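The inter-rater reliability checks described above amount to computing pairwise correlations between raters' scores and averaging them. As a minimal sketch of that calculation in Python, using hypothetical ratings (the actual UCC scores are not reproduced here):

```python
from itertools import combinations
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratings (1 = Poor ... 4 = Excellent) assigned by three
# raters to the same seven students on one characteristic.
ratings = {
    "rater_A": [4, 3, 2, 3, 4, 2, 3],
    "rater_B": [3, 3, 2, 4, 3, 2, 4],
    "rater_C": [2, 4, 3, 3, 2, 3, 3],
}

# One correlation per pair of raters; their mean summarizes agreement.
pairwise = {
    (a, b): pearson_r(ratings[a], ratings[b])
    for a, b in combinations(ratings, 2)
}
mean_r = sum(pairwise.values()) / len(pairwise)
```

A low `mean_r` (as in the .18 to .44 range reported above) indicates that raters are not ranking the same students highly, which is what undermines confidence in the portfolio scores.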
Overall, while the portfolios paint a reasonable picture of student performance, reconsideration of the rating process suggests caution be used in any future outcomes assessment based on portfolios. Not only are assessments of the portfolios difficult, they may not provide reliable information. Further, other than pointing the committee toward the assessment of the final examination process next year, portfolio assessment serves as an ambiguous guide to policy recommendation.
Senior surveys conducted by the department are completed by over 90% of graduating seniors as part of graduation checkout with the department undergraduate advisor.
More extensive senior surveys are also conducted by CU's Office of Planning, Budget, and Analysis (PBA). These surveys are subject to self-selection bias: they are returned by only a small sample of graduated seniors. Nonetheless, the picture painted by these returns is consistent with both department exit interviews conducted as part of graduation checkout and impressionistic information drawn from personal experiences of the members of the UCC.
Table 2 reports selected items from the PBA senior surveys conducted in 1996 and 1998. Three basic areas of emphasis are covered. First, how well seniors report being prepared for life after college: respondents were asked both to rate what they learned in their college major (HAVE) and to assess how important those skills are for their chosen career paths (NEED). Second, former students were asked to indicate their satisfaction with their experiences at CU. Finally, each graduate was asked about changes they would recommend for the program from which they had just graduated.
Table 2. PBA Senior Survey Results, 1996-1998
These surveys highlight some weaknesses in the program as perceived by graduating seniors. In terms of achievements versus needs, respondents see themselves as less well trained in public speaking than they need to be. Similarly, they judge themselves less well trained in "leadership," an assessment that may be related to lack of experience in oral presentation. Students also clearly see themselves as weak in quantitative methods, but perceive this as less of a problem because they believe it is less relevant to their career choices.
Lower levels of satisfaction are expressed about advising, course availability, and faculty concern for students. These themes are expressed again in the "What Would I Change?" responses. The most dramatic change over time may be in perceptions about the availability of courses. Last year more than 50% of the graduating seniors commented on the problems they faced in getting courses to complete their major.
We choose not to make much of the changes over time in these surveys but one trend is less positive and warrants future consideration: student evaluations of instruction at all levels and the overall academic experience have declined slightly. Combined with greater frustration with the availability of courses, this "dissatisfaction" trend merits watching and may well reflect the difficulty the department has in staffing its full curriculum without adequate numbers of full time faculty.
1. Assessment of knowledge components of program
Past outcomes assessments have focused on the evaluation of critical thinking and on curriculum development. These activities have addressed published departmental goals for our graduating seniors: the ability to evaluate conflicting arguments, to assemble and present empirical evidence, and to reach reasoned conclusions from the evidence available. In conducting these assessments, the department has also focused on writing skills and has made an effort to ensure that GPTI participation as teaching faculty does not lead to a decline in student learning.
But past assessments have not considered other important department goals: the knowledge upon which critical thinking is based, and the oral ability to communicate both knowledge and analysis. The 1999-2000 catalog identifies many of the components of knowledge to be conveyed by our major curriculum requirements:
We recommend that the department establish biannual standardized testing of a scientifically drawn random sample of graduating seniors, conducted during the spring semester of the senior year. This test permits assessment of the range of concepts, theories, and facts acquired by students in their major field of study. A standardized test should also allow us to compare the performance of our majors to that of similar students at other universities, and its regular administration should allow us to track improvement in student performance. During the first year of this program, a sample of freshman majors should also be tested to provide a baseline for later follow-up.
2. Oral Skills Training
Early in this report we highlighted the substantial increase in the number of students winning honors through a program of intensive study, thesis writing, and oral examination. One of the "complaints" that honors orals examination committees often hear from students in response to the examination process is that the oral exam is the only opportunity they've had to speak and be questioned in such a rigorous fashion.
The PBA senior survey data also highlight the weaknesses our students feel in their preparation for the post-graduate world. The biggest deficiency students identify is "speaking publicly".
We propose that the department consider requiring oral public defense of written work as part of the critical thinking requirement. But we would go further and encourage that oral skills be highlighted in other classes as well. Systematic study should be made of the use of recitations in lower-division courses. Additional emphasis should be placed on the oral component of recitations in our introductory courses to "socialize" students to greater participation. Upper-division seminars should incorporate additional oral skills training via class presentations, small group interactions, and question-and-answer sessions that involve all students.
3. Restructuring Examinations
If we value "critical thinking" -- defined broadly as the ability to evaluate conflicting arguments, assemble evidence, and draw reasoned conclusions -- then we need to create this expectation among students. One way to do so is by requiring that such skills be shown on the examinations and other graded assignments that produce student grades. There are two components to this expectation: assignments must ask for critical thinking, and grading standards must enforce it.
Consideration of many of the examinations reviewed as part of the portfolio outcomes assessment process left the UCC with the sense that too often we test for basic descriptive facts without requiring theoretical integration of those facts, judgments about their value and meaning, or comparison of different interpretations of fact situations. We suspect but cannot demonstrate that students deliberately avoid these critical thinking challenges. Despite examination instructions, answers often do not reflect comparative judgments, theoretical evaluations, and use of appropriate evidence. We suspect further that this strategy has proven to be "rational" because the grading standard has not enforced the critical thinking requirement if facts are correct.
4. Prerequisites and Degree Program Progress
Despite the establishment of course sequencing, the standard numbering scheme does not seem to inform or constrain student enrollments in any orderly fashion. Some students proceed through the major by taking 3000- and 4000-level courses prior to completing introductory courses; senior political science majors can be found in PSCI 1101.
The UCC realizes that student progress is in part a function of inadequate instructional staffing in recent years. With six or seven empty faculty lines and a large number of faculty on leave, students have been forced to make progress by taking any available course. The UCC believes this situation cannot sustain a degree program over the long run, and faculty lines must be filled as soon as possible.
Assessment Activities through 1996-97
The Undergraduate Curriculum Committee (UCC) oversees undergraduate outcomes assessment in the department.
The UCC states that the critical core of university education in political science (or in any department) is "not learning facts; it is learning to think critically about statements that purport to be facts."
Over the years the UCC has used several methods to assess student learning, including external evaluations of seniors' work, student portfolios, and senior surveys.
Notably, the portfolios were useful for examining not just student achievement but also what students are asked to do, both in individual courses and over their careers as Political Science majors. In contrast to earlier assessments, which used individual student exams or papers as the unit of analysis, the portfolio method focuses our attention on the whole pattern of instruction, the progression of each student's skills over his or her career at CU, and the overall quality of his or her work.
In 1995-96 the UCC expressed concern that departmental standards about what constitutes critical thinking are "not very clear." Its 1996-97 work was devoted to clarifying these standards.
Based on 1996-97 and past assessment results, the UCC has made three recommendations for further study and for departmental action:
1. The UCC should discuss and recommend to the department a "set of standards specifying our common understanding of the critical thinking skills that we seek to teach our majors."
2. The department should start early in teaching thinking skills.
3. The department should commit itself to practicing critical thinking skills in all its classes.
History of department activities
In arriving at the current system of outcomes assessment, the department systematically tested several assessment methods in the first few years of the project, finally settling on a plan that involves longitudinal study of students moving through the program and periodic evaluations of specific areas identified by the more general assessments.
Initial explorations: The first three years' assessment programs (1989-90 through 1991-92) involved external reviewers' evaluations of senior majors' work. In 1989-90, the reviewers saw a random sample (n=47) of answers to selected essay questions from final exams in American politics, comparative politics, international relations, and political theory. In 1990-91, 84 seniors answered essay questions written by department members in the four areas; each student selected one of the four questions. The 1991-92 assessment focused on the political science major--whether the curriculum taken as a whole seemed to be meeting the goals. Outside evaluators examined a wide range of seniors' work on exams in 4000-level courses, ranging from short answer to longer essay questions and spanning the range of our course offerings. The evaluators were also given information about the curriculum, the classes, and the examinations.
The 1989-90 evaluators were Dr. A. diZerega, Fellow, Institute of Governmental Studies, University of California at Berkeley, and Dr. Desmond S. King, London School of Economics and Political Science. The 1990-91 and 1991-92 evaluators were from institutions and political science departments more comparable to CU-Boulder. They included (1990-91) Drs. Bettina Brickell, Indiana University; Gregg Kvistad, University of Denver; Anita Mercier, Columbia University; Brenda Horrigan, University of Denver; and (1991-92) Drs. David Olson, University of Washington; Cal Clark, Auburn University; James Ray, Florida State University; and Richard Dagger, University of Arizona.
In all three years, the evaluators reported that the students did well with factual material and key concepts but needed more practice and training in analytic methods and thinking. These suggestions, and others such as a need for introductory comparative politics and political theory courses, had also been mentioned in the department's most recent program review. They were addressed in ongoing curriculum revision plans.
The 1990-91 assessment program also experimented with a nationally standardized knowledge test in political science. Seniors could take either the essay exam mentioned above or the nationally standardized Major Field Achievement Test (MFAT) in political science; 124 chose the MFAT and 84 took the essay exam. The average scores on the MFAT were close to or above the national average, overall and for each of the test's subareas. The undergraduate curriculum committee (UCC) noted, however, that the institutions making up the MFAT norm group were not necessarily a good comparison group for CU-Boulder, and that the MFAT did not evaluate the very area, analytical skills, in which the other assessment methods revealed problems.
Graduate-student part-time instructors: 1992-93
Past faculty course questionnaires (FCQs) and a 1991-92 senior exit survey administered by Student Affairs Research Services suggested that GPTI courses might be a possible weak link in the curriculum. The 1992-93 outcomes assessment examined these courses. Members of the UCC evaluated final examinations and papers from almost all political science majors enrolled in the four GPTI-taught courses in fall 1992. One was an upper-division course and three were introductory level courses. Each examination/paper was scored on a 1-7 (1=worst, 7=best) scale for the relevant departmental goals. The scores indicated that the students in GPTI-taught courses did reasonably well, with ratings above the midpoint of the 7-point scale at the appropriate course level. As in prior years' evaluations, the students did best on factual knowledge and basic concepts and worst on analytic skills.
Actions already planned by the department to address majors' analytical skills include a one-hour requirement on statistical methodology and quantitative reasoning. In addition, GPTIs are asked to enroll in the Graduate Teacher Program workshops and the department has begun intra-departmental workshops and a departmental colloquium on teaching in which graduate students have actively participated.
Longitudinal portfolio evaluation
First study: Beginning majors, 1993-94
The UCC evaluated a sample of approximately 50 final exam essays and papers written by beginning majors in spring 1994 lower division political science classes. Analytical reasoning and writing skills were rated on four-point scales, where 4=excellent and 1=poor. At least one section of each course required for the major was included. Each essay or paper was evaluated by two members of the committee; the two scores for an essay were seldom more than one point different.
The students' analytic skills were about what would be expected for students at this stage of their college education. Roughly 15% of the essays evaluated were excellent. At the other extreme, just over 10% were rambling and generally incoherent. Most of the rest were distributed evenly around the mean of 2.6, i.e., fair to good.
Writing skills were, on the whole, better than the UCC expected at this level: 20% of the essays were rated excellent, 35% good, and 20% fair; only 4% were rated poor. The major weakness seemed to be in organization and structure.
The UCC recommended that all lower division courses include some assignment emphasizing analytical reasoning, more teaching assistant exercises that give students feedback on their reasoning and writing performance, and active referral of students with serious writing problems to the University Writing Program or the Academic Skills Center. In addition, the committee recommended that the department specify more clearly the analytical reasoning skills it wants majors to develop.
First study: Graduating seniors, 1995-96
The UCC obtained a copy of each sample member's full university transcript, to examine the students' course-taking patterns relative to the program's curriculum goals. The committee also read the students' portfolios, noting the nature of the assignments--e.g., whether or not they required the student to evaluate competing arguments--and the student's performance. Two to four UCC members read each portfolio and independently rated the student's overall performance as excellent, good, fair, or deficient with respect to each curriculum goal. They also rated the student's progression from lower-division to senior year work in terms of factual knowledge, analytical skills, and writing skills.
The vast majority of students in the sample had patterns of coursework that seem reasonable in terms of the department's knowledge goals, with a concentration in one area (usually American politics--CU-Boulder's International Affairs major is an alternative for students whose primary interests are international or comparative politics), often in combination with a minor or concentration in a second field. The data do confirm that the adoption of course prerequisites several years ago had the desired effect. Students very rarely took an upper division course before the appropriate lower division introduction, and most finished the introductory classes as freshmen or sophomores.
Given the complexity of the rating task, there was surprisingly close agreement among raters. 35% of the students were rated good to excellent on most goals, and another 29% were rated fair to good on most goals. In virtually every case, exams and papers from upper division courses showed greater scope and sophistication of factual knowledge than those from lower division courses. There was general consensus that analytical skills also improved in at least 70% of the sample. Evaluators' comments indicate a progression from weak analytic skills (e.g., simply repeating back text material) to moderate or mixed performance on the more complex questions found on upper division exams. Progress on writing was more varied, and students with extremely good or extremely poor writing ability at the start often did not change much. Consistent with previous assessments, students generally do better on the knowledge goals than on analytical skills or writing.
On the whole, these majors made the kinds of progress we would expect over three years of college. There was also substantial individual continuity--those who did well as sophomores were almost certain to do well as seniors, while those who were just scraping by as sophomores were usually not doing much more than that as seniors.
Close examination of the results suggests that the department could benefit from a dialogue about what the critical thinking (CT) courses are supposed to accomplish. It is clear, for example, that the particular skills learned depend more on the particular CT course taken than the evaluators would have liked, and that faculty differ in their interpretations of what critical thinking is. The evaluators do not suggest a single departmental definition, but rather more discussion among the faculty teaching CT courses, and those evaluating student outcomes, about objectives, teaching techniques, and how to evaluate student critical thinking performance. The outcomes assessment, too, could benefit from more discussion among the judges and more precise specification of criteria.
The assessment also suggested that all faculty, not only those teaching CT courses, should be encouraged to give more attention to and practice in evaluating competing positions.
From our experience with the portfolio pilot project, the UCC suggests that we start a new sample of beginning majors for whom we collect all exams and papers in both fall and spring terms. This will help make the portfolios easier to compare with each other. With more complete portfolios, it will be too difficult and time-consuming to evaluate students on all goals at once, so the faculty should set priorities for outcomes assessment focus in various years. For each year's selected goals, the UCC should specify more explicit criteria for the ratings.
1996-97 portfolio assessments
During the 1996-97 outcomes assessment process, portfolios from twelve graduating seniors representative of the graduating class were reviewed. Each folder was evaluated anonymously by two of the four faculty reviewers. Each student's work was evaluated in terms of critical thinking skills as well as on criteria drawn from the department's goals statement. Each dimension was rated on a four point scale.
The results reveal a range of critical thinking, knowledge, and skill abilities, with four of the twelve receiving good-excellent overall ratings, five a good or fair-good rating, and three receiving unsatisfactory ratings in some categories.
The UCC also used the portfolio evaluation process to focus on ways in which critical thinking is demonstrated in student work, and on types of student assignments that require more or less critical thinking on the student's part.
The outcomes provide more than a means of tracking average student achievement. They allow the department, through the UCC, to plan specific means of improving basic skills and developing more sophisticated analytical abilities among its students. In particular, the process has served in the planning of specific changes to emphasize the vital importance of critical thinking skills throughout the program.
Analytic thinking in senior-level critical thinking courses: 1994-95
The 1994-95 assessment focused on students' analytic thinking, using work from six senior-level critical thinking courses. The courses selected represent all of the undergraduate subfields except International Relations. A random sample of 35 PSCI majors was selected from students enrolled in these courses and each student's final paper or exam was anonymously reviewed by four members of the UCC. (Not every essay was read by all four--faculty were not assigned essays from their own classes). Each essay was rated separately on the student's ability to a) evaluate conflicting arguments, b) use empirical evidence, and c) reach reasoned conclusions. A 4-point scale was used, where 1=unsatisfactory and 4=excellent. Each evaluator also wrote a short general assessment of the strengths and weaknesses of the essays as a whole and his/her suggestions for improving the outcomes assessment procedures.
Both the quantitative and qualitative evaluations pointed to "evaluation of competing arguments" as the students' weakest area. The average rating was only 2.1, or "fair," and the ratings clustered in the "weak" (1.5) to "fair-to-good" (2.5) range. The evaluators felt that the students did somewhat better at presenting evidence for their arguments (average rating 2.5, or "fair-to-good") and better still at drawing reasoned conclusions (average rating between 2.5 and 3). Even the students who did poorly on this criterion seemed aware that they needed to explain and justify their positions, although they may not have done it well.
Senior and alumni surveys: 1994-95
The 1994-95 assessment process also included an anonymous exit survey in which graduating seniors were asked to rate their undergraduate major in terms of the department's curriculum goals. The survey is part of graduation checkout and response rate is normally 90% or more. Two-thirds or more of the ratings were "excellent" or "good." The highest proportion of "excellent" responses was for knowledge of American politics; the largest number of "fair" and "poor" responses was for knowledge of other political systems, which may reflect the fact that most students with strong interests in this area choose the International Affairs major rather than Political Science.
A survey of alumni was conducted as part of a formal program review in 1994-95. Respondents rated oral and written communication as the most important aspect of the major, followed by analytical reasoning skills and knowledge of and ability to evaluate policy issues. In terms of achievement, they rated their undergraduate training highest in oral and written communication and in knowledge of American politics.
Last revision 05/05/03
© 2001, The Regents of the University of Colorado