CCHE Quality Indicator System (QIS)
CU-Boulder Fall 1998 Submission
State indicator 6: Assessment and accountability practices at CU-Boulder
CU-Boulder pays serious attention to evidence about student completion
rates, learning, satisfaction, and success after graduation. We collect
information on these measures annually or biennially through several formal
assessment programs, and study the resulting time series for patterns of
slippage and improvement. Academic programs, student affairs units, and
the campus as a whole have as a consequence made changes in curriculum,
teaching, advising, processing students' business transactions, admissions
communications, and many other areas. We hope to increase emphasis on use
of assessment results in the coming years.
Entries A-F are CCHE-specified areas for assessment.
A. Student learning
Outcomes assessment examines the quality and effectiveness of academic
programs through examination of student learning. At CU-Boulder the formal
assessment of student learning, in place since 1989-90, helps faculty in
individual academic units
- evaluate their curricula
- plan improvements where necessary
- evaluate the effects of the changes.
CU-Boulder is known as a national leader in outcomes assessment, in part
due to our comprehensive assessment
web site. This site has information on history, methods, results, use
of results, and summaries
of activities for individual units.
In academic year 1987-88 academic units developed formal statements of
knowledge and skills goals for undergraduate majors. These are now published
in the catalog; to view them, see the individual unit listings there.
Each year, faculty in schools, colleges, and departments
- implement assessment plans reported in previous years
- modify their programs, goals, and assessment processes as necessary based
on formal assessments, student satisfaction data, retention and completion
data, placement rates, and other information
- report what they found and what they did, or planned to do, in response.
Units use different assessment methods. For example:
- Our writing program collects representative samples of students' initial
and final essays for evaluation by a panel of instructors and outside experts.
- Engineering, computer science, mathematics, and sociology compare their
majors' performance on nationally standardized exams with national norms.
- History annually selects 15% of the papers submitted by senior majors in
upper division courses for evaluation by three-member subcommittees of
the department's undergraduate studies committee.
- Journalism interns are rated by their supervisors, who are working professionals
in the field.
Units make changes in curricula and teaching based on assessment results.
- Chemistry/biochemistry added more experiments and hands-on experiences
to lower-division courses and increased the number of majors doing independent
study research projects by 60%.
- Theatre and dance revised the sequence of courses in theatre history and
dramatic literature and now emphasizes these areas in the senior seminar.
- Classics added more sight-reading exercises in introductory Latin and Greek
courses. As a result, translation grades in those courses rose noticeably.
- Mathematics added a required upper-division course in modern algebra.
- Sociology strengthened methods and statistics skills with a required
three-semester sequence.
All students awarded doctoral degrees complete comprehensive examinations
before admission to candidacy, write dissertations signed by no fewer than
two faculty members, and pass a public oral examination conducted by at
least five individuals including at least three faculty members and one
individual from outside the major department. These requirements ensure
that degree recipients meet graduate school goals--that is, demonstrate
proficiency in a broad subject of learning and an ability to critically
evaluate work in the field, and make a significant original contribution
to the advancement of knowledge.
Most students awarded master's degrees complete theses signed by two faculty
members. All pass a comprehensive examination given by three faculty members.
As above, these requirements ensure that student learning is monitored
by faculty on an ongoing basis campus-wide.
See the catalog
for more details.
B. Student persistence and completion of educational goals
CU-Boulder's enrollment management team and its subcommittees use information
on freshmen entering since 1980, and transfers and graduate-level students
entering since 1988, to monitor persistence and graduation rates. We compare
these rates over time and with other public research institutions with
similar students. Much of this information is posted to a web
site on retention and graduation rates.
A dip in undergraduate persistence rates in 1992 prompted significant campus
efforts to improve programs for freshmen and transfers in their first year,
increase financial aid, and improve programs for undergraduates in departments
and colleges. Persistence rates have increased since 1996 but nevertheless
remain the focus of serious campus attention.
In addition to campus-wide analyses, we monitor persistence of various
subpopulations in order to improve programs. Examples include engineering
students (the college of engineering has its own enrollment management
team), athletes (used in NCAA certification), arts and sciences students
granted admission provisionally, transfers,
ethnic minorities, women, students with lower or higher predicted grade
point averages, and students in particular majors.
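The cohort tracking described above amounts to computing, for each entering class and subpopulation, the fraction of students still enrolled the following year. A minimal sketch of that calculation is below; the record layout, field names, and all figures are illustrative assumptions, not actual CU-Boulder data.

```python
# Illustrative sketch of cohort persistence-rate monitoring.
# Field names ("cohort", "group", "returned_year2") and all data
# are hypothetical, not actual CU-Boulder records.

from collections import defaultdict

def persistence_rates(students):
    """Fraction of each (cohort, group) cell still enrolled one year after entry."""
    entered = defaultdict(int)   # students who entered, per cell
    returned = defaultdict(int)  # students back the second fall, per cell
    for s in students:
        key = (s["cohort"], s["group"])
        entered[key] += 1
        if s["returned_year2"]:
            returned[key] += 1
    return {key: returned[key] / entered[key] for key in entered}

# Hypothetical records: entry cohort, subpopulation, second-fall return flag
records = [
    {"cohort": 1992, "group": "engineering", "returned_year2": True},
    {"cohort": 1992, "group": "engineering", "returned_year2": False},
    {"cohort": 1996, "group": "engineering", "returned_year2": True},
    {"cohort": 1996, "group": "engineering", "returned_year2": True},
]

rates = persistence_rates(records)
print(rates[(1992, "engineering")])  # 0.5
print(rates[(1996, "engineering")])  # 1.0
```

In practice the same tabulation would be run over each subgroup of interest (entry year, college, ethnicity, gender, predicted GPA band) and compared over time and against peer institutions.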
C, D, E. Placement rates, transfer rates, and after-graduation performance
Placement rates (including success both in further formal education and
in employment) are monitored through surveys of graduating students and
alumni. Our alumni are too geographically dispersed, and their paths through
further education too varied, to allow efficient use of state employment
records for this purpose. Placement rates are more important to professional
programs than to liberal arts programs, which are preparing students for
life-long learning and a myriad of careers.
Campus-wide information on placement rates comes from biennial
surveys of seniors and of bachelors recipients four years out, both
conducted regularly for almost ten years, and from a new survey
of all alumni one year after graduation. The alumni surveys also tap
after-graduation performance issues such as how alumni judge their preparation
in various skill areas.
Program-specific information also comes from graduating students and alumni.
Programs undergoing academic program review (every 7 years) are offered
assistance in surveying recent alumni (graduates for the last 7 years)
with questionnaires customized for the program; many take advantage of
this service. Some programs, such as law,
regularly survey alumni on their own, and the graduate school surveys
graduating students.
After-graduation performance is also assessed through feedback
from employers via surveys and advisory boards.
CU-Boulder also queries non-returning students about their reasons for
leaving. While financial and personal/family reasons are high on the list,
we also collect some information on student dissatisfactions, to use in
improvement. We know that over 90% of non-returners transfer to other institutions.
F. Student satisfaction
CU-Boulder employs a regular cycle of student satisfaction surveys at
the campus, department, and course level. The results are used by the enrollment
management team, in the examination of student learning outcomes, by faculty,
by students themselves, and by units throughout the campus.
Our longest-standing student survey is the faculty
course questionnaire (FCQ), through which students rate their courses
and instructors every term, in every course. We use over 10 years of FCQ
data to report annually to departments on individual instructors' performance
over time. FCQ data are used by students to select courses, by faculty
for course improvement, and by administrators for salary, promotion, rehire,
and tenure decisions. The FCQ is a true accountability system, ensuring
that student satisfaction with courses and teaching is a critical factor
in institutional management.
Our senior surveys, conducted regularly since 1990, tap student satisfaction with
both academic and nonacademic services. We use a system of incentives and
followups that yields a response rate of over 60%. We sample so as to allow
each school and college, plus our 22 largest majors (covering 70% of all
seniors), to be characterized independently.
Schools, colleges, and major programs receive a customized report in which
their results are compared over time and to other units. Reports include
transcribed student comments as well as quantitative displays. We review
this information personally with most units. Sample actions taken as a result:
- Anthropology: Put more emphasis on student clubs and career advising
- Accounting: Review how student comments on curriculum have changed over time
- Microbiology: Continue efforts to put advising information on the web
- Communication: Results reinforce the effectiveness of changes made in the past
Service units (e.g., registration, counseling) receive similar information.
The vice chancellor for student affairs has used this information to focus
improvement efforts.
The graduate school initiated a survey of graduating students in 1997;
results will be used by the school and by individual programs.
Another campus-wide survey, introduced in 1997, uses the ACT student opinion
survey, thereby
allowing comparison of CU-Boulder results with those at a group of public
research universities nationwide. It too covers both academic and nonacademic
areas. This survey will also be repeated every two years. The 1997 response
rate was 56%.
Schools and colleges receive customized reports for discipline groups (e.g.,
social sciences) and by student class level. Our academic affairs division
is using these results to identify problem areas.
Service areas (e.g., career services) receive not only aggregate results
but transcribed comments and lists of students mentioning needed improvements
in a service, who have indicated willingness to be contacted about
their responses. Some service areas did phone interviews or focus groups
with these students to get more detailed information for improvement.
CU-Boulder also regularly surveys undergraduates of color about the campus
climate for diversity (next scheduled for fall '98), and sophomores
and seniors about academic advising (next scheduled for 1998-99 or 99-00).
Individual units such as residence halls, financial aid, registrar, counseling,
and academic departments also collect student satisfaction data via surveys,
focus groups, and advisory boards. These efforts allow a level of detail
not attainable at the campus-wide level, and are critical for improvement.
Entries G, H, and I are not on the CCHE-specified list of best practices,
but are equally important to assessment and accountability.
G. College, school, and department committees
All colleges, schools, and departments have faculty committees on undergraduate
education, graduate education, and/or curriculum. These committees meet
regularly to monitor course offerings and degree requirements and their
fit to goals for students. The committees then recommend adjustments in
curriculum, requirements, and/or teaching to ensure that degree recipients
meet knowledge and skill goals set by the unit. They also review the goals
themselves periodically for currency.
Recent examples of committee activity:
The graduate school initiated surveys of graduating students in 1997 on
topics including instruction, advising, the advisor, financial support,
teaching and research assistantships, opportunities to present research,
ethics, and job status.
Through custom items on student evaluations of courses, engineering monitors
the amount of work in design, writing, computing, and oral presentation
in every course taught in the college. This information is used in curriculum
planning.
The arts and sciences core curriculum committee surveyed graduating seniors
in fall 1997 about all core requirements, and examined extensive student records
data, to plan modifications of core requirements.
Based on examination of student portfolios, the political science committee
recommended a department-wide discussion of the goals, content, examinations,
and procedures used in their senior-level critical thinking classes.
H. Academic program review
All academic programs on campus are reviewed every seven years in a process
involving a self study, campus review, and external review. The process,
called simply "program
review," is designed to identify program strengths and weaknesses,
and results in recommendations for program development and modification.
In the review process, degree-granting programs are expected to examine
the completion and placement rates, after-graduation performance, satisfaction,
and learning of their students. They are also expected to examine their
own outcomes assessment methods and use of results, and to recommend changes.
Program review may result in significant changes in program direction,
such as development or discontinuation of degree programs or areas of emphasis,
changes in degree requirements, or major teaching initiatives.
I. Accreditation reviews
Every ten years the campus undergoes reaccreditation by the North Central
Association of Colleges and Schools (NCA). This involves a self study of
the entire campus and an extensive external review, both with emphasis
on how the campus assesses whether it is meeting its own goals. In addition,
many individual programs (e.g., engineering, athletics) are accredited
by national bodies governing programs in their areas.
Summary of state-requested indicators
L:\IR\CCHE\QIS98\CC8.HTM -- July 14, 1998
Written by Lou McClelland and Ephraim Schechter