
What We've Learned About Doing Outcomes Assessment


It takes a long time to develop an outcomes assessment process

It takes a long time to develop an outcomes assessment process at an institution as large and diverse as CU-Boulder. Planning began in 1985-86. Initial CCHE and campus policies were developed over the next year and were in place by 1987-88. CU-Boulder units developed goal statements and initial assessment plans in 1987-88 and 1988-89. Data collection and use began in 1989-90. As the section What's Next indicates, the process continues to evolve.

It also takes time for outcomes assessment to be accepted as a normal part of university functioning. As was mentioned earlier in this report, the number of units taking the process seriously grew year by year. About 80% of the units have made serious efforts to monitor the assessment results and take account of them in curriculum discussions. Even within that 80%, the extent to which outcomes assessment has penetrated the units' internal processes and become a normal part of faculty members' activities varies considerably.

Differences between units must be acknowledged

The diversity of CU-Boulder units is sometimes a problem in an enterprise like this one, but it is more often a benefit.
  • Units are too different, and too independent, for common methods to be imposed on them. Letting units experiment with methods and discover what provides useful information takes time and requires unit-specific discussions, feedback, and consultation. The individual unit summaries show great variation in how units gather and use outcomes information.

  • Units make the best use of approaches that fit their own, and their discipline's, style of gathering and using information.

  • Units learn from each other. One unit's experiences with a method can help another unit decide to try it, or to avoid it.

Some central oversight is necessary

The process would not have grown as well as it has without a certain amount of monitoring, prodding, cajoling, and constructive feedback. However, the benefits of acknowledging units' differences and giving them control of their own outcomes assessment processes would be lost if there were too much central control. Keeping a balance requires constant attention.
  • It is helpful if the coordinators have, or have access to people with, skills in data gathering and interpretation. Units may want or need help with planning assessment methods, gathering information, and making effective use of it. This can be particularly true for units using surveys or student records data.

  • It is important to adapt assistance and feedback to each unit's own research styles and skills. Faculty in some units are very experienced in designing surveys and using survey data, for example; those in other units may not be, but may still find survey data useful. Some are quite skilled at holistic scoring of essays; others are not familiar with the technique. And so on. It is neither appropriate nor useful to treat units with different sets of skills the same way.

  • Deadlines are useful for making sure that things get done. So is reinforcing the annual reporting by making outcomes assessment part of the formal program review every seven years. Moving to a two-year reporting cycle in 1998 was clearly an error: even though units were directed to conduct assessments annually, some eliminated longstanding annual activities. In addition, the two-year cycle has been confusing for units and difficult to manage.

  • In a "natural experiment" resulting from changes in personnel, in which the PBA outcomes assessment coordinator role was essentially unfilled for several months, we established definitively that many units will comply with reporting requirements only if nagged frequently.

Things that aren't the responsibility of a single unit require special handling

Assessing general education outcomes is an example. The approach to developing students' critical thinking skills, for instance, varies from unit to unit. What a unit needs to know in order to evaluate its contribution to general education, and how it can use that information to improve its process, depends at least partly on the teaching style of its faculty and the conventions of its discipline. Allowing units to control their own outcomes assessment processes makes it difficult to have a coordinated approach to something like general education that cuts across unit boundaries--but it makes it more likely that units will be able to use the information they obtain to improve their undergraduate education.

Not all assessment is reported as such

Units may engage in extensive assessment activities, and use the results for curricular change, but not report these activities under the heading "assessment." For example, our College of Engineering has reported little of its work for ABET reaccreditation.

CU-Boulder's approach is not the only one

The literature on undergraduate outcomes assessment describes institutional programs of many types and with a variety of focuses. Our choices have been affected by CCHE's reporting requirements and by our early and fundamental decision that units should have primary control over their own outcomes assessment processes.

