
University Writing Program
Assessment Activities 2001 and Before

The most recent CU-Boulder catalog states that students in the College of Arts and Sciences must take at least one lower-division and one upper-division course approved by the College as emphasizing fundamental writing skills and analytic and persuasive writing. College of Business and College of Engineering students must complete at least one upper-division writing course. Many students meet these requirements with courses in the University Writing Program (UWRP).

The University Writing Program has these knowledge goals for students:

  1. For beginning students, knowledge of a simple essay form incorporating occasion, thesis, counterthesis (when appropriate), and projected organization within the first paragraph, plus several paragraphs of development and a brief conclusion.
  2. For students in junior-level courses, knowledge of a more sophisticated analytic essay form that develops an interpretative thesis out of complex technical information, and knowledge of a more difficult argumentative form that not only qualifies a problematic thesis and coordinates the parts of a complicated proof, but also refutes the arguments of an opponent.
  3. For students working towards an "emphasis in writing," knowledge of the major prose stylists in English, and experience in using a variety of styles.
 

Skills: In addition, students participating in the program are expected to acquire the following skills:

  1. Ability to define a manageable topic and a provable, original thesis within given limits of time, research materials, and the writer's own knowledge.
  2. Ability to shape an essay: to impose a clear, coherent form on a mass of facts, impressions, and ideas. In particular, ability to argue from, rather than toward, the thesis.
  3. Ability to understand what proofs a given thesis requires. In particular, ability to discriminate between description and analysis, between repetition and development, and between relevant evidence and irrelevant detail.
  4. Ability to arrange proofs in a logical sequence with clear transitions.
  5. Ability to shape a clear, justifiable, and provocative conclusion.
  6. For students in junior-level courses, ability to tailor written materials for oral presentation, and ability to speak clearly and convincingly before an audience.
  7. For students working towards an "emphasis in writing," ability to vary tone and vocabulary to suit different audiences, and to use emotional as well as rational persuasion.
  8. Ability to accept and profit from criticism, of substance and logic as well as style and mechanics, in revising preliminary drafts into finished work.
  9. Ability to offer useful criticisms to other writers.
 

UWRP Assessment, 1989-1995

Historically, the UWRP has assessed students' ability to achieve knowledge and skills goals that focus on understanding essay form, defining a thesis, and shaping a coherent, thoughtful, logical, and clearly written essay. From 1989 through 1995, the UWRP used the same general outcomes assessment procedure, modifying it from time to time to fit changes in the program's primary courses. UWRP instructors and outside experts rated a sample of essays on a four-point scale. The assessment focused on the required lower-division courses from 1989 to 1992, when the upper-division requirement replaced the lower-division one; accordingly, the focus shifted to upper-division courses from 1992 to 1994. Results across these years were consistent, with a majority of students showing improvement on at least one essay feature. Upper-division students appeared to critique essays, and to focus and revise their own work, better than lower-division students did.

In 1994-1995, the freshman-level writing requirement was re-established in addition to the upper-division requirement. In response, the UWRP pilot-tested a redesigned assessment process for its lower-division courses. The new process examined students' understanding of the skills being taught by asking them to critique a representative student paper. Instructors' critiques of this paper were used to develop scoring scales for students' ability to recognize the paper's strengths and weaknesses and to suggest improvements. Most students recognized that the sample paper was severely flawed and articulated some of its problems clearly.

UWRP Assessment, 1995-1996

In 1995-96, an administrative request to devise an exemption exam prompted the UWRP to change its upper-division procedure to evaluate in-class pre-test/post-test writing along with a paper revised through class workshop during the semester. Scores improved clearly from pre-test to post-test, providing evidence that, although Writing Program courses focus on revision over several weeks, students' skills transfer positively to in-class writing. Further, papers written during the semester through class workshop and revision were, on average, better by 3/4 of a point on the four-point scale than essays written in class at the beginning of the semester. The results showed a movement from scores that represent surface-level development to scores that represent idea development through critical thinking.

We also compared the pre-test essays that earned exemptions with a random sample of post-test essays from students who did take the course. While the exempted students did perform better at the beginning of the semester than the randomly chosen students performed at semester's end, the differences between the two groups' scores were small. It seems likely, then, that a majority of the exempted students would have improved their writing skills had they been required to take the class.
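
For readers who want the arithmetic behind these comparisons, the sketch below (in Python) shows how the pre-test/post-test gain and the exemption comparison would be computed. The score lists are hypothetical stand-ins, not the program's actual ratings, and for this sketch we assume higher ratings mean stronger essays.

    # A minimal sketch of the two comparisons described above. All scores are
    # hypothetical values on the program's four-point scale.

    def mean(scores):
        return sum(scores) / len(scores)

    # Pre-test and post-test ratings for the same (hypothetical) students.
    pre_test = [1.5, 2.0, 2.0, 2.5, 1.5, 2.0]
    post_test = [2.25, 2.75, 2.75, 3.25, 2.25, 2.75]

    gain = mean(post_test) - mean(pre_test)
    print(f"Average pre-to-post gain: {gain:.2f} points")  # report found ~3/4 point

    # Exempted students' pre-test essays vs. a random sample of post-test
    # essays from students who took the course.
    exempted_pre = [3.25, 3.5, 3.25, 3.5]
    course_post = [3.0, 3.25, 3.0, 3.25]
    gap = mean(exempted_pre) - mean(course_post)
    print(f"Exempted-vs-course-taker gap: {gap:.2f} points")  # small, per the report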

The UWRP also continued to assess students in the freshman-level course, using a procedure similar to the one piloted in 1994-95. Results of this assessment were hard to analyze, in part because there was no basis for comparison and in part because neither the assignment nor the scoring rubric worked as well as anticipated. The assignment required students to deal with layers of reading material, and many of them seemed to have difficulty marshaling these different layers. This problem, in itself, showed that we needed to emphasize critical reading strategies as we continued to refine the lower-division course design.

UWRP Assessment, 1996-1997

The 1996-97 assessment again analyzed both junior- and freshman-level students' work. The junior-level assessment asked students to write, at the beginning of the semester, a take-home essay in response to a prompt. Students were encouraged to revise this essay often. A sample of those essays was compared to essays by the same students written later in the semester and revised through the class workshop. As in previous years, the "before and after" essays were scored by both internal and external reviewers. On average, students' scores improved by almost 1/2 point on the four-point scale. Even though this increase was less than the 3/4 point reported for 1995-96, it was perhaps more impressive because the starting scores were markedly higher. Perhaps more telling than statistical demonstrations of improvement were the following comments from two outside scorers:

The revised essays "demonstrate a higher level of topic interest and thinking [than the beginning-of-semester papers]...Whatever changes you've made in the program seem to be generating a far more interesting caliber of essay."

"I was struck by how much stronger the papers in the second set were, and by how many of the responses [to the beginning-of-semester prompt] relied ... upon description more than analysis..."

The freshman-level courses were evaluated in two different formats during 1996-97 to address concerns about the process used in the previous two years. One format asked students to write a critique of a student paper (as had the previous assessment), but the critiques were scored (by an in-house reader) according to a holistic rubric. A second format asked students to write, during the final exam period, a response to a prompt based on a legal case study. Students assessed under this format had written papers from similar prompts during the semester. These essays were scored by an in-house reader according to a rubric based on the one used for upper-division assessment.

Comparing the results of these two formats suggested that lower-division students have trouble coping with the organizational demands of a written critique because they have not practiced this form. The case-study assignment seemed to pose the same challenge of grappling with analysis, but in a format more familiar to students from the course. Future assessment should therefore continue to examine the pedagogical advantages of this approach.

 

UWRP Assessment, 1997-1999

During academic years 1997/98 and 1998/99, the UWRP outcomes assessment strategy again changed dramatically in response to findings in the field of composition studies. The UWRP assessed two upper-division courses during this reporting period: UWRP 3020, which meets the Arts and Sciences core requirement and for which the Writing Program offers an average of 50 classes per semester; and UWRP 3030, which meets the College of Engineering core requirement (and the A&S core requirement for several "hard" science majors) and for which the Writing Program offers approximately nine classes per semester.

Having used a pre-test/post-test format for the last several years, the UWRP has demonstrated that its courses teach students to improve a single paper through the revision process. But this assessment strategy does not assess students' work throughout the semester, or their work on the different kinds of writing the UWRP requires upper-division students to complete. Consequently, this latest assessment collected final copies of the major papers that 233 students chosen at random completed for their UWRP 3020 classes and that 13 students chosen at random completed for their UWRP 3030 courses. (The Writing Program assessed UWRP 3030 students for the first time this year, and so used a small sample size as a pilot.)

UWRP 3020 Assessment

While teachers in the UWRP do not use a portfolio assessment method, the goal of this assessment technique was to collect, essentially, a portfolio of polished work from each of the students chosen for the sample. Unlike previous years' assessments, which specifically compared an early draft with a final draft of a single paper, this year's UWRP 3020 assessment does not focus on improvement over the course of a paper or the semester. Instead, because students in this course typically work on different kinds of papers over the course of a semester, beginning with summary and moving on to analysis and argument, the assessment aims to demonstrate students' competence in each of the major kinds of writing they compose for the course.

Each "portfolio" was scored by an external or internal scorer (other than the student's instructor) in each of 15 categories, according to a rubric developed in response to an upper-division outcomes statement developed by UWRP instructors during the Spring 1999 semester. The UWRP's knowledge and skills goals are embedded within this outcomes statement, which follows:

OUTCOMES STATEMENT FOR UPPER DIVISION WRITING COURSES

RHETORICAL CONSIDERATIONS

Students should be able to

  • Demonstrate an understanding of analysis and argument in light of the rhetorical situation provided by purpose and audience
  • Formulate an issue and determine the appropriate mode through which to address it
  • Craft an analytical or argumentative thesis and evidence appropriate for its defense
  • Represent countervoices fully and fairly as appropriate
  • Answer counterarguments directly
  • Use a range of strategies to address various audiences in different rhetorical situations
  • Use appropriate tone, style, diction
  • Provide sufficient guidance to a reader
  • Transfer strategies from one rhetorical situation to another

DEVELOPMENT

Students should be able to

  • Structure a paper to support a thesis
  • Understand the difference between supportable and unsupportable opinion
  • Make and defend inferences about information
  • Organize logically
  • Develop ideas rather than merely assert them
  • Choose clear, precise words and constructions
  • Make and clarify sophisticated relationships among ideas
  • Establish a personal voice appropriate to the paper's context
  • Submit a final product that is reasonably free of errors in grammar, punctuation, and proofreading.

PROCESS

Students should

  • Understand that writing is an ongoing process informed by critical dialogue
  • Develop skill to critique peers' papers
  • See the critical analysis of others' work as relevant to their own
  • Emulate strong models

-----------------------------------------------------------------------------------------

The rubric asked scorers to mark students' performance in each category on a scale from 1 (excellent) to 4 (poor). Categories were:

  • Addresses an issue through an appropriate mode (analysis or argument)
  • Chooses a specific genre (essay, report, memo, etc.) appropriate to purpose
  • Crafts an analytical or argumentative thesis
  • Structures paper to support thesis
  • Makes and defends inferences
  • Develops ideas rather than merely asserts them
  • Represents countervoices fully and fairly, as appropriate
  • Directly answers counterarguments
  • Organizes logically
  • Uses clear, precise words and constructions
  • Makes and clarifies sophisticated relationships among ideas
  • Uses appropriate tone, style, and diction
  • Provides sufficient guidance to the reader
  • Establishes a personal voice appropriate to the paper's context
  • Is reasonably free of errors in grammar, punctuation, and proofreading.

Scores were quite positive, with an overall average student score of 2.33. Indeed, students scored better than the scale midpoint (2.5) in two-thirds of the categories: mode, genre, thesis, structure, organization, precision, tone/style, reader guidance, voice, and grammar. Several scorers remarked that the papers were stronger than in previous years.
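
To make the scoring mechanics concrete, here is a minimal sketch (Python; the category subset and scores are hypothetical, not the actual 1997-99 data) of how per-category and overall averages on the 1 (excellent) to 4 (poor) scale are rolled up and compared to the scale midpoint:

    # Sketch of rolling up rubric scores. Each paper receives a 1 (excellent)
    # to 4 (poor) mark in each category; averages below the 2.5 midpoint are
    # better than the midpoint. All scores here are hypothetical.

    rubric_scores = {
        "mode":          [2, 2, 3, 2, 2],
        "thesis":        [2, 3, 2, 2, 2],
        "countervoices": [4, 3, 3, 4, 3],  # weakest area in the actual report
        "grammar":       [2, 2, 2, 3, 2],
    }

    MIDPOINT = 2.5

    for category, scores in rubric_scores.items():
        avg = sum(scores) / len(scores)
        side = "better" if avg < MIDPOINT else "worse"
        print(f"{category:14s} {avg:.2f} ({side} than midpoint)")

    all_scores = [s for scores in rubric_scores.values() for s in scores]
    print(f"overall        {sum(all_scores) / len(all_scores):.2f}")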

Despite the strength of these scores, students' scores in more complex areas demand focused attention from UWRP teachers. Students scored well below the midpoint (an average of 3.22) in addressing countervoices/counterarguments. This score stems, in part, from scorers' different approaches to this category: some scorers marked it "NA" for papers they believed were not argumentative in nature, while others gave those same papers a poor score. The score is undoubtedly skewed, then, and the next assessment method will need to clarify how such papers are to be scored. Additionally, students generally are not introduced to argument until after they have worked on analytic papers, and so they have less time to develop skills in this area. In any case, both previous years' assessments and teacher comments identify this category as one with which students struggle. Scores in other critically important areas--defending inferences, idea development, and clarity of relationships among ideas--hover just on the weak side of the midpoint. While UWRP teachers and previous years' pre-test/post-test assessments attest that these scores represent student improvement over the course of the semester, they nevertheless demonstrate that UWRP faculty need to address these issues.

Another focus of this year's assessment was to evaluate the necessity of using external scorers. Many experts in composition theory argue that the most useful assessment a writing program can conduct is internal: one in which classroom teachers score papers and, more importantly, discuss their strengths and weaknesses and suggest methods for addressing problems. In light of this argument, about ten percent of the paper samples went to both an internal and an external scorer so that we could check for bias. While the scores on individual papers occasionally differed dramatically, no pattern demonstrated that external scorers' assessment was more or less stringent than that of internal scorers. The average score difference was about one-half point, but roughly equal numbers of papers were scored better and worse by outside scorers. Consequently, the Writing Program will consider a change to a method that relies more heavily on internal scorers. Current internal scorers assess papers as part of their service obligations, and, as the number of UWRP contractual instructors increases, we can establish a committee for this purpose. Assessment funds currently spent on outside scorers' stipends could then be put to better use inviting an expert compositionist to help the UWRP develop specific assessment strategies within this new approach.
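
The bias check itself is simple arithmetic. The sketch below (with hypothetical paired scores) computes the mean absolute difference between internal and external scorers, which the report found to be about one-half point, and the mean signed difference, which would expose any systematic stringency in either direction:

    # Sketch of the double-scoring check: roughly ten percent of portfolios
    # went to both an internal and an external scorer. Pairs are hypothetical.

    pairs = [  # (internal, external), each on the 1-4 scale
        (2.0, 2.5), (3.0, 2.5), (2.5, 2.0), (2.0, 3.0), (3.5, 3.0),
    ]

    n = len(pairs)
    mean_abs = sum(abs(i - e) for i, e in pairs) / n
    mean_signed = sum(e - i for i, e in pairs) / n

    print(f"Mean absolute difference: {mean_abs:.2f}")  # ~0.5 in the report
    # A signed mean near zero means neither group is systematically harsher,
    # which is what the report concluded.
    print(f"Mean signed difference (external - internal): {mean_signed:+.2f}")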

UWRP 3030 Assessment

For the first time since the UWRP began conducting outcomes assessment, UWRP 3030: Writing on Science and Society was included in the assessment study. Until this point, assessment had focused on courses developed for Arts and Sciences students--UWRP 3020 and UWRP 1150/1250--because these courses represent a large majority of our course offerings. This year, however, the College of Engineering's review for accreditation from the Accreditation Board for Engineering and Technology (ABET) coincided with our plan to begin assessing all of our courses. Consequently, this year's look at UWRP 3030 courses uses a small pilot study to provide an assessment baseline and to raise questions about how best to assess this course.

Specifically, the study was designed to review two student portfolios from each of the eight sections of UWRP 3030 offered. Student portfolios were not assembled by students; rather, they consisted of final drafts of major papers (defined as those papers having gone through class workshop) submitted by instructors. The sample represents a bit more than 10 percent of the UWRP 3030 population--not enough to support statistically reliable conclusions, but enough to suggest some general trends. Further, the small size allows a close look at individual students' work throughout the semester, which may prove helpful for future assessment of all UWRP courses.

Papers were scored according to the same rubric devised for scoring UWRP 3020 papers. Scorers were in-house instructors, both of whom have taught UWRP 3030 but neither of whom taught it during the semester evaluated. This rubric, also new to UWRP 3020 outcomes assessment this year, was derived from the outcomes statement of upper-division course goals (reproduced above) that UWRP faculty developed over the Spring 1999 semester. The rubric asked scorers to rate each paper on a scale of 1 (excellent) to 4 (poor) on each of 15 outcomes.

Scores on each of the 15 outcomes reveal that students' strengths--with average scores of 2.2 or better--lie in the first two categories: choosing appropriate mode and genre. Weaknesses lie in representing countervoices (average score 3.37) and directly addressing counterarguments (average score 3.45). These results come as no surprise. From the beginning of the semester, UWRP courses stress the definition of analysis and argument and ask students to consider the relationship between their purpose for any given writing task and the genre and audience that purpose demands. Assignments often guide students in these areas as well. In short, students have this consideration in front of them throughout the semester. Generally, however, the course moves from analytical assignments toward argumentative ones; students thus face the challenge of directly attending to counterarguments later in the course and have less time (and, perhaps, less reason, depending on the nature of assignments) to use these argumentative strategies. In fact, scores in these two categories may be influenced by assignments: because the rubric gave them no specific instructions for these categories, scorers sometimes marked a "4" for papers that required no direct counterargument rather than leaving the category blank.

Interestingly enough, a comparison of scores on first papers versus later papers reveals a seemingly contradictory trend. For the very outcomes on which students perform best overall--mode and genre--scores show a slight decline over time rather than improvement. A review of instructors' assignments shows a trend toward increasingly difficult tasks as well as toward making students responsible for formulating their own writing tasks. Consequently, mode and genre are often prescribed at the beginning of the course but left up to the student to determine for the final project. For example, instructors in several sections assign first and second papers that direct students clearly toward an essay evaluating a technical article, or toward a response to an editorial in the form of a letter to the editor. Such assignments clearly indicate mode and genre for the student. On the other hand, students are asked, without exception, to choose their own topics and approaches for individual or group final projects. As students juggle the many outcomes expected of them by the end of the course, it is natural that decisions that early assignments had "filled in" for them are handled less confidently without the help of an assignment sheet.

On the other hand, scores for students' ability to represent countervoices and to address counterarguments directly show improvement over the series of papers, despite these categories' low overall scores. Again, course design is probably largely responsible for this trend. As the semester progresses, students are asked to write more and more sophisticated kinds of papers and are taught how to incorporate argumentative strategies. Improved scores in these outcome categories are a positive sign that students do develop strategies for building fuller analytical and argumentative writing projects.
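
A sketch of the first-paper versus last-paper comparison underlying these two trends follows (hypothetical data; recall that lower scores are better on this rubric):

    # Sketch of the first-vs-last paper trend check: average each outcome
    # category separately over first and final papers, then compare. Negative
    # change means improvement, since 1 is excellent and 4 is poor.

    first_papers = {"mode": [2.0, 2.0, 2.5], "countervoices": [3.5, 4.0, 3.5]}
    last_papers  = {"mode": [2.5, 2.5, 2.5], "countervoices": [3.0, 3.0, 3.5]}

    for category in first_papers:
        first = sum(first_papers[category]) / len(first_papers[category])
        last = sum(last_papers[category]) / len(last_papers[category])
        trend = "improved" if last < first else "declined"
        print(f"{category:14s} first {first:.2f} -> last {last:.2f} ({trend})")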

Other outcome categories that demonstrate improvement over the semester include use of clear and precise words and constructions; use of appropriate tone, style, and diction; use of appropriate personal voice; and surface-level correctness in grammar, punctuation, and proofreading. Improvement in these categories indicates that class experiences requiring a variety of writing tasks and significant revision have a positive effect on students' work; these sorts of outcomes are widely accepted to be learned in the context of realistic writing tasks and to improve with experience.

While students' average scores provide only verification of very general and expected information, individual students' scores sometimes did show marked improvement. As would be expected from a random population sample, these students were scattered throughout the sections represented. One exception was that neither student in the section requiring four assignments demonstrated improvement in more than a couple of categories. In fact, their scores more often showed decline than those of most of the other students in the sample. Certainly, it's possible--perhaps even likely--that this result is merely an anomaly. It's also possible that doing four papers rather than three somehow affects students' scores. But a comparison of the assignments in this section with assignments in other sections reveals another possibility. In this section, the first three assignments gave students very definite parameters. The first asked students to "argue whether the person in [a particular] ethics case is guilty or not;" the second asked them "to choose an editorial you disagree with and respond;" and the third asked them to "argue whether the Internet is a community or fosters community" as defined in a particular article. The fourth, however, asked students to "pick an ongoing debate and argue for one side." This assignment gives students the responsibility to choose and design their own work, and it gives this responsibility rather abruptly. It's possible that the decline in scores from first to last paper results from students floundering as they try to establish those parameters for themselves.

Other instructors' assignment progressions give students more freedom from the outset, or ask students to analyze not a position but a writer's strategies for making an argument. While these assignments may well define mode and genre, they more often seem to give students responsibility for choosing the essays and articles they respond to. Questions that the UWRP should consider, then, include: How many major assignments best allow students to benefit from the focus of the course? To what extent should assignments give writers well-defined parameters, and to what extent should they ask students to explore a topic and set up those parameters for themselves? Is it more helpful to spend focused workshop time analyzing a particular published writer's strategies in a piece, or to analyze a variety of strategies within the context of students' own writing?

The difficulty in analyzing this set of data also has implications for future assessments. The rubric--and the outcomes statement from which it is derived--certainly point to important considerations in evaluating student writing and reflect the concerns of the composition and rhetoric community. (A national first-year composition outcomes statement has been developed over the last few years and will be published in the fall issue of the Writing Program Administrators Journal.) No doubt the UWRP statement will be improved as UWRP faculty revisit it over the course of the next few semesters. But as the statement is revised to incorporate all the outcomes that good writing reflects, it will necessarily become more detailed and more unwieldy. The current scoring system, which asks scorers to score the component pieces of a paper rather than its overall success, will no longer be feasible. Such a statement would better lend itself to a holistic scoring system than to the current one. By holding scoring sessions staffed by its own UWRP 3030 instructors, the UWRP could assess student progress holistically without sacrificing attention to the various outcomes. Such a system would open a dialogue about those outcomes students reach and those they struggle with, and could in turn positively influence curriculum and course design.

UWRP 1150/1250 Assessment

Because most students are exempted from their lower-division writing requirement on the basis of standardized test scores, the UWRP did not assess these courses during the reporting cycle. However, the UWRP's external review, completed during Spring 1999, clearly recommended that all freshman students complete this requirement. To this end, the UWRP is collecting data--all class assignments and sample papers from all lower-division instructors--that will be discussed during the Spring 2000 semester. From this data, the UWRP hopes to develop a lower-division outcomes statement and recommendations for standardizing UWRP 1150/1250 curriculum goals.
