
Department of Sociology
Last updated April 2003

Knowledge and skill goals for this undergraduate degree program are recorded in the most recent CU-Boulder catalog.

In some summaries of assessment activity, goals are referred to by number (e.g., K-2 is knowledge goal 2).

Assessment in 2002-03
Assessment in 2001-02
Assessment prior to 1998

Assessment in 2002-03

Department of Sociology Undergraduate Committee
Tom Mayer [Chair]
Jim Downton
Kris De Welde
Janet Jacobs
Michael Lynn
Fred Pampel
Len Pinto
Bob Regoli
Jules Wanderer

Report Summary

The Colorado state legislature requires that outcome evaluations be made for each undergraduate program at the University of Colorado. To satisfy this requirement, the Department of Sociology Undergraduate Committee constructed a forty item multiple choice examination addressing eight fields of sociology: sociological theory, institutions, population, social stratification, gender, social psychology, crime and social control, and methodology. The exam was taken by 39 senior sociology majors in two Critical Thinking in Sociology classes. The average number of correct answers was 26.8, or 67%. The highest score was 34 (85% correct), while the lowest score was 20 (50% correct). Students had the most difficulty with the methodology section of the exam, where the mean score was less than 50% correct. The Undergraduate Committee plans to revise this exam and use it again for evaluating educational outcomes in each of the next few years. Item analysis helps identify which questions should be retained and which require revision in future versions of the exam. Factor analysis and cluster analysis both identify three distinct domains of undergraduate sociology knowledge: (1) institutions, social psychology, and crime and social control; (2) population and social stratification; and (3) methodology. Because this is the first time the test has been given, it is difficult to assess the meaning of the results. The sociological knowledge of most senior sociology majors appears to be adequate, though hardly impressive.

1. Introduction

The Colorado legislature requires that each department at the University of Colorado evaluate the results of the undergraduate education that it provides. In the Department of Sociology the task of doing this fell upon the Undergraduate Committee. During the 2002-2003 academic year this Committee consisted of Kris De Welde (graduate student representative), Jim Downton, Janet Jacobs, Tom Mayer (chair), Fred Pampel, Len Pinto, Bob Regoli, Jules Wanderer, and Michael Lynn (undergraduate advisor).

The Undergraduate Committee considered various ways of evaluating undergraduate sociology education. The simplest procedure, the Committee agreed, was to assess the sociological knowledge possessed by students enrolled in Critical Thinking in Sociology classes (Sociology 4461), most of whom are senior sociology majors. Consequently all our efforts have focused upon this population. We take this opportunity to thank the instructors of Sociology 4461 during the 2002-03 academic year (Martha Gimenez, Mike Haffey, Eleanor Hubbard, Leslie Irvine, and Adele Platter) for their splendid cooperation.

During the fall semester of 2002 the Undergraduate Committee tried an approach to evaluating educational outcomes based upon comprehension of current sociological literature. Conceptually challenging but nontechnical texts were selected from recent sociological journals, yearbooks, and monographs. The texts chosen were all relatively brief (twenty to thirty pages) and respectively addressed the specific sociological topics studied in the three Critical Thinking classes. A Committee member went to each Critical Thinking class and asked students therein to read the appropriate text. One week later the Committee member returned and gave the students a brief essay examination evaluating their comprehension of the text they had been given. These exams were subsequently read and assessed by members of the Undergraduate Committee.

This approach had several serious problems. Because the texts we distributed were not really required reading and had no bearing upon the course grade, a considerable number of students read them extremely superficially or not at all. Moreover, the three texts we chose were not of equal difficulty nor of equal interest to the students in the Critical Thinking classes. Nor was it feasible for all Undergraduate Committee members to read all the exams, and it proved difficult to standardize evaluations of the essay answers. Although it might have been possible to correct some of these problems, upon further consideration the Committee decided that an exam based upon reading current sociological literature would be unwieldy to administer, unreliable to grade, and unlikely to generate the succinct results required by the state legislature.

In light of this experience, the Undergraduate Committee decided to reverse direction and use a standardized multiple choice exam for evaluating the sociological knowledge of senior sociology majors. Our initial intent was to use the sociology examination prepared by the Educational Testing Service (ETS) of Princeton, New Jersey. However, this examination is quite costly and requires approximately two hours to complete. Because it cannot be administered within an ordinary class session, inducing senior sociology majors to take the ETS exam would be difficult (as previous experience had demonstrated). Consequently we decided to construct our own multiple choice sociology examination, one that could be administered within a single class session and that reflected the curriculum offered at the University of Colorado. This report analyzes and interprets the results of the outcome evaluation exam that the Undergraduate Committee put together.

2. Constructing and Administering the Outcome Evaluation Examination

The Undergraduate Committee tried to make an exam that satisfied the following desiderata: (1) it should address the whole of undergraduate sociology; (2) taking it should require less than one class period; (3) all questions should be multiple choice and should have clear and unambiguous answers; (4) the correct answers should be neither obvious nor obscure; and (5) the questions on the exam should be evenly distributed over the various fields of undergraduate sociology.

To meet these desiderata, the Committee defined eight distinct areas of undergraduate sociology: theory, institutions, population, social stratification, gender, social psychology, crime and social control, and methodology. One Committee member took responsibility for each of these areas and prepared ten or more multiple choice questions concerning that substantive field. These were to be "mainstream" type questions, answers to which appear in any reputable textbook covering that field. The Committee also agreed that about forty questions would be the proper length, which implied approximately five questions for each of the eight designated sociological fields.

These sets of questions were submitted to Tom Mayer, the Committee chair, who selected five questions from each set. He also put the selected questions into a common format with four alternative answers, and edited them to achieve greater clarity and (what he hoped would be) reasonably uniform difficulty. This initial draft of the outcome evaluation examination was placed before the Undergraduate Committee at its meeting of Tuesday, February 25, 2003. The Committee collectively and carefully reviewed all forty questions on the initial draft, keeping in mind the desiderata listed above. Each Committee member, it was decided, should be able to answer all forty questions on the exam. Changes were required in about ten of the questions, and these changes were subsequently made by the chair.

This outcome evaluation exam was administered by the chair in two different Critical Thinking in Sociology classes on Thursday, March 13 (one at 9:30 - 10:45 am, the other at 11:00 am - 12:15 pm). The students easily completed the exams within thirty minutes (not including the time required for explanation), and they gave every indication of taking the test very seriously. Brief discussions conducted after the exam was completed supported its face validity: all students who spoke considered it to be a reasonable and somewhat challenging test of their sociological knowledge. We obtained 39 completed examinations from senior sociology majors, and each of these students answered all forty questions. The Undergraduate Committee had hoped to get fifty or more completed exams, but 39 are quite sufficient to allow meaningful analysis and interpretation.

Interpretation of the results is handicapped by our inability to pretest the exam or to give it to a suitable comparison group. Nevertheless the Department of Sociology, not to mention the Colorado state legislature, can extract valuable information from this outcome evaluation exam. Some of this information is presented below.

3. General Results

Without a suitable base of comparison, it is hard to know what constitutes a good performance on this examination. In speculations prior to giving the exam, the Undergraduate Committee hoped that more than half of the senior sociology majors taking it would get at least 70% of the answers correct. If this happened, the median score would be 28 (70% of 40) or higher. We did not quite meet this objective. The median score was 27 and the average or mean score was slightly lower at 26.8. Some relevant statistics plus a histogram are given in Table 1.

The distribution of exam scores is approximately normal and has two consecutive modes (27 and 28). 43.6% of the students taking the exam scored 28 or higher. The highest score was 34, or 85% correct, while the lowest score was 20, or 50% correct. Half of the student scores were concentrated in the interquartile range between 24 (60% correct) and 29 (72.5% correct). While the lowest score of 50% must be considered a poor performance, it is still a lot better than mere guessing, which would yield about 25% correct. That the very highest score produced only 85% correct answers suggests that the exam was more difficult than the Undergraduate Committee anticipated. This interpretation is also supported by analysis of test scores within the eight sociological areas represented on the exam.
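
For readers who wish to reproduce the summary statistics in Table 1, a minimal sketch follows. It assumes the 39 students' answers are available as a 39 x 40 matrix of 0/1 responses in a hypothetical file named item_responses.csv (columns T01 through T40); the file name and the use of Python with pandas are illustrative assumptions, not a record of the Committee's actual procedure.

    # Hypothetical sketch: Table 1 statistics from a 39 x 40 matrix of 0/1 item responses.
    import pandas as pd

    items = pd.read_csv("item_responses.csv")     # columns T01 ... T40, one row per student
    total = items.sum(axis=1)                     # total score out of 40

    print(total.mean())                           # reported as 26.8
    print(total.median())                         # reported as 27
    print(total.quantile([0.25, 0.75]).tolist())  # reported quartiles of 24 and 29
    print((total >= 28).mean())                   # share scoring 28 or higher, reported as 43.6%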

4. Fields of Sociology

What does the outcome evaluation exam reveal about the eight fields of undergraduate sociology addressed therein (i.e. theory, institutions, population, social stratification, gender, social psychology, crime and social control, and methodology)? Five questions addressed each of these sociological fields, and the basic information on how these questions were answered is presented in Table 2. It will be seen that the mean numbers of correct answers (out of a possible five) are: theory 3.7, institutions 3.2, population 3.2, social stratification 3.2, gender 4.5, social psychology 3.5, crime and social control 3.2, and methodology 2.4. Most of these means are fairly similar, indicating that between 63% and 74% of the questions were answered correctly.
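
As an illustration of how the field scores in Table 2 can be computed, the sketch below sums each field's five items. The item-to-field assignment shown (five consecutive items per field, in the order the fields are listed above) is inferred from the means in Tables 2 and 3 rather than stated explicitly in this report, so it should be read as an assumption; "items" is the hypothetical response matrix from the earlier sketch.

    # Continues the earlier sketch: "items" is the assumed 39 x 40 matrix of 0/1 responses.
    # The item-to-field mapping is inferred from Tables 2 and 3, not documented directly.
    import pandas as pd

    fields = {
        "THEORY":   [f"T{i:02d}" for i in range(1, 6)],
        "INSTITUT": [f"T{i:02d}" for i in range(6, 11)],
        "POPULATE": [f"T{i:02d}" for i in range(11, 16)],
        "STRAT":    [f"T{i:02d}" for i in range(16, 21)],
        "GENDER":   [f"T{i:02d}" for i in range(21, 26)],
        "SOCPSYCH": [f"T{i:02d}" for i in range(26, 31)],
        "CRIME":    [f"T{i:02d}" for i in range(31, 36)],
        "METHODS":  [f"T{i:02d}" for i in range(36, 41)],
    }
    field_scores = pd.DataFrame({name: items[cols].sum(axis=1) for name, cols in fields.items()})
    print(field_scores.mean().round(1))   # e.g. GENDER near 4.5, METHODS near 2.4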

Two areas, however, stand out: gender and methodology. Scores on the five gender questions are distinctly higher than scores in any other area, with 91% of all gender questions being answered correctly. Indeed, the lowest score on the gender questions was four out of five. Such an outcome could happen because sociology majors are more knowledgeable about gender than any other field. It could also happen because the gender questions were easier than the questions about other areas of sociology. The latter interpretation is supported by the fact that no student got less than 80% of the gender questions right. That gender scores fail to differentiate much between better and worse performances on the overall exam (using t-tests, chi-square tests, and discriminant analysis) also supports this interpretation.

Scores on the five methodology questions, on the other hand, are distinctly lower than scores for the other seven areas of undergraduate sociology. Only 48% of the methodology questions were answered correctly, and none of the 39 senior sociology majors answered all five questions correctly. This could happen either because students are less knowledgeable about methodology than about other sociological subjects, or because the methodology questions are relatively more difficult. Although the capacity of methodology scores to predict performance on the overall exam is hardly impressive, they do predict overall performance more accurately than do scores on the gender questions.

A correlation matrix for the eight sociological field variables is given in Table 2. None of the 28 relevant correlations in this matrix is extremely large, suggesting that comprehension of these eight subjects has some degree of independence. The highest correlation (.415) is between scores on the institutions questions and scores on the social psychology questions. The second highest correlation (.410) is between social psychology scores and scores on the crime and social control questions.

These issues were further investigated using a principal component factor analysis (see Table 2). This confirms that no single dimension of sociological knowledge can explain comprehension of all eight sub-fields. The largest factor (i.e. the principal component) explains slightly less than 25% of the total variance in the eight sub-field scores. The four largest factors together explain only slightly more than 70% of the total variance. Sociological knowledge is clearly multi-dimensional, and no fewer than four dimensions are needed to characterize the knowledge of undergraduate sociology majors.
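
A sketch of this principal component computation follows, working from the correlation matrix of the eight field scores so that the eigenvalues sum to eight, as in the variance-explained panel of Table 2. The use of numpy and the "field_scores" table from the earlier sketch are assumptions for illustration, not the software the Committee actually used.

    # Sketch of the principal component step (assumes "field_scores" from the earlier sketch).
    import numpy as np

    corr = np.corrcoef(field_scores.to_numpy(), rowvar=False)   # 8 x 8 correlation matrix
    eigvals = np.linalg.eigvalsh(corr)[::-1]                    # eigenvalues, largest first
    share = eigvals / eigvals.sum()
    print(share[0])          # principal component: just under 25% of the variance
    print(share[:4].sum())   # first four components: slightly over 70%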

How similar are the five questions used to measure knowledge about each of the eight fields of undergraduate sociology? To address this question a separate factor analysis was performed on each set of five questions. In none of these eight factor analyses was a single significant factor (i.e. a linear dimension) sufficient. For all eight fields of sociological knowledge two or three factors were required to explain that part of the variation deemed to be non-random. In all eight cases the two principal factors (i.e. those explaining the most variance) accounted for only about 50% of the variation in how the five questions were answered (i.e. correct or incorrect). While this result may not be a problem, it does suggest that each of the eight subfields embraces more than a single linear dimension of sociological knowledge.

5. Evaluating the Questions

How informative are the forty questions included in this outcome evaluation exam? Do they provide valid and reliable information about the sociological knowledge of undergraduate sociology majors? These questions are important for understanding what the test results mean, and also when revising the exam for future use. Members of the Undergraduate Committee hope that, with some modification, the exam can be used to evaluate educational outcomes at least over the next few years. Without further information these queries cannot be fully answered. Nevertheless we can make some practical assessments of the forty questions contained on the current version of the exam.

Two simple criteria can be applied to a test question. First, do answers to the question exhibit a reasonable degree of variation? That is, the answers should not be all or nearly all identical, whether all correct or all incorrect. Second, are correct answers to the question associated with good performance on the overall examination? Tables 3 and 4 provide information pertinent to these two questions.

Table 3 indicates how many of the 39 senior majors answered a particular question correctly (sum column) and, equivalently, what proportion of answers to the question were correct (mean column). Note that on one question, question 21 (T21), all 39 answers were correct. It is difficult to know exactly how much variation is needed to make a question useful. This being so, it behooves us to avoid stringent variability requirements. As a rule of thumb, questions on which 95% or more of the answers are correct (or incorrect) are not very helpful for outcome assessment purposes. Four questions on the exam fall into this category: questions 21, 22, 23, and 24, all of which address the subject of gender. These questions should be revised in future versions of the outcome evaluation exam. Questions 2, 4, and 16 should also be examined because more than 90% of the answers on them were correct.
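
A short sketch of this variability check, applied to the hypothetical item matrix introduced earlier, is given below; the 95% and 90% cutoffs are the rules of thumb described above.

    # Flag questions that nearly everyone answers the same way ("items" as in the earlier sketch).
    p_correct = items.mean()                                  # proportion correct per item (Table 3)
    too_easy = p_correct[p_correct >= 0.95].index.tolist()    # T21-T24 on this administration
    too_hard = p_correct[p_correct <= 0.05].index.tolist()    # none on this administration
    borderline = p_correct[(p_correct >= 0.90) & (p_correct < 0.95)].index.tolist()  # T02, T04, T16
    print(too_easy, too_hard, borderline)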

Every question was answered correctly by over 20% of the respondents, but questions 1, 9, 27, 38, and 40 stand out because the correct answer was not the modal response. Questions 1 and 9 are vindicated because Table 4 shows each of them to be associated with appreciably better performance on the overall exam. This, however, is not true of questions 27, 38, and 40, which should be examined carefully for possible revision.

Table 4 emerges from a discriminant analysis of the forty questions. It indicates how well each question differentiates between students who scored above the median (27) on the overall exam and students who scored at the median or below. Stronger differentiation is indicated by higher F values (equivalent to a t-test in this two category correct or incorrect case) and lower significance levels. Table 4 is particularly helpful for identifying questions strongly associated with better overall performance. A sensible identification criterion is an F statistic of 4 or higher, which is roughly equivalent to a significance level of .05 or less (recall that this is not a probability sample). Eight questions satisfy this criterion: questions 1, 9, 10, 13, 19, 20, 28, and 31. These questions should be retained in future versions of this examination.
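
The group-mean comparison behind Table 4 can be sketched as a one-way analysis of variance of each item against the above-median versus at-or-below-median split; with two groups the F statistic is simply the square of the corresponding t statistic. The use of scipy, and the variables "items" and "total" from the earlier sketches, are assumptions for illustration.

    # Sketch of Table 4: F test for each item against the median split on the total score.
    from scipy import stats

    total = items.sum(axis=1)
    high = total > total.median()                # above 27 vs. at or below 27

    for q in items.columns:
        f, p = stats.f_oneway(items.loc[high, q], items.loc[~high, q])
        if f >= 4:                               # the F >= 4 rule of thumb described above
            print(q, round(f, 2), round(p, 3))   # e.g. T13: F = 13.44, p = .001
    # T21 is answered correctly by everyone, so its F cannot be computed (footnote a in Table 4).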

One might reasonably doubt that a forty item multiple choice exam could effectively measure overall sociological knowledge. Forty items are not very many, to be sure, but discriminant analysis suggests that even a much shorter exam could be highly informative. It shows that only seven items, if properly chosen, could discriminate with 95% accuracy between students who score above the median and those who do not. For example, an exam based exclusively upon questions 9, 10, 11, 13, 18, 19, and 37 would be remarkably instructive.
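
As a rough check of this seven-item claim, one could fit a linear discriminant function on just those items and examine in-sample classification accuracy. The sketch below uses scikit-learn's LinearDiscriminantAnalysis as an assumed stand-in for the stepwise discriminant procedure reported above; it is illustrative only and reuses "items" and "high" from the earlier sketches.

    # Sketch only: classify above-median vs. at-or-below-median scorers from seven items.
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    subset = ["T09", "T10", "T11", "T13", "T18", "T19", "T37"]   # the seven items named above
    lda = LinearDiscriminantAnalysis().fit(items[subset], high)
    print(lda.score(items[subset], high))                        # in-sample accuracy; the report cites about 95%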

6. Domains of Undergraduate Sociological Knowledge

The Department of Sociology has devoted considerable time and energy to defining its areas of specialization. These areas are relevant mainly for graduate training and research. Indeed, one of the three principal areas of specialization, environmental sociology, is not even represented on the outcome evaluation exam. Nevertheless the outcome exam can identify coherent domains of sociological knowledge. It can do so by showing patterns of clustering and association among the eight sociological fields represented on the outcome exam. In other words, two fields would be placed in the same group if the ways in which senior sociology majors answered questions about those two fields resembled each other.

Two statistical methods are used to identify coherent domains of undergraduate sociological knowledge: orthogonal factor analysis (with varimax rotation) and hierarchical cluster analysis (using a Euclidean distance measure). The results of these methods are meaningful even if one is not familiar with the analytical technology. The analysis here is entirely based upon the eight variables summing the number of correct answers on questions about the eight fields of sociology.

First consider the implications of factor analysis. Principal components analysis identifies four significant factors for the eight variables considered. These results are rotated using the varimax procedure to obtain dimensions that have more substantive meaning. The rotated factor components are presented in Table 5. Pay special attention to the largest numbers in each of the four columns. These indicate suggested groupings of the eight sociological fields. Factor one suggests a grouping of institutions, social psychology, and crime and social control. Factor two suggests a grouping of population and social stratification, while factor three suggests that theory and gender might constitute a group or domain of sociological knowledge. Factor four suggests that methodology constitutes its own sociological domain. Factor analysis, it must be said, is not a robust procedure for small sample sizes, and thus the groupings listed above are merely possibilities.
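
The rotated solution in Table 5 can be sketched as principal-component loadings extracted from the correlation matrix, followed by a textbook varimax rotation implemented directly in numpy. This is an illustrative reimplementation under assumptions (the statistical package the Committee used is not named here), and the signs and ordering of the rotated factors may differ from those printed in Table 5.

    # Sketch: varimax-rotated principal-component loadings for the eight field scores.
    import numpy as np

    def varimax(loadings, gamma=1.0, iters=100, tol=1e-6):
        """Orthogonally rotate a loading matrix to maximize the varimax criterion."""
        p, k = loadings.shape
        rotation = np.eye(k)
        crit = 0.0
        for _ in range(iters):
            rotated = loadings @ rotation
            u, s, vt = np.linalg.svd(
                loadings.T @ (rotated ** 3
                              - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
            rotation = u @ vt
            if s.sum() < crit * (1 + tol):
                break
            crit = s.sum()
        return loadings @ rotation

    corr = np.corrcoef(field_scores.to_numpy(), rowvar=False)   # "field_scores" as in the earlier sketch
    eigvals, eigvecs = np.linalg.eigh(corr)
    keep = np.argsort(eigvals)[::-1][:4]                        # the four components with eigenvalue above 1
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])        # unrotated loadings
    print(np.round(varimax(loadings), 3))                       # compare with Table 5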

The method of grouping or agglomeration used in cluster analysis differs sharply from that undergirding factor analysis. Thus it is helpful to compare the results of these two procedures. Table 6 presents the results of cluster analysis. The clustering algorithm starts with eight separate variables and groups them in stages using a Euclidean distance clustering criterion. Both parts of Table 6 present essentially the same information in slightly different form. In the first stage the institutions variable is combined with the social psychology variable. In the second stage theory is combined with the above two, forming a three variable cluster of institutions, social psychology, and theory. In the third stage population is grouped with social stratification. In the fourth stage crime and social control is aggregated with institutions, social psychology, and theory. In the fifth stage these two clusters merge, forming an institutions, social psychology, theory, crime and social control, population, and social stratification cluster. In the sixth stage methodology joins the group. And in the last stage gender is added, making a complete eight variable cluster.
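
A sketch of this agglomerative procedure is given below, clustering the eight field-score variables with average linkage between groups and a Euclidean distance measure, as stated above. The use of scipy and the "field_scores" table from the earlier sketch are assumptions for illustration.

    # Sketch: hierarchical clustering of the eight field-score variables (cf. Table 6).
    from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
    from scipy.spatial.distance import pdist

    X = field_scores.to_numpy().T                      # 8 variables x 39 students
    Z = linkage(pdist(X, metric="euclidean"), method="average")

    print(fcluster(Z, t=4, criterion="maxclust"))      # four-cluster memberships, as in Table 6
    dendrogram(Z, labels=list(field_scores.columns))   # drawing the dendrogram requires matplotlib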

The dendrogram, which indicates the relative distance between these seven clustering points, suggests that a four cluster solution might adequately describe the domains of undergraduate sociology knowledge. According to this solution four such domains exist: (1) institutions, social psychology, theory, and crime and social control; (2) population and social stratification; (3) methodology; and (4) gender. The separation of gender knowledge may well reflect deficiencies in measurement. If theory were extracted from the four variable cluster and combined with gender, this solution would be the same as that suggested by the factor analysis.

It seems reasonably clear that methodology constitutes a distinct domain of undergraduate sociological knowledge; that knowledge in the fields of population and social stratification hangs together; and that institutions, social psychology, and crime and social control form a definite knowledge cluster. On the other hand, the relation of knowledge about gender and theoretical knowledge to the other six fields of sociology remains uncertain.

7. Conclusions

Judging by the results of this examination, most senior sociology majors at the University of Colorado know a good bit about sociology, which we must assume they learned from our undergraduate sociology program. 56% of our sample answered over two-thirds of the questions correctly. Methodology is apparently the most difficult field for undergraduate sociology majors; if the methodology questions were removed from the examination, the average respondent would have answered 70% of the remaining questions correctly. On the other hand, the questions about gender appear to be insufficiently challenging, and this surely inflated the scores obtained by the students who took this exam. On the basis of these results we deem the sociological knowledge of senior majors to be adequate, but not truly impressive.

We hope to use a revised version of this examination in the next few years to assess the outcomes of undergraduate sociology education at CU. Item analysis helps identify those questions that need to be changed as well as questions that should definitely be retained in future editions of this exam. The section on gender requires fairly extensive revision to make it comparable in difficulty to the sections of the exam dealing with the other fields of sociology. The section on methodology may need alteration to render it less difficult.

The Undergraduate Committee specified eight distinct fields of undergraduate sociological knowledge and constructed the outcome evaluation examination on that basis. However, analysis of the results suggests that the actual sociological comprehension of CU senior sociology majors may be structured into three or possibly four domains. These are the apparent domains: (1) institutions, social psychology, and crime and social control; (2) population and social stratification; and (3) methodology. The relationship of theory and gender to these three domains is somewhat ambiguous, but may be clarified when a better measure of gender knowledge becomes available. The Department of Sociology may wish to consider these empirically defined domains of knowledge when making future changes in the undergraduate sociology curriculum.

Table 1: Information about total examination score

Total Score on Exam
N Valid 39
Mean 26.8462
Median 27.0000
Mode 27.00 (a)
Std. Deviation 3.51342
Range 14.00
Minimum 20.00
Maximum 34.00
Percentiles 25 24.0000
50 27.0000
75 29.0000

(a) Multiple modes exist. The smallest value is shown.

Histogram of exam scores

Table 2: Information about eight fields of undergraduate study

Descriptive Statistics

  THEORY INSTITUT POPULATE STRAT GENDER SOCPSYCH CRIME METHODS
Mean 3.6923 3.1538 3.1795 3.1538 4.5385 3.5385 3.2308 2.3590
Median 4.0000 3.0000 3.0000 3.0000 5.0000 4.0000 3.0000 2.0000
Mode 4.00 3.00 3.00 3.00 5.00 4.00 4.00 2.00
Std. Deviation .76619 1.01407 1.12090 1.13644 .50504 1.04746 1.11122 .87320
Range 3.00 4.00 4.00 4.00 1.00 4.00 5.00 3.00
Minimum 2.00 1.00 1.00 1.00 4.00 1.00 .00 1.00
Maximum 5.00 5.00 5.00 5.00 5.00 5.00 5.00 4.00

(a) Multiple modes exist. The smallest value is shown.

Correlation Matrix

  THEORY INSTITUT POPULATE STRAT GENDER SOCPSYCH CRIME METHODS
THEORY 1.000 .333 .066 .146 .031 .081 -.038 .012
INSTITUT .333 1.000 .068 .070 .091 .415 .295 .025
POPULATE .066 .068 1.000 .370 -.129 .184 .008 .121
STRAT .146 .070 .370 1.000 -.056 .039 .180 -.190
GENDER .031 .091 -.129 -.056 1.000 .034 -.274 .147
SOCPSYCH .081 .415 .184 .039 .034 1.000 .410 -.131
CRIME -.038 .295 .008 .180 -.274 .410 1.000 -.169
METHODS .012 .025 .121 -.190 .147 -.131 -.169 1.000

Principal Component Factor Analysis
Total Variance Explained

  Initial Eigenvalues Extraction Sums of Squared Loadings
Component Total % of Variance Cumulative % Total % of Variance Cumulative %
1 1.975 24.691 24.691 1.975 24.691 24.691
2 1.373 17.167 41.858 1.373 17.167 41.858
3 1.280 16.004 57.862 1.280 16.004 57.862
4 1.010 12.629 70.491 1.010 12.629 70.491
5 .900 11.244 81.735      
6 .659 8.238 89.973      
7 .469 5.867 95.839      
8 .333 4.161 100.000      

Table 3: Evaluating the outcome examination questions: means and sums
(i.e. total number of correct answers)

Variable Sum Mean
T01 10.00 .2564
T02 37.00 .9487
T03 34.00 .8718
T04 37.00 .9487
T05 26.00 .6667
T06 31.00 .7949
T07 33.00 .8462
T08 26.00 .6667
T09 8.00 .2051
T10 25.00 .6410
T11 29.00 .7436
T12 32.00 .8205
T13 29.00 .7436
T14 19.00 .4872
T15 15.00 .3846
T16 36.00 .9231
T17 27.00 .6923
T18 25.00 .6410
T19 19.00 .4872
T20 16.00 .4103
T21 39.00 1.0000
T22 38.00 .9744
T23 38.00 .9744
T24 38.00 .9744
T25 24.00 .6154
T26 32.00 .8205
T27 11.00 .2821
T28 32.00 .8205
T29 29.00 .7436
T30 34.00 .8718
T31 34.00 .8718
T32 27.00 .6923
T33 20.00 .5128
T34 19.00 .4872
T35 26.00 .6667
T36 29.00 .7436
T37 32.00 .8205
T38 8.00 .2051
T39 15.00 .3846
T40 8.00 .2051

Table 4: Evaluating the outcome examination questions: information provided

Tests of Equality of Group Means

Variable F statistic (1,37) Significance
T01 4.012 .053
T02 .033 .856
T03 .029 .867
T04 1.613 .212
T05 .199 .658
T06 .145 .706
T07 2.095 .156
T08 1.279 .265
T09 4.274 .046
T10 4.660 .037
T11 .984 .328
T12 .758 .390
T13 13.440 .001
T14 1.207 .279
T15 2.719 .108
T16 2.547 .119
T17 .025 .876
T18 2.004 .165
T19 6.424 .016
T20 4.165 .048
T21 (a)
T22 1.304 .261
T23 .768 .386
T24 1.304 .261
T25 1.016 .320
T26 3.060 .089
T27 .723 .400
T28 7.526 .009
T29 .067 .797
T30 1.274 .266
T31 4.744 .036
T32 .717 .403
T33 .663 .421
T34 .032 .860
T35 1.279 .265
T36 .984 .328
T37 3.060 .089
T38 1.392 .246
T39 .089 .767
T40 1.443 .237

(a) Cannot be computed because this variable is a constant.

Table 5: Identifying domains of undergraduate sociological knowledge: rotated factor analysis

Rotated Factor Component Matrix

  1 2 3 4
THEORY .090 .214 .775 -.140
INSTITUT .695 .018 .487 .067
POPULATE .111 .838 -.076 .322
STRAT -.000 .767 .150 -.334
GENDER -.113 -.287 .555 .313
SOCPSYCH .821 .050 .053 .033
CRIME .741 .077 -.297 -.285
METHODS -.060 .018 .019 .885

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.

Table 6: Identifying domains of undergraduate sociological knowledge: cluster analysis

Cluster Membership

Variable 7 Clusters 6 Clusters 5 Clusters 4 Clusters 3 Clusters 2 Clusters
THEORY 1 1 1 1 1 1
INSTITUT 2 1 1 1 1 1
POPULATE 3 2 2 2 1 1
STRAT 4 3 2 2 1 1
GENDER 5 4 3 3 2 2
SOCPSYCH 2 1 1 1 1 1
CRIME 6 5 4 1 1 1
METHODS 7 6 5 4 3 1

Dendrogram using Average Linkage (Between Groups)
