Assessing the size or importance of a difference

In all kinds of analysis, we compare two or more results.
We usually want to know: is there a difference? And if so, is it a "big" difference, or important, or worth noting? The answer depends on three different tests for a single comparison.

Test 1: Statistical significance. Is the difference reliable, or could the difference we're seeing be due to chance? We use standard statistical tests for this. Reliability is strongly affected by the number of cases: for the same difference in averages, the more cases, the more reliable the result and the more likely the difference is "statistically significant."
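To illustrate the point about the number of cases, here is a minimal sketch (the data and group sizes are made up for illustration) that computes a Welch's t statistic for the same 2-point difference in averages at two different sample sizes. The t statistic, and hence the apparent reliability, grows with the square root of the group size even though the difference itself is unchanged.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for the difference of two sample means."""
    return (mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Same 2-point difference in averages, same spread -- only n changes.
small = welch_t(72, 10, 25, 70, 10, 25)    # 25 cases per group
large = welch_t(72, 10, 400, 70, 10, 400)  # 400 cases per group
print(round(small, 2), round(large, 2))    # prints 0.71 2.83
```

With 25 cases per group the difference would not reach conventional significance thresholds; with 400 cases per group the identical difference would.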
Test 2: Effect size. Is the difference noticeable, given the distributions of responses or scores? Effect size is an alternative to standard statistical tests, and expresses the difference in terms of standard deviation units.
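One common way to express a difference in standard deviation units is Cohen's d (mean difference divided by the pooled standard deviation); a minimal sketch, with made-up scores for illustration:

```python
import math
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: the mean difference expressed in pooled-standard-deviation units."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a)**2 + (nb - 1) * stdev(b)**2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled_var)

group_a = [78, 82, 85, 88, 91]
group_b = [70, 75, 80, 83, 87]
print(round(cohens_d(group_a, group_b), 2))  # prints 0.98
```

By common rules of thumb, a d near 0.2 is a small effect, 0.5 medium, and 0.8 large, though such labels should always be read in context.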
Test 3: Importance. Is the difference important? A difference may be reliable, and noticeable, but of no importance – e.g., a height difference of 0.2 inches between two populations. There is no statistical test for importance – you have to decide, based on the context.
PBA: L:\mgt\IR\StatRulesOfThumb.doc 
Last revision 10/26/06