How we decide what to test

All student-oriented information and communication technology (ICT) systems to be launched by the University of Colorado Boulder must be tested for accessibility.

Accessibility testing can be done on campus through the Accessibility and Usability Lab (AUL). It can also be done off campus by a vendor/supplier or a third-party consultant. All third-party testing must follow the standards approved by the Chief Digital Accessibility Officer (CDAO), Dan Jones, and the ICT Accessibility Review Board (ICTARB).

When choosing between on-campus and off-campus testing, it is useful to understand how AUL prioritizes testing requests. If your service is among the high-priority services for the lab, it is likely to be tested quickly. If it is a low priority, there may be a wait of several weeks or months, or the testing may fall entirely outside the lab’s scope, in which case the service owner should engage a third-party consultant to test accessibility (if you are uncertain where to find a consultant, we can provide some direction).

Services that get a high priority in the testing queue include:

  • Services that are listed in the Department of Justice inquiry

  • Services that are labeled “critical” in the OIT service catalog

  • Services that affect the broadest user audiences (e.g., all students vs. a small group of students; academic vs. administrative or research services)

  • New service launches (no new OIT service or campuswide ICT service should be launched without prior accessibility testing)

  • Time-sensitive launches/emergencies (an emergency as determined by the Chief Digital Accessibility Officer)

How a typical test goes

Most in-depth tests at AUL have a one-month turnaround, but depending on the lab’s load and the availability of testers, testing can take longer. Timelines are set by the lab staff in collaboration with the client (the service owner) early in the process. All testing follows a standard workflow - let’s look at the three stages of what happens before, during, and after testing:

  • Before testing
    • decide if a project should be tested by the lab, based on our testing priorities
    • make sure that the project is finished or nearly finished, and that all major functionality has been developed
    • ensure that there is a stable test environment to which all of our testers have been granted access
    • establish a timeline for the project with the client
    • choose the hardware platforms for the test
    • set communication expectations with the client (frequency, depth, and preferred channels)
    • compile a list of actions that users are expected to perform with the site or app being tested (the list can be very short for simple apps, or long for complex ones)
    • write a testing script, asking the client for clarifications when necessary (here's a sample script to give you an idea of what to expect - accessible to anyone with a CU Boulder account on Google Drive)
  • During testing
    • conduct supervised standardized testing with a sighted observer present, following a strict testing script
    • have a client representative on call to address unexpected problems (access denied, data reset, login expired, etc.)
    • if needed, update the client on how the testing is going and share early impressions of the project's accessibility
    • compile the test report and group problems by severity, starting with the blocking issues that render the application inaccessible, and ending with usability problems
    • relate all accessibility problems to the sections and subsections of the WCAG 2.0 standard
  • After testing
    • deliver the final report to the client as a PDF or a Google Doc (here's a short sample report to give you an idea of what to expect - accessible to anyone with a CU Boulder account on Google Drive)
    • optionally, offer video recordings and live demos to showcase the issues. We strongly recommend live demos, especially for service owners and developers with limited accessibility knowledge
    • make recommendations on how to improve the usability of problematic elements (at the UX level rather than the coding level)
    • provide additional testing after the developer attempts to correct the reported issues
    • invite the client to fill out our feedback form to tell us how we did and how we can improve

It is also important to note that there are a few things that AUL will not be able to do:

  • interact directly with non-CU entities, vendors, or developers; all communication with external audiences is best done by the service owner
  • reveal the identity of testers
  • make decisions to launch or stop a service - we can only provide recommendations and evaluations about what we believe to be the best course of action
  • correct the application's source code to remediate the encountered issues