How Useful are Accessibility Evaluation Tools?

Posted September 1st, 2006 by Karl Dawson

Accessibility has as much to do with usability as with being purely technically correct. The site needs clear navigation, the ability to skip content areas, alternative layouts, and content written in a style its anticipated audience can easily understand. Can an expensive evaluation tool be justified, and are site-wide, post-production checks using such a tool actually required, rather than just a “feel good” factor of control over the situation?

Scope

To discuss the benefits and limitations of using an automated evaluation tool to assess the technical accessibility of a standards-compliant website.

I’ve broken this research into several areas, covered in the sections that follow.

The Usefulness of Automated Tools

Perhaps one of the quickest ways to get a feel for the accessibility of a website is to run it through an automated evaluation tool. There are many such products available, all with their strengths and weaknesses. Some are free, such as TAW 3, and some are expensive. Typically, once I’ve completed a web document I will use Chris Pederick’s Web Developer Toolbar for Firefox, selecting the Tools option and firing off the “Validate CSS,” “Validate HTML” and “Validate WAI” options. I also do this when checking submissions to Accessites against the base criteria. Any problems and I stop. If the tools report okay, then I carry on checking the integrity of the website without CSS and/or images, and go through the source code making sure, for example, that the “for” attribute on each label matches up to the “id” of the correct input.
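As a quick illustration (the field names are invented for the example), both fragments below pass an automated check, because every label has a “for” attribute that matches an input “id”; only a human will spot that the second pairs them up wrongly:

    <!-- Correct: each label's "for" matches the "id" of its own input. -->
    <label for="first-name">First name</label>
    <input type="text" id="first-name" name="first-name">
    <label for="surname">Surname</label>
    <input type="text" id="surname" name="surname">

    <!-- Valid markup, wrong associations: the labels are swapped, so a
         screen reader announces "Surname" for the first-name field. -->
    <label for="surname">First name</label>
    <input type="text" id="first-name" name="first-name">
    <label for="first-name">Surname</label>
    <input type="text" id="surname" name="surname">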

Some tools evaluate a single page (such as the “Validate WAI” option above) whilst others, like TAW 3, can crawl through an entire site. I really like TAW 3 and recommend it to content authors. The test configuration can be saved, so I can, for example, set it up during user training, and all the user then needs to do is press a button to start the assessment. Where this product wins for me, though, is that it helps to educate the user by highlighting which checkpoints require manual checking. Due diligence is essential.

With all that said, though, these tools can only test in ones and zeroes, black and white, yes or no. Many of the checkpoints need to be reviewed in the context of their use, and that can only be done by a trained human.

Limitations of Automated Tools

There are 65 checkpoints in WCAG 1.0 (16 Priority 1, 30 Priority 2 and 19 Priority 3), grouped under 14 guidelines.
Automated tools can wholly test the following checkpoints:

Priority 2
3.2 - Create documents that validate to published formal grammars.
11.2 - Avoid deprecated features of W3C technologies.
Priority 3
4.3 - Identify the primary natural language of a document.

A number of the online parsers tend to stop checking for checkpoint 11.2 as soon as they hit the first failure. So if you have, for example, “align="right"” on a div high up in the markup and a border attribute on an img element lower down, only the align will be flagged as a failure. The document will need a second pass through the parser, once the first issue has been corrected, before the second failure is identified. If you’re using a Transitional DOCTYPE it is possible to pass validation yet still fail 11.2 by using deprecated markup - yet another reason to use a Strict DOCTYPE.
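To illustrate with a contrived fragment (the file names are invented), the first block below validates as HTML 4.01 Transitional yet fails checkpoint 11.2 twice; the second moves the presentation into CSS and passes under Strict:

    <!-- Validates as HTML 4.01 Transitional, but fails checkpoint 11.2:
         "align" and "border" are both deprecated presentational attributes. -->
    <div align="right">
        <img src="photo.jpg" alt="Photograph of the office" border="0">
    </div>

    <!-- The Strict-friendly equivalent: presentation moved into CSS. -->
    <div style="text-align: right;">
        <img src="photo.jpg" alt="Photograph of the office">
    </div>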

Automated tools can partially test the following checkpoints:

Priority 1
1.1 - Provide a text equivalent for every non-text element (e.g., via “alt,” “longdesc,” or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, ascii art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video. A tool can report a missing “alt” attribute, but not whether its content makes sense (see the example after this list).
6.3 - Ensure that pages are usable when scripts, applets, or other programmatic objects are turned off or not supported. If this is not possible, provide equivalent information on an alternative accessible page.
9.1 - Provide client-side image maps instead of server-side image maps except where the regions cannot be defined with an available geometric shape.
Priority 2
3.4 - Use relative rather than absolute units in markup language attribute values and style sheet property values.
6.4 - For scripts and applets, ensure that event handlers are input device-independent.
7.4 - Until user agents provide the ability to stop the refresh, do not create periodically auto-refreshing pages.
7.5 - Until user agents provide the ability to stop auto-redirect, do not use markup to redirect pages automatically. Instead, configure the server to perform redirects.
9.3 - For scripts, specify logical event handlers rather than device-dependent event handlers.
12.4 - Associate labels explicitly with their controls. Sure, a tool can detect the presence of the for and id attributes on the label and input elements, but it takes a human to check that each label is associated with the correct control.
13.1 - Clearly identify the target of each link.
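Checkpoint 1.1 shows why these tests can only ever be partial. In the contrived fragment below (file names invented), every img carries an alt attribute, so an automated tool passes all three; only a human can tell that just the first is of any use to a screen-reader user:

    <!-- Useful: describes what the image is for. -->
    <img src="logo.gif" alt="Acme Widgets home page">

    <!-- Passes an automated check, but tells the listener nothing. -->
    <img src="logo.gif" alt="logo.gif">

    <!-- Empty alt also passes, and is correct for purely decorative
         images - but wrong if the image carries meaning. -->
    <img src="divider.gif" alt="">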

Of these programmatic tests, the following checkpoints fall into a web content author’s space: 1.1, 3.2 and 11.2. The remaining checkpoints apply to web developers only and fall into three main categories:

  1. The templates.
  2. The cascading style sheets.
  3. The functionality and interaction of the website (JavaScript, PHP, ASP, image maps, etc.).

Quality Control at the Bench

Failures of checkpoints 1.1 and 3.2 can be prevented with user training and a properly configured, standards-compliant text editor. Additionally, the text editor can be configured to disallow deprecated elements (font, u, marquee, etc.) and so satisfy checkpoint 11.2. The final check before publishing a page to the live server then need be no more than a quick trip to the W3C markup validator, neatly sidestepping all but one of the checkpoints an accessibility tool can wholly test for… someone remind me why our non-specialist managers insist on buying these tools!

For checkpoint 13.1, automated tools can check whether the same link text is repeated for links to different pages (e.g. “click here”), or whether the same page is linked to with different text. Again, compliance with this checkpoint can be achieved through user training.
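For instance (the link targets are invented), a tool can flag the repeated “click here” below, but only a human can judge whether the rewritten text genuinely identifies each target:

    <!-- A tool can flag this: identical link text, different targets. -->
    <p>For prices <a href="/prices.html">click here</a>;
       for support <a href="/support.html">click here</a>.</p>

    <!-- Better: each link identifies its own target. -->
    <p>See our <a href="/prices.html">price list</a> or visit the
       <a href="/support.html">support pages</a>.</p>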

Before a page is promoted from pre-production, the reviewing editor must ensure that the markup is valid using the free W3C online validator. Additionally, the reviewing editor should check for appropriate structure (semantic HTML and Priority 2 checkpoints 3.5, 3.6 and 3.7: header elements, lists and quotations marked up according to specification).
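As a sketch of what that structural check is looking for (the content is invented), both fragments below validate, but only the second conveys the document’s structure to assistive technology:

    <!-- Valid, but the structure is faked with presentation. -->
    <p><b>Delivery options</b></p>
    <p>* Standard<br>* Next day</p>

    <!-- Structural: a real heading and a real list
         (checkpoints 3.5 and 3.6). -->
    <h2>Delivery options</h2>
    <ul>
        <li>Standard</li>
        <li>Next day</li>
    </ul>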

Templates and CSS files should be validated in pre-production after each iteration. This is easily done using the free W3C online validators. Once the website is live, testing against checkpoint 3.2 is required after every change to a template or CSS file.

The processes above need to be backed up by user training and an enforceable accessibility policy that lays out requirements and responsibilities. If you believe that an automated assessment tool will bring peace of mind, remember its limitations and plan for them. Personally, I don’t feel the need to pay for one.
