Meeting the diverse needs of your site’s visitors is likely to mean a great deal more than ticking off individual accessibility checkpoints.
You cannot just rely on an automated accessibility parser. As Grant Broome explained, whilst automated testing is useful, it cannot replace a manual review or direct user testing. At Accessites, every site that meets our entry criteria is subjected to a manual review by a panel of Team Access members.
At no point do we rely on automated testing alone. Why?
Because we know, only too well, that automated parsers are fallible and that the conformance levels reported by these tools are rarely accurate. So what do we look for? And what points, if any, signal that a site is likely to fail our minimum levels for a showcase award?
First of all, we have to say that all the sites that make it into our review queue not only use valid code but look pretty good. Most have plenty of information on a front page that has been well laid out, with balanced colour themes that bring all of the page components together into an easily navigable whole. But if there is one thing that is likely to start the alarm bells ringing immediately, it’s the inclusion of a claim for WCAG 1.0 Triple A conformance. We’re all jobbing designers. We know just how difficult a solid Double A conformance level is to achieve - let alone a purely theoretical Triple A! So as soon as we see such “impressive” claims, you can bet that we’ll start actively looking a lot deeper.
Automated parsers are dumb, binary tools. Most of the time, they can only determine whether you have supplied an alt attribute and whether that attribute is populated with text characters. What they cannot determine is whether that text, in any way, shape or form, acts as a true equivalent for the graphic it is associated with. For example, the corporate logo on the Acme web site is accompanied by an alt attribute that simply says “Logo”. Bzzt! No prize for that one unless the graphic really does include just the single word “Logo”.
If a graphic contains words, the alt attribute must also contain those words. The Guild of Accessible Web Designers (GAWDS) logo is a good example of accurate alt text in this situation. Acme’s logo graphic may be given a green light by an automated parser but it wouldn’t get the Accessites’ OK. Conversely, if your alt attribute merely replicates text that is already on the page — such as link text or a heading — we’ll class it as redundant, unacceptable, noise. So choose your alt text wisely and not just to satisfy an automated tool.
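A minimal sketch of the difference (the file names and company name are, of course, hypothetical):

```html
<!-- Passes an automated parser, but tells a screen reader user nothing useful -->
<img src="acme-logo.png" alt="Logo">

<!-- Conveys the words actually contained in the graphic -->
<img src="acme-logo.png" alt="Acme Corporation">

<!-- Purely decorative graphic: an empty alt keeps it out of the way -->
<img src="divider.png" alt="">
```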
It is common these days to add graphics using Cascading Style Sheets (CSS). Obviously, this circumvents the provision of an alt attribute but it does not sidestep the developer’s responsibility to provide a text equivalent for graphics that contain important information. So if you’re going to add your corporate strapline using an image replacement technique, make sure that the same information is available without CSS because we will check.
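One common image replacement technique, sketched here with invented file and id names. The words stay in the markup, so they survive with stylesheets switched off:

```html
<!-- The strapline stays in the document; CSS swaps it for the graphic -->
<h2 id="strapline">Accessibility for everyone</h2>

<style>
  #strapline {
    background: url(strapline.png) no-repeat;
    height: 40px;
    /* Move the text off-screen rather than using display: none,
       which would hide it from many screen readers as well */
    text-indent: -9999px;
    overflow: hidden;
  }
</style>
```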
There’s only one thing to remember with navigation menus — they have to be usable. By random site visitors — not someone who has been working on the site for 3 months and knows it inside out. If you’re going to make use of icons to support reading, ensure that the graphics are large enough to see and conceptual enough to be easily understood without having to resort to explanations. Again, you may not be the best person to make this judgement call. You probably know the site too well already. Consider investing in some informal hallway testing before settling on your final icons. Keep in mind that a Team Access member who is frazzled by confusing navigation and mystery icons is unlikely to be in the frame of mind to award top marks.
All Noise And No Signal
We’ve already mentioned unnecessary redundancy in terms of alt text but the single biggest offender in the redundancy stakes is usually the humble title attribute. All too often, we see title attributes added to non-graphical links that merely echo the link text.
Why? Who are these title attributes supposed to help?
If your links need explanations, simply repeating them isn’t going to help anyone. You’d be far better off re-drafting your link text so that it doesn’t require explanation but can stand alone and still be understood. Automated parsers aren’t going to help you here. Nor are there any checkpoints that relate to where, or when, to use the title attribute. You have to decide. Ideally by considering each and every case on its own merit. We will.
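A hypothetical illustration (the URLs are placeholders):

```html
<!-- Redundant: the title merely echoes the link text -->
<a href="/contact" title="Contact us">Contact us</a>

<!-- Better: the link text stands alone, no title needed -->
<a href="/contact">Contact us</a>

<!-- A title can earn its keep when it adds genuinely new information -->
<a href="/report.pdf" title="Annual report, PDF, 2MB">Annual report</a>
```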
Colour, Colour, Everywhere
Here’s another thing with automated tools. They can’t see. Yes — you can use colour contrast tools that warn if the contrast between background and text is too low (and even the odd tool that warns if it’s too high) but, by and large, automated tools are, quite literally, colour blind. They cannot tell the difference between a tasteful, easily readable, page and one that has links in more colours than Joseph’s coat and has you reaching for the sunglasses. Users, on the other hand, may be colour sensitive to the point where some colour combinations are unusable. Team Access, for example, don’t work effectively whilst wearing sunglasses. So choose your colour combinations with care and do not assume that everyone will be able to distinguish colours in the way that you do. Examples might include supporting coloured links with underlining and reinforcing mandatory form fields with additional notation (“*” or “required”).
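Both ideas in one small sketch, with invented colours and field names. Note that neither cue depends on colour alone:

```html
<style>
  /* Links are distinguished by underline as well as colour */
  a { color: #0000cc; text-decoration: underline; }
  .required { color: #cc0000; }
</style>

<!-- The asterisk carries the "mandatory" meaning even if the red does not -->
<label for="name">Name <span class="required">*</span></label>
<input type="text" id="name" name="name">
```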
You’ve just finished the Contact Form. All the form labels are in place. You’ve ensured that all error checking takes place server-side and all of your red mandatory input labels are additionally highlighted using “*”. To round off a perfect day, your form just scored a Triple A rating on one of the automated accessibility parsers. What could possibly go wrong?
You may think that this is stating the obvious but did you make sure that the message indicating that “fields marked with * are mandatory” came before the form rather than after it? Logically, users do need that information before they try to complete the form but you might be surprised how many times we’ve seen this important text placed after the form it references. Then there’s all the little explanatory messages you added to support users. Things like the format for entering dates or whether their email address will be displayed. Are these messages associated with their form controls inside labels or just marked up as plain text? As you might now expect, none of the automated parsers will highlight this kind of plain text within a form. Yet any text within a form that is not associated with a form control may not be rendered by screen reading software in Forms Mode (e.g. JAWS). So many of your screen reading users may not hear these messages and, consequently, have problems completing your form. Others may need to make at least two passes (once in Forms Mode and a second time using their standard mode) to get all of the available data within the form area.
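By way of a sketch (the field names are invented), here is a form that puts the mandatory-fields message first and tucks each hint inside its label, so that Forms Mode users will hear it:

```html
<!-- The instruction comes before the form, where users need it -->
<p>Fields marked with * are mandatory.</p>

<form action="/contact" method="post">
  <!-- The format hint sits inside the label, so Forms Mode reads it -->
  <label for="dob">Date of birth (DD/MM/YYYY) *</label>
  <input type="text" id="dob" name="dob">

  <label for="email">Email address (will not be displayed) *</label>
  <input type="text" id="email" name="email">

  <input type="submit" value="Send">
</form>
```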
And while you’re there, what happens if you submit a completely empty form? Is a suitably informative error message generated? Because submitting an empty form is exactly what Team Access will do.
One warning that you will regularly see when using an automated parser relates to providing skip links. But it’s just a warning (not a failure) and you’ve already provided a hidden skip link, so you don’t need to worry. Right? Wrong! Simply providing a skip link isn’t enough. It has to be usable too. Which means it has to be visible for at least part of the time. Not all keyboard navigators have poor eyesight. Some might use Voice Recognition (VR) software. Or action buttons which function like a one-key keyboard. Or be using a mobile device. All of these groups can rely heavily on effective skip links.
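One common approach, sketched here with hypothetical class and id names, is to park the skip link off-screen and pull it back into view when it receives keyboard focus:

```html
<a href="#content" class="skip">Skip to main content</a>

<style>
  /* Hidden off-screen by default... */
  .skip { position: absolute; left: -9999px; }
  /* ...but pulled back into view when a keyboard user tabs onto it */
  .skip:focus { position: static; }
</style>

<!-- ...header and navigation... -->
<div id="content">
  <!-- main page content -->
</div>
```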
Team Access will try to navigate your site using keyboard alone. So if we can’t actually see your skip link(s) or you’ve used iconic links that don’t mean a thing to us, we will notice. If it’s any consolation, we probably won’t test out all of your accesskeys. Primarily because we’ll invariably have to find out exactly what keys to use first. Which means that we’ll have to navigate to the page that lists your accesskeys. By keyboard. Without using accesskeys. But when we do get there, we will make a note of just how usable they appear to be. Long lists of accesskeys may seem like a good idea but they don’t cut much ice with us and increase the potential for conflicts with keyboard shortcuts for other software.
Tab ordering is another common warning from automated accessibility parsers but none of the parsers can indicate when your efforts to enhance accessibility might have, inadvertently, created a whole new problem. The word “ordering” should be a clue. It implies “logic”, “organisation” and “sequence”. All of those words that make us feel that a site is less chaotic and behaves a bit more like we’d expect it to. And users do have strong expectations here. In the West, we read from left-to-right so users intuitively expect their progression, through the links on a page, to follow the same logical sequence. Elsewhere, reading may be from right-to-left or top-to-bottom and sites using these languages should, ideally, behave accordingly. So a sudden prioritisation of a link or control halfway down a page takes everyone by surprise. As the developer, you may have thought that you had good reason to alter the ordering of links on a given page. From the user’s perspective (and this includes Team Access), it comes across as a sudden whim which, far from assisting users, just confuses the heck out of them.
And never suddenly reverse tab ordering without a very good reason. If you feel that the menu sub-item on the far right should have the highest priority - reverse the menu not the tab order! Confusing the team that is actively reviewing your site is the last thing you should want to do.
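A hypothetical before-and-after. Rather than forcing priority with tabindex, reorder the markup and leave the natural tab order intact:

```html
<!-- Confusing: tabindex jumps the last item to the front of the tab order -->
<ul>
  <li><a href="/alpha">Alpha</a></li>
  <li><a href="/beta">Beta</a></li>
  <li><a href="/gamma" tabindex="1">Gamma</a></li>
</ul>

<!-- Better: put the priority item first in the markup and let the
     tab order follow the reading order naturally -->
<ul>
  <li><a href="/gamma">Gamma</a></li>
  <li><a href="/alpha">Alpha</a></li>
  <li><a href="/beta">Beta</a></li>
</ul>
```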
The one automated tool that you can usually depend upon is a markup validator. It will locate every markup error and every mis-nested element with pinpoint accuracy. So, obviously you don’t have to go beyond validating your markup, do you? By now, I think you know what the answer will be. Good though they are, markup validators are still just pieces of dumb software. They would not know a quotation from a hole in the ground — let alone be able to determine if it was marked up correctly using blockquote. Nor can they distinguish a list from any other plain text. Or identify the inappropriate use of pretty much any markup. Semantic markup requires that the text first be classified correctly and, currently, the only “software” that can do this resides in the human brain. So, even if your markup is valid, you still need to check it yourself to ensure that it is applied semantically. And if you find that task daunting, just remember that Team Access tease apart the markup on every site we review. So it’s a task well worth doing on any site you intend to submit to us.
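A small, invented illustration of the difference. Both versions validate; only one is semantic:

```html
<!-- Valid, but semantically meaningless: a quotation and a list
     marked up as ordinary paragraphs -->
<p>"To be, or not to be..."</p>
<p>- Apples<br>- Oranges<br>- Pears</p>

<!-- Valid and semantic: a validator can't tell the difference, a human can -->
<blockquote><p>To be, or not to be...</p></blockquote>
<ul>
  <li>Apples</li>
  <li>Oranges</li>
  <li>Pears</li>
</ul>
```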
And Last But Not Least
One final Team Access touch that no automated parser can provide. We routinely test sites for 404 errors. We want to see exactly what will happen when a user follows a broken link from another site. Can we find our way back to your Home page easily? Is there a link to your site map? If the broken link was on your site, could we contact you about it? We hope so.
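A sketch of the kind of 404 page we hope to land on (the URLs are, naturally, placeholders):

```html
<!-- A custom 404 page that leaves the visitor somewhere to go -->
<h1>Sorry - we can't find that page</h1>
<ul>
  <li><a href="/">Return to the home page</a></li>
  <li><a href="/sitemap">Browse the site map</a></li>
  <li><a href="/contact">Tell us about the broken link</a></li>
</ul>
```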
But then, if you have managed to read this far and are prepared to follow through in your own work, we’d like to think that, some day — in the not-too-distant future — we’ll be contacting you about your next Accessites award.