WAI-ARIA

= WAI-ARIA findings =

I've done a lot of reading about WAI-ARIA recently, a certain amount of work on it (see bug 1016), and some further experimentation, and here are some conclusions, which I am putting in the ticket so they won't get lost. These are gross generalizations, but since a lot of the documentation is overly specific, I'm using these gross generalizations as a way to make life much easier for those of us who are implementing these features.

landmarks, live regions, form controls
The two main benefits of WAI-ARIA come from landmarks and live regions. Additionally, there are some interesting form controls.

Landmarks divide the page into main, navigation, form, checkbox, etc. regions. In other words, they can be as coarse as "this is the banner, this is the footer, and this is the content" or as fine-grained as "this is a checkbox, this is the text accompanying the checkbox, this is the button that submits the form associated with that checkbox". Right now landmarks have limited utility. I know that JAWS 10 can give you a list of landmarks with the semicolon key and allow you to navigate directly to a landmark, although because they are so new, right now some JAWS users are primarily confused by the extra verbosity. The Firefox mouseless browsing extension uses landmarks to enable direct access to drop-down lists, which is a great feature. I'm not sure whether landmarks are exposed in any other user agents (browsers, screen readers, etc.).
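To make the coarse end of that spectrum concrete, here is a minimal sketch of "banner/navigation/content/footer" landmark markup; the role values come from the WAI-ARIA spec, but the page structure and link targets are made up for illustration:

```html
<!-- Coarse landmarks: a screen reader that supports them (e.g. JAWS 10)
     can list these regions and jump directly to any of them. -->
<div role="banner">Site title and logo</div>
<div role="navigation">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/recent">Recent entries</a></li>
  </ul>
</div>
<div role="main">Page content goes here</div>
<div role="contentinfo">Copyright and footer links</div>
```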

Live regions allow the user agent to announce when the page has changed dynamically. Unlike landmarks, I can't think of a utility for these that is not specifically for screen readers, although I'm sure there is one somewhere. In the unsubmitted patch to bug 1016, I've coded live regions for the tooltips. When a user browses to one of the text entry fields on the form, a yellow boxed tooltip appears to the right of the field explaining what it is and how to use it. WAI-ARIA live regions allow screen readers to know that that change might happen and that they should announce it when it does. Now, when a screen reader user navigates to one of those text entry fields, the screen reader announces the tooltip. These live regions can be set to either "polite" or "assertive". The screen reader, based on its own programming and perhaps user-based configuration, will decide when to announce changes for polite or assertive regions. For example, an error message (encoded as a WAI-ARIA "alert") would be assertive, and might get announced immediately. Meanwhile, a tooltip could be "polite", and might not get announced until the next natural silence in the screen reader's output.
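A sketch of that tooltip pattern; the ids and wording here are illustrative, not the actual bug 1016 markup:

```html
<!-- A text field whose tooltip, filled in by script when the field
     gets focus, is announced by the screen reader when it appears. -->
<label for="username">Username:</label>
<input type="text" id="username" aria-describedby="username-tip">

<!-- aria-live="polite": announce the new text at the next natural
     pause rather than interrupting the current speech -->
<div id="username-tip" aria-live="polite"></div>

<!-- By contrast, role="alert" is an assertive live region by
     definition, suitable for error messages -->
<div role="alert">Error: that username is already taken.</div>
```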

Form controls I know least about, because I've read about them but have only coded a few of them (see bug 1016). Not only can all elements of a form be encoded with landmarks, but they can be marked as "required", and their components, such as list elements, can be marked up as well. I'm pretty sure from my testing that JAWS 10 does react in some ways to these markings, but I haven't tested them consistently.
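For instance, the "required" marking and a marked-up list might look like this; the field names and options are illustrative:

```html
<!-- aria-required tells the screen reader this field must be filled in -->
<label for="email">Email address:</label>
<input type="text" id="email" aria-required="true">

<!-- An ordinary list given ARIA widget roles so its structure and
     selection state are exposed to assistive technology -->
<ul role="listbox">
  <li role="option">Option one</li>
  <li role="option" aria-selected="true">Option two</li>
</ul>
```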

live regions first
At first I was focusing my efforts on landmarks, because they are somewhat easier and more straightforward to code, but I think a lot of my frustration came from the fact that live regions and form controls have more immediate utility.

I think the next part of this bug should be identifying all of the forms and live regions on the site.

Forms of course include not just main content page forms (e.g. the user creation page modified in bug 1016), but miniature forms, such as the login form that's part of the navigation. We can mark up all of the forms with form-based landmarks and form controls.
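A miniature form like the login could carry its own landmark inside the navigation region; this sketch uses the spec's "form" landmark role, but the action URL, ids, and field names are made up:

```html
<div role="navigation">
  <!-- The login mini-form gets its own landmark so a screen reader
       user can jump straight to it -->
  <form role="form" action="/login" method="post">
    <label for="user">Account:</label>
    <input type="text" id="user" aria-required="true">
    <label for="pass">Password:</label>
    <input type="password" id="pass" aria-required="true">
    <input type="submit" value="Log in">
  </form>
</div>
```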

Once we identify live regions, we will need to identify which of them it is reasonable to mark as "live". For example, when a user picks a mood or a userpic, a JavaScript-enabled page with images enabled changes the displayed image as a live region. However, is it reasonable to ask a screen reader to announce this change? After all, the user has just picked the userpic or mood icon from a text-based drop-down list, which the screen reader has probably confirmed by reading the chosen element aloud. Is there any added benefit to announcing an image which has changed via the same text-based shortcut?
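If we decide the answer is no, the region can be explicitly marked as not live, since "off" is a valid aria-live value; the markup here is illustrative:

```html
<!-- aria-live="off" (the default) makes it explicit that changes to
     the preview image should not be announced -->
<div aria-live="off">
  <img src="happy.gif" alt="happy">
</div>
```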

In some cases, this investigation of what should be marked as "live" might make us rethink some of the JavaScript coding. A JavaScript-enabled browser makes certain features available in a pop-up when the user mouses over a profile icon. If that only happens on mouse-over, there's probably not a great reason to mark that region as live, since most screen reader users are unlikely to be using the mouse. However, perhaps that pop-up should happen on object activation, not only on mouse-over.
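One way to do that is to trigger the pop-up on keyboard focus as well as mouse-over; in this sketch, showPopup() and hidePopup() are hypothetical functions standing in for whatever the page's script actually does:

```html
<!-- Keyboard users reach the pop-up via focus/blur, mouse users via
     mouseover/mouseout; the handlers are placeholders -->
<a href="/profile/exampleuser"
   onmouseover="showPopup(this)" onmouseout="hidePopup(this)"
   onfocus="showPopup(this)" onblur="hidePopup(this)">exampleuser</a>
```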

useful web resources

 * http://wiki.codetalks.org/wiki/index.php/Web_2.0_Accessibility_with_WAI-ARIA_FAQ -- a good introduction. I don't go back to this as a reference but it's a useful tool to have around. Read in conjunction with http://www.w3.org/WAI/PF/aria-practices/
 * http://www.w3.org/TR/wai-aria/ -- this is the W3C spec, with all of the flaws and benefits that any W3C spec has. I couldn't have done any of the coding I've done without it, but you can't *learn* something from a spec like this. Use it as a reference.
 * http://dev.opera.com/articles/view/introduction-to-wai-aria/ -- excellent introductory lesson. Without this I would never have started writing my own WAI-ARIA. Pretty much anything Gez Lemon says about WAI-ARIA has been useful to me.
 * http://www.paciellogroup.com/blog/?p=106 -- introduction to landmarks.
 * http://www.marcozehe.de/2009/07/01/the-wai-aria-windows-screen-reader-shootout/ -- one of many comparisons showing that each of the various screen readers supports some but not all WAI-ARIA roles
 * http://wiki.codetalks.org/wiki/index.php/Set_of_ARIA_Test_Cases -- a suite of WAI-ARIA test cases. I have found these less useful for coding and more useful for seeing what whatever user agent/screen reader I'm using does when confronted with certain WAI-ARIA markup. If I am beating my head against the wall because JAWS or NVDA isn't responding to the code I'm writing, it might be because I'm writing bad code, but it might just be because the screen reader doesn't handle that WAI-ARIA yet. This lets me check.

Testing tools
A small number of tools which know about WAI-ARIA are available for free.
 * http://www.firevox.clcworld.net/ -- an extension for Firefox which builds screen reading functionality into the browser
 * https://addons.mozilla.org/en-US/firefox/addon/879 -- mouseless browsing, for keyboard-based navigation
 * http://www.nvda-project.org/ -- open source Windows-based screen reader
 * http://live.gnome.org/Orca -- screen reader on Linux (part of GNOME) -- I haven't tried this because GNOME is not accessible for me.
 * http://www.dolphinuk.co.uk/tryit.asp?id=1 -- the Windows-based Supernova has a free demo, which I have never tried
 * http://www.satogo.com/ -- a free Windows-based service which does screen reading. I haven't tried this.
 * http://www.gwmicro.com/Window-Eyes/Demo/ -- a 30-minute timed demo of the Windows-based Window-Eyes