Published on: 27th May 2021
With accessibility coming more and more to the forefront, after being an afterthought on the web for the longest time, there are lots of questions being asked.
At Frank, accessibility is something that we’ve really enjoyed having a greater focus on, and we’re often asked a couple of the same questions – truth be told, the answers are usually longer than the time we have to give them.
Who's the most accessible?
Honestly, I don’t know! For the reasons noted throughout, that’s a difficult one to measure.
But I'd stress that web accessibility isn't a competition, and probably shouldn't be treated as one. There are a few issues with viewing accessibility this way, and there are two that stand out to me:
- It's too easy for the focus to become your 'competitors' rather than your users - who should be the focus when thinking about accessibility.
- There isn't a level playing field from the start. Most websites aren't equal in functionality or design, in their requirements or their user base - at its best, each is a fine balance of all of these.
At its most basic, a black and white website with a few pages and only the most important information is a highly accessible website. It would probably tick every WCAG (Web Content Accessibility Guidelines) 2.1 box, even the AAA standards. But this website also forgoes pretty much everything else - design, functionality; there is none.
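To make that concrete: WCAG defines colour contrast as a ratio of relative luminances, and pure black on pure white scores the maximum 21:1, comfortably clearing even the AAA threshold of 7:1 for normal text. A minimal sketch of that calculation, using the sRGB relative luminance formula from the WCAG 2.1 definition (the function names here are our own, not from any particular tool):

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to its linear value (WCAG 2.1 definition)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB colour, 0 (black) to 1 (white)."""
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(a: tuple[int, int, int], b: tuple[int, int, int]) -> float:
    """WCAG contrast ratio between two colours: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted((relative_luminance(a), relative_luminance(b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background hits the maximum possible ratio of 21:1,
# well past the AAA requirement of 7:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

This is the sort of objective, mathematical check that automated tools handle well - the subjective questions come later.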
Contrast that with a beautiful website, one which pushes the boundaries of web technologies and provides users with a completely new experience. That website will be infinitely more difficult to make accessible - in some cases impossible, leading to different or auxiliary solutions. There's not much point comparing these sites, even if the first weren't so sparse; the difference between the thought and effort required for each remains vast.
While competition is great encouragement (who doesn't like winning?), the pursuit of 'winning' is sure to distract us from the original goal - as the pursuit of winning usually does. At the start we set out to make a website as accessible as possible for its users while balancing design and function alongside - and that remains the goal throughout.
Automated or manual accessibility testing?
Nobody would deny that automated testing is a great tool. Give an automated accessibility checker a series of webpages, and it'll find a lot of the small issues that would be difficult for a human to find, or that a human might simply miss.
Add to this that there are lots of options out there - automated testing tools both free and paid-for, proprietary and open-source. While they'll all claim to be the best, the reality is that most of them are great at what they do: they take black-and-white accessibility issues, run a series of pre-defined checks against them, and advise you whether or not they're a concern.
But accessibility isn't all objective. Even in best-case scenarios, there is an element of subjectivity.
There are rough estimates bouncing around for how many common accessibility issues automated tests can successfully find. On the lower end these hover at about 40%, though a more generous figure that most people might consider is 50%. I'd even say that in our experience, given that we're not often testing for AAA standards, we might be able to find as much as 60%. All to say that for some accessibility issues - usually around half - we simply need a human to determine whether or not something is accessible.
Images are a great example of this. Let's take alternative text... With few exceptions, images should have alternative text. Computers can certainly test for this, and they can determine where alternative text is missing.
However, where there is alternative text, they can't determine how relevant that alternative text is. A picture of some apples, for example, with the alternative text of 'oranges' would be fine as far as most automated tests are concerned - the image has a text alternative. But the alternative text is at best inaccurate - at worst, it's misleading. Many would consider this less accessible.
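As a rough illustration of the kind of binary check an automated tool runs (a sketch, not any real tool's implementation - the file names are invented), a missing-alt scan needs only the Python standard library:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that have no alt attribute at all -
    the kind of black-and-white check automated tools excel at."""

    def __init__(self) -> None:
        super().__init__()
        self.missing: list[str] = []

    def handle_starttag(self, tag: str, attrs: list) -> None:
        if tag == "img":
            attributes = dict(attrs)
            if "alt" not in attributes:
                self.missing.append(attributes.get("src", "(no src)"))

checker = MissingAltChecker()
# The apples image has the wrong alt text ('oranges'), but the check
# still passes it - only the logo, with no alt at all, gets flagged.
checker.feed('<img src="apples.jpg" alt="oranges"><img src="logo.png">')
print(checker.missing)  # ['logo.png']
```

Note that the apples-labelled-'oranges' image sails straight through: the attribute exists, so the check is satisfied. Judging whether the text is *relevant* still needs a human.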
A similar concern relates to 1.4.5 (Images of Text) of WCAG 2.1. If I have a poster on a page, there's no way for automated testing to determine anything other than that it's an image. It can check for alternative text, but if it's a poster with a lot of text, even alternative text probably won't be enough. We need to check for a full text alternative that may be separate from the image, taking the context of the page into consideration - something automated tests can't do.
And where we do provide accessible alternatives, making the original inaccessible item acceptable, it will still be flagged as an issue by automated testing, and marked down as such - context being crucial once more.
To that point, we do both at Frank. We'll always start with a series of automated tests - it helps us find a representative sample of pages to test against, and catches a lot of that low-hanging fruit that - in reality - there's a good chance we'd miss if we were trying to do everything manually. We'll always double-check those results, not in a line-for-line sense, but to see what does and doesn't make sense, which issues are repeated throughout the site, and which might be unique to one piece of functionality or one page.
And we'll follow up with manual tests, the sorts that computers can't do. We'll test with assistive technologies, we'll navigate without mice, and we'll experience the site ourselves, doing our best to put ourselves in the shoes of the user. There are also points of the WCAG guidelines that we know we need to manually test for before we can confidently say whether or not a website complies with them.
I'd hesitate to put a number to each - as noted, at a guess we could put a 60/40 split between the testing methods, but it's not a split we've ever measured. The reality is that we need to do both regardless.