A few dozen accessibility enthusiasts assembled at the Microsoft Digital Eatery Unter den Linden to mingle and pick up fresh inspiration from two presenters sharing some of their practical knowledge. This was already the 6th event of the A11Y Berlin meetup group.
I can’t believe it’s really been over a year since I first (and last) attended the Accessibility Berlin meetup; the dates simply never matched my calendar. But this time I managed to drop in, and once again I enjoyed the laid-back atmosphere and the welcome inspiration from the talks. This is a well-curated series of events, attracting a very pleasant crowd.
Neither talk was overly technical or “geeky”. Quite the contrary: if I were to summarize them under one headline, I would choose “a common-sense approach to low-hanging fruit”. Both were great examples of how accessibility is not all about nerding out over code – it is just as much about process, mindset and making proper use of available tools (Stephanie Nemeth shared her sketchnotes).
The 30% that automated testing can catch
The first talk – after a short presentation by evening host and sponsor Microsoft introducing their Microsoft Inclusive Design Toolkit – carried the title “No more excuses – Getting the first 30% of accessibility issues” (slides). Tim von Oldenburg recounted how the gov.uk team – well-known for their great accessibility work – ran an experiment: they built the world’s least accessible website and tested it with common automated a11y testing software. It turned out that only about 30% of the accessibility issues were discoverable that way.
What could have been a depressing number, Tim turned into a positive result: after all, it implies that eliminating 30% of accessibility issues is a baseline everyone can achieve. No excuses – hence the title. He then went on to present six steps for reaching those 30%, and going beyond.
In addition to the general reminder that improvement starts with education – building competence for oneself, in the team, and as an organisation – he also suggested that many tools already in use can help discover accessibility issues: if a team uses Selenium for testing, make use of its ability to test keyboard accessibility; when using Lighthouse, look into its Accessibility audit; while designing UIs in Sketch, install a plugin that warns of low colour contrast; etc. This is great advice, and – as was to surface in the Q&A afterwards – one of many ways teams can get started without even needing approval from business owners: the tools are already in daily use.
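To make the “reuse what you have” idea concrete, here is a minimal sketch of the kind of keyboard-accessibility check such tooling automates. It is a hypothetical, purely static heuristic using only Python’s standard library – a real Selenium test would drive a browser and send actual Tab key presses – and the two rules it applies (interactive elements pulled out of the tab order, click handlers on non-focusable elements) are just illustrative examples, not an exhaustive ruleset:

```python
from html.parser import HTMLParser

# Elements that are keyboard-focusable by default.
INTERACTIVE = {"a", "button", "input", "select", "textarea"}

class KeyboardCheck(HTMLParser):
    """Crude static stand-in for a keyboard-accessibility test."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Rule 1: interactive element removed from the tab order.
        if tag in INTERACTIVE and a.get("tabindex") == "-1":
            self.issues.append(f"<{tag}> removed from tab order")
        # Rule 2: click handler on an element that cannot receive focus.
        if tag not in INTERACTIVE and "onclick" in a and "tabindex" not in a:
            self.issues.append(f"<{tag}> clickable but not focusable")

kb = KeyboardCheck()
kb.feed('<div onclick="go()">Go</div><button tabindex="-1">Hi</button>')
for issue in kb.issues:
    print(issue)
```

The point is not the two rules themselves, but that a check like this can ride along in a test suite the team already runs on every commit.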
Other suggestions related to the next step: eliminating the issues discovered. While test results can often be rather overwhelming, dealing with one site section after the other, or with no more than one type of problem at a time (e.g. first fix all occurrences of missing alt texts), can make the task less daunting. The talk also highlighted how important it is to make results actionable; instead of outputting dry test reports, tools that visualise results are much easier to comprehend (not part of the talk, but this reminded me of an example that Gunnar Bittersmann had shared two days earlier; some of these things are surprisingly simple to catch even with a few lines of CSS).
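As a sketch of the “one problem type at a time” approach, here is a hypothetical standard-library scanner that counts only one issue category – images without an alt attribute – and nothing else; once those are fixed, you would move on to the next category (note that alt="" is valid markup for purely decorative images, so only a genuinely absent attribute is flagged):

```python
from html.parser import HTMLParser

class AltTextCheck(HTMLParser):
    """Counts <img> elements that lack an alt attribute entirely."""

    def __init__(self):
        super().__init__()
        self.missing = 0
        self.total = 0

    def handle_startendtag(self, tag, attrs):
        # Treat self-closing <img ... /> the same as <img ...>.
        self.handle_starttag(tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        self.total += 1
        # alt="" is fine (decorative image); a missing attribute is not.
        if "alt" not in dict(attrs):
            self.missing += 1

alt_check = AltTextCheck()
alt_check.feed(
    '<img src="logo.png" alt="Logo">'
    '<img src="deco.png" alt="">'
    '<img src="chart.png">'
)
print(f"{alt_check.missing} of {alt_check.total} images lack an alt attribute")
```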
Most importantly, though, the speaker encouraged the audience to test under the most realistic circumstances possible – making automated tests somewhat representative of the real world. And, he finally added, “aim to go beyond the 30%”: while automated testing is a valuable tool, nothing beats going out to test with real people.
Accessible PDF – good tools, little awareness
The second talk was one I had been particularly looking forward to: “Accessible PDF” (slides). While the accessibility of PDF documents is a topic that has been surfacing more often recently (compliments of EU Directive 2016/2102), I had never found the time to dig very deep. The speaker, Klaas Posselt, is likely one of the best people to deliver such an introductory talk – after all, he is currently co-authoring what he himself described as the world’s only comprehensive book on PDF accessibility (in German).
The core issue with PDF is that, unlike the web, the format was invented to ensure that documents always look exactly as laid out. And while PDF is really good at that, it also means that even reading a PDF on a smartphone is difficult – not to speak of assistive technology like screen readers, or the user’s inability to switch to a dyslexia-friendly typeface. In this regard, Klaas described the status of PDF as “like the web ten years ago”. The two-year-old PDF 2.0 standard will bring powerful a11y features (incl. responsiveness), but is not yet widely adopted.
Klaas gave a rundown of common strategies to make PDF accessible. While the so-called “Reflow” technique – making text behave like HTML – comes with a lot of flaws, the way to go is “Tagged PDF”: just like HTML, a Tagged PDF contains invisible tags for semantics and reading order, alt texts for images, and more. And while the tools to create Tagged PDF are actually pretty good, the problem commonly lies in the source document: if the original document is not created with accessibility in mind (e.g. using word processor styles to add semantics, and marking up images with alt texts), no PDF software can create an accessible file from it.
“Use the tools the right way” is hence the most important takeaway from this presentation. Using templates, using styles and semantics in Word documents, adopting workflows that use the correct fonts and hyphenation that does not insert hard hyphens into the text, and selecting “Tagged PDF” when exporting – these are some of the steps that go a long way. And while MS Word, LibreOffice and even some server-side PDF generators get good grades, this also means that a PDF created from Google Docs or LaTeX will not be accessible (neither of those seems to support Tagged PDF as of today).
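Whether an exported file at least claims to be tagged can be checked mechanically: a Tagged PDF must declare /MarkInfo << /Marked true >> in its document catalog. The sketch below is a deliberately crude byte-level scan with no dependencies – it will miss the flag if the catalog sits inside a compressed object stream, and a positive result only means the file says it is tagged, not that the tags are meaningful – so treat it as a first smoke test, not a verdict:

```python
def looks_tagged(pdf_bytes: bytes) -> bool:
    """Rough heuristic: does the raw PDF declare the Tagged PDF flag?"""
    # Normalise runs of whitespace so "/Marked   true" still matches.
    flat = b" ".join(pdf_bytes.split())
    return b"/Marked true" in flat

# Minimal hand-written fragment of a tagged PDF's catalog, for illustration.
sample = (
    b"%PDF-1.7\n"
    b"1 0 obj\n<< /Type /Catalog /MarkInfo << /Marked true >> >>\nendobj"
)
print(looks_tagged(sample))  # True for this minimal fragment
```

A real-world check should use a proper PDF library (or a dedicated checker, see below) so that compressed streams are decoded first.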
The talk also pointed towards tools for testing. PDF Accessibility Checker 3 was named as the main recommendation, but Acrobat Preflight and Adobe Acrobat also contain some checking tools; the Matterhorn protocol, on the other hand, is a checklist for PDF accessibility.
And by the end of talk #2, we had come almost full circle: Klaas, too, talked about percentages that are easy to reach. In the case of PDF, he suggests that 80% of PDF accessibility is rather easy to achieve. And in the Q&A, even the limitations of automated testing surfaced once more: not surprisingly, the issues are the same as with websites – while some issues can easily be detected, others are invisible to automated tools (e.g. when text elements are simply marked as “artifacts”, i.e. semantically irrelevant artwork; this will pass the test, but the text would be entirely inaccessible).
Maybe the most disconcerting answer, however, came to the question of whether commercial publishers of PDF ensure accessibility: it turns out that creating accessible PDF is not widely considered in the publishing industry. It almost makes the state of accessibility on the web sound golden by comparison.
A personal note: I am currently preparing a proposal for a “Hacks/Hackers Berlin” meetup on the accessibility of information visualisations and interactive maps. Do you have hands-on expertise or know somebody who has? Please get in touch, we’re looking for speakers to talk about inclusive design in data journalism.