A year in Cleanfeed accessibility

It’s almost exactly a year since Marc (co-founder of Cleanfeed) and I were invited as guests to the Blind Podcasters’ Roundtable for a thoroughly enjoyable discussion. We’ve made a steady stream of updates since then, so let’s follow up on where we were then, and where we are now.

In the live stream (produced using Cleanfeed, of course!) we enjoyed speaking to podcasters and audio producers globally who are doing regular productions but who happen to be from the blind and low vision community.

These producers interact with regular PC and Mac computers without a display, instead listening to audio descriptions and using keyboard control. If you’re a developer of any kind and this isn’t something you’ve experienced in actual use, I’d strongly recommend spending a few minutes watching an experienced screen reader user at work. It’s immediately insightful, and the first thing you’ll notice is how comfortable they are with the speed of a spoken UI. Within a few minutes you’ll be scratching the surface of how easy it is to make good (or bad!) software for blind and low vision use.

At the time, we were pleased to be so warmly welcomed by the community, many of whom were already using Cleanfeed. Part of this is because Cleanfeed was rather usable with a screen reader. It was pleasing to hear from a respected BBC journalist, for example, how Cleanfeed was enabling his work.

A lot of this is down to a clear and clean user interface, something that’s always been a priority for us in any environment. This is a direct result of our studio and live production background; any producer or broadcast engineer under pressure needs to be able to find functionality quickly and reliably, and have it do the right thing. This applies to everyone, not just experienced users, and not just those interacting through a visual display.

So those are good foundations, but they’re a long way from Cleanfeed being a truly accessible application.

Some of the visual effects we use create specific problems for accessibility. We learned that things like transparency or slide-out panes in the UI were leaving interface elements active behind them. People sent us videos of them using Cleanfeed with screen readers, sometimes using parts of the interface while they were switched ‘off’, with inconsistent or disorientating results. In more than one case, we saw people using the main interface whilst it was ‘hidden’ behind an active screen to invite a new guest!
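One way to close that hole (sketched here with illustrative markup, not Cleanfeed’s actual code) is to make the covered content inert while a pane is open. The `inert` attribute removes an element and its descendants from both the tab order and the accessibility tree; `aria-hidden="true"` hides it from assistive technology as a fallback where `inert` isn’t supported:

```html
<!-- Illustrative sketch only: element names and labels are assumptions.
     While the invite pane is open, the console behind it should be
     unreachable by both keyboard and screen reader. -->
<main id="console" inert aria-hidden="true">
  <!-- mixer, guests, recordings: inactive until the pane closes -->
</main>

<aside role="dialog" aria-modal="true" aria-label="Invite a guest">
  <!-- only this pane is exposed while it is open -->
</aside>
```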

Over the last 12 months we’ve tightened up the use of descriptive and annotated screen elements, to really clarify the path through the interface. Things like:

  • All buttons are tagged correctly and annotated descriptively, aiding screen reader navigation. They also help with regular keyboard navigation on the display.
  • The key regions are highlighted to screen readers and these are probably the best way to navigate in and around the application. These regions are the navigation bar, main screen, your microphone, each guest. Check JAWS “web verbosity” settings if these don’t show up.
  • The various prompt screens are now “modal” dialogues, guiding you when one of them requires your attention, versus returning to the main console. This greatly reduces the clutter in the audio description.
  • We’ve gradually increased the contrast of text over time. Producers using the visual interface probably didn’t notice a change, as we introduced it step by step. But our stylish ‘dark’ UI was a little too stylish before!
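The first three items above can be sketched in markup. The element names and labels here are illustrative assumptions, not Cleanfeed’s real markup; the pattern is real `<button>` elements with descriptive accessible names, labelled landmark regions, and `aria-modal` dialogues:

```html
<!-- Illustrative only: names and structure are assumptions. -->
<nav aria-label="Navigation bar">…</nav>

<main aria-label="Main screen">
  <section aria-label="Your microphone">
    <!-- a real <button> with a descriptive accessible name,
         rather than a styled <div> -->
    <button aria-label="Mute your microphone">Mute</button>
  </section>
  <section aria-label="Guest: Alex">…</section>
</main>

<!-- aria-modal tells the screen reader that the rest of the page
     is not currently operable, cutting the clutter it announces. -->
<div role="dialog" aria-modal="true" aria-labelledby="invite-title">
  <h2 id="invite-title">Invite a guest</h2>
</div>
```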

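On the contrast point, WCAG 2.x defines the ratio used to judge text against its background; a minimal sketch of the standard formula (not Cleanfeed-specific code) in JavaScript:

```javascript
// WCAG 2.x contrast ratio: linearise each sRGB channel, weight into
// a relative luminance, then compare lighter against darker.
function channel(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// White on black is the maximum possible ratio, 21:1.
console.log(contrastRatio([255, 255, 255], [0, 0, 0]).toFixed(1)); // "21.0"
```

WCAG AA asks for at least 4.5:1 for normal-size body text, which is the kind of threshold a “too stylish” dark theme can quietly miss.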
This isn’t the end; there are still more improvements to be made. Some of the remaining issues stem from differences between screen readers, or in how different people use the same screen reader software. There’s also the issue of keyboard shortcuts, which are favoured by many blind users but present a challenge in Cleanfeed. A key press to “mute the guest” doesn’t play well with an environment where there can be more than one guest; and the concept of multiples applies to most of the interface, such as recordings, clip players and microphones. Some experimentation is needed here, and if you have any suggestions, please let us know.
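One plausible direction, purely a sketch of ours rather than anything Cleanfeed has committed to, is to scope ambiguous shortcuts to whichever region currently has focus, so a single “mute” key acts on the focused guest and does nothing when no guest is focused. All names here are hypothetical:

```javascript
// Hypothetical sketch: resolve a key press against the focused
// region. Keys like "m" (mute) are scoped, because "the guest" is
// ambiguous when there can be several; "?" stays global.
function resolveShortcut(key, focusedRegion) {
  const scoped = {
    m: (r) => (r ? { action: "mute", target: r } : null),
    r: (r) => (r ? { action: "record-arm", target: r } : null),
  };
  const global = {
    "?": () => ({ action: "help" }),
  };
  if (key in scoped) return scoped[key](focusedRegion);
  if (key in global) return global[key]();
  return null; // unhandled: leave it to the browser or screen reader
}

console.log(resolveShortcut("m", "guest-2")); // acts on the focused guest
console.log(resolveShortcut("m", null));      // no focused guest: do nothing
```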

Everything we’ve learned so far is now a consideration in every new development of Cleanfeed going forward, and we expect we’ll continue learning.

One of the things we’ve wanted to avoid is having a substantially different implementation of things for audio navigation, just as ring-fencing Cleanfeed producers into different ‘classes’ has never seemed right. There’s too much crossover between the requirements of a radio producer, podcaster, journalist and so on; a well-designed tool moves smoothly across the spectrum of people using it. The differences between interacting with Cleanfeed by display or by audio can be significant in places, but so far improvements to one type of interaction have continued to help the other, and vice versa. So they’ll remain developed together as one, not separately.

By now, we’ve learned a great deal about how to design a web application so that it’s accessible. At some point I’d like to share in-depth thoughts, from a developer’s point of view, on HTML, JavaScript, ARIA and some of the quirks that come up in practice. If you’re interested in this, keep an eye on this blog and our social media channels.

Thanks to everyone whose feedback resulted in these improvements, especially to Óran O’Neill and Jonathan Mosen for their informative discussions and testing; and Mohammed Laachir at Vispero for assistance with the JAWS screen reader.