The state of accessibility in Linux and open source software

Spencer Hunley is an autistic professional, former Vice Chair of the Kansas City Mayor's Committee for People with Disabilities, and current board member of the Autism Society of the Heartland & ASAN's Kansas City chapter. In August, Spencer will be giving a talk, Universal Tux: Accessibility For Our Future Selves, at LinuxCon in Chicago. He also gave a talk, Maximizing Accessibility: Engaging People with Disabilities In The Linux Community, at LinuxCon North America 2013.

In this interview, Spencer provides an update on the state of accessibility in Linux and open source software.

What have you been doing in the area of accessibility since I saw you speak at LinuxCon last summer?

Shortly after LinuxCon last year, Carla Schroder started a Google+ community called Universal Tux, and I've been very active as one of its moderators, alongside Carla and three others. We've created and kept updated a list of accessibility priorities, which includes a few things that are relatively simple to fix as well as a few loftier goals.

We're starting to focus on documentation: many distributions have built-in accessibility, but the information on how to use it can quickly go out of date as applications and the kernel are updated. The same applies to accessibility applications and programs, for roughly the same reasons. Documentation is, fortunately, an area many people can get involved with, even if they are unfamiliar with programming or code. I think Universal Tux is gaining some traction, and we're getting noticed. Daniel Fore of elementary OS is actively participating in discussions, and people from other distros are joining in as well.

On my own, I've been between jobs for a bit, though thankfully that seems to be stabilizing. I've been trying out some different distributions: Jonathan Nadeau's SONAR and Vinux. Staying current with accessibility is not easy, but it's a wonderful feeling when you watch things move so quickly in the right direction.

I also recently joined the local Kansas City chapter of the Autistic Self-Advocacy Network, and am a current board member of the Autism Society of the Heartland.

Which open source accessibility projects and people are "the ones to watch" right now, and why?

Dasher is an application that allows text input through a variety of means, using an interface that assists the user via text prediction (its developers call it a probabilistic predictive model). It's available for both Linux and Windows and is simple to use, almost like playing a video game at times. It's a great program that keeps expanding the variety of supported input devices (for example, breath control or a tilting platform can drive the interface), and with practice it can sometimes exceed traditional keyboard typing speeds. Look for Dasher to keep innovating, especially as a cross-platform application.
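
To give a flavor of what "probabilistic prediction" means here, the toy Python sketch below builds a bigram character model and suggests likely next characters. It is not Dasher's actual model (Dasher uses a far more sophisticated adaptive language model); it only illustrates the underlying idea.

```python
# Toy character-level bigram model, illustrating the kind of
# probabilistic prediction Dasher builds its zooming interface on.
# NOT Dasher's real model; a minimal sketch only.
from collections import Counter, defaultdict

def train(text):
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    return counts

def predict(counts, current, k=3):
    """Return the k most probable next characters with probabilities."""
    following = counts.get(current)
    if not following:
        return []
    total = sum(following.values())
    return [(ch, n / total) for ch, n in following.most_common(k)]

model = train("the quick brown fox jumps over the lazy dog " * 10)
print(predict(model, "t"))   # [('h', 1.0)] -- 'h' always follows 't' here
```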

Orca, the screen reader/magnifier, is still a perennial favorite with a lot of nice features. It hasn't aged very well as the applications around it have moved through their own development cycles, but the development team, led by Joanmarie Diggs, is working hard at fixing bugs and improving Orca's versatility. One of the more prominent points of contention is that Orca can't make an inaccessible piece of software accessible; often the fix has to happen in the application itself, and it is sometimes as simple as implementing accessibility support or updating a few lines of code. This project has been around for a long time, however, and is a staple for those with visual impairments.
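
The "few lines of code" in question are often nothing more exotic than giving a widget an accessible name. Here is a minimal GTK 3/PyGObject sketch; the button and its label are invented for illustration, but the accessibility calls are the standard ATK ones:

```python
# Minimal sketch: giving a GTK 3 widget an accessible name so that a
# screen reader like Orca has something meaningful to announce. An
# icon-only button is a classic "inaccessible" control that one line fixes.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

window = Gtk.Window(title="Accessible name demo")
button = Gtk.Button()
button.set_image(Gtk.Image.new_from_icon_name("document-save",
                                              Gtk.IconSize.BUTTON))

# Without this, a screen reader may only say "button"; with it,
# Orca can announce "Save document".
button.get_accessible().set_name("Save document")

window.add(button)
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()
```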

There's also Andy Lin's work on, and promotion of, Google Glass as assistive technology. Making Google Glass accessible and using it as assistive technology is a magnificent idea, with significant implications for people with disabilities. For those with hearing impairments, it can provide real-time text transcription of what someone is saying. For those with visual impairments, it can describe the environment around them, as well as what is directly in front of them. Using voice control, people with mobility impairments can operate a smartphone, computer, and so on. It can also supplement other assistive technology devices by providing feedback, all at a cost surprisingly lower than that of many assistive technology devices on the market today. And because Google Glass is relatively mainstream and used by people regardless of ability, it is free of the stigma many AT devices are saddled with, promoting the inclusion of people with disabilities in society.

Enable Viacam (eViacam), similar to GNOME's MouseTrap, allows the use of a mouse by just moving your head. Requiring only a computer and a webcam, it's a FOSS alternative to much more expensive options for those with mobility impairments. eViacam, combined with Dasher, provides an attractive alternative to proprietary AT software.
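
As a rough illustration of the technique (though not eViacam's actual implementation, which tracks facial features far more precisely), here is a toy Python sketch that maps webcam face position onto the mouse pointer, assuming the opencv-python and pyautogui packages are installed:

```python
# Toy sketch of the idea behind eViacam: detect the user's face with a
# webcam and map its movement onto the mouse pointer.
import cv2
import pyautogui

pyautogui.FAILSAFE = False   # don't abort when the pointer hits a corner
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
screen_w, screen_h = pyautogui.size()
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # Map the face centre from camera space to screen space,
        # mirrored horizontally so the pointer follows head movement.
        cx = 1.0 - (x + w / 2) / frame.shape[1]
        cy = (y + h / 2) / frame.shape[0]
        pyautogui.moveTo(int(cx * screen_w), int(cy * screen_h))
    cv2.imshow("preview", frame)
    if cv2.waitKey(30) == 27:   # press Esc to quit
        break

camera.release()
cv2.destroyAllWindows()
```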

James McClain, Palaver, and VoxForge: Speech recognition has been a top demand for those with physical, mental, and developmental disabilities, but it always seems to fall short of expectations. Last year, James McClain released a public beta built on a novel idea: using Google's voice APIs on the back end. With some excellent features and continuing development, Palaver and VoxForge could bring quality speech recognition to the Linux desktop. In the same vein is LiSpeak, a voice command system for Linux distros built on Palaver's base.
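
The architecture is easy to demonstrate. The sketch below uses the third-party SpeechRecognition package (not Palaver's own code) to capture microphone audio and hand it to Google's recognizer, the same basic division of labor Palaver adopted:

```python
# Ship audio off to Google's recognizer instead of decoding it locally,
# sketched with the SpeechRecognition package
# (pip install SpeechRecognition; the microphone also needs PyAudio).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # Uses Google's free web speech API under the hood.
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as err:
    print("API request failed:", err)
```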

The Open Prosthetics Project: As 3D printers become cheaper, their potential uses have expanded. Jonathan Kuniholm of the Open Prosthetics Project, also a keynote speaker at this year's LinuxCon North America, endeavors to provide designs for efficient and cost-effective prosthetics that people can assemble and use independently. What caught my eye was their motto: "Prosthetics shouldn't cost an arm and a leg."

Which accessibility features or services are still missing in open source operating systems?

For mainstream distros and users with disabilities, speech-to-text and text-to-speech interfaces come to mind. For many, Dragon NaturallySpeaking is the one remaining obstacle on their path to Linux. Specifically, a core framework that can handle spoken commands and dictation, and also work with Braille displays, would be greatly welcomed.
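
On the output side, Linux does already have a shared layer: Speech Dispatcher, which Orca itself speaks through. The minimal sketch below uses its Python bindings (python3-speechd on Debian/Ubuntu); the missing piece is an equivalent shared framework on the input side for dictation and spoken commands.

```python
# Minimal text-to-speech through Speech Dispatcher, the common speech
# layer Orca talks to (Debian/Ubuntu package: python3-speechd).
import speechd

client = speechd.SSIPClient("demo")
client.set_language("en")
client.set_rate(0)    # -100 (slowest) to +100 (fastest)
client.speak("Hello from Speech Dispatcher.")
client.close()
```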

Documentation, though not a disability-specific issue, is very important. After all, many people with disabilities have no idea about Linux or FOSS software and operating systems, and up-to-date information is vital to being able to use any built-in accessibility. The Accessibility-HOWTO is in desperate need of a complete overhaul. The bright side is that this is relatively easy to fix and wouldn't require major changes to code.

Built-in accessibility support that is easy to use and understand is hard to find in many distributions. Can you tell me the key combination that activates it in Ubuntu? How about in any other distro? The fact is that although the support is there, it may not be easy to locate or use. When addressing this, focusing on independence is vital. No one wants to have to call on someone else to help them install a new OS or use an application. This is especially true for people with disabilities; the learning curve can be nearly impossible, which leaves little in the way of choice in the FOSS world, depending on your abilities.

Finding accessibility projects is easy, but finding ones that are consistently developed is difficult. That isn't really anyone's fault; other projects simply become more popular, more important, and more consistently used by a large user base. However, as more accessibility projects start to intersect with mainstream dreams of future technology (voice control, speech recognition, home automation, etc.), I am optimistic that they will receive more attention and, hopefully, more development.

To get in touch with Spencer Hunley, you can:

Rikki Endsley is the Developer Program managing editor at Red Hat, and a former community architect and editor for Opensource.com.

4 Comments

I have tried to convince my vision-impaired friends to try Linux, without success. The main reasons for their resistance are poor documentation, which makes learning hard; instability (e.g., Orca stops reading the screen); and developers' hunger for new frameworks and for rewriting everything "from scratch".

Ubuntu/Vinux has many tools that work _somehow_ with a screen reader but are actually quite unusable. In Gedit and Nautilus, not all controls are read by Orca, so the user cannot be sure which particular window or tab is active. From tests with Firefox, we found that Gmail is read better on Windows than on Linux. With hard learning, better results could probably be had using just the terminal, w3m, and Vim. But again, you cannot even select all text and copy it in GNOME Terminal without a mouse anymore, because the hotkeys for that have been dropped!

Another problem with Orca is switching from one language to another. I have developed scripts that use the expect command to pass the necessary parameters on the command line, but it is a hack.
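
(For readers unfamiliar with the technique, the sketch below shows the general shape of such a script, written with Python's pexpect module rather than classic expect. The command-line flag and the prompt strings are hypothetical stand-ins; the exact invocation depends on the Orca version.)

```python
# Sketch of the approach: spawn an interactive setup program and answer
# its prompts automatically. The "orca --text-setup" flag and the prompt
# fragments below are hypothetical and will vary by Orca version.
import pexpect

def switch_orca_language(option):
    child = pexpect.spawn("orca --text-setup", encoding="utf-8")
    child.expect("speech system")   # hypothetical prompt fragment
    child.sendline("1")
    child.expect("language")        # hypothetical prompt fragment
    child.sendline(option)
    child.expect(pexpect.EOF)

switch_orca_language("2")   # e.g. pick the second language in the menu
```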

The main showstopper was the closed-source program Skype, which works well on Windows, but on Linux you can't even install it without sighted assistance. And even if you do install it, you cannot use it properly with Orca.

If any open source developers care about accessibility, I recommend developing a good command-line interface for the program, or at least testing it by only listening to Orca, without looking at the screen. I bet they will find places that are not reachable with hotkeys, Tab, and the arrow keys.

[Apologies for duplicate post - forgot about the 'reply' link]

Valdis Vitolins, you make an excellent point. A lot of programs are barely passable using Orca for accessibility, and even then they're not entirely usable. In many ways Linux is a bit behind in accessibility for people who are blind or have mobility issues. Command-line interfaces were great decades ago for people who could not see the screen, since all the screen reader had to do was read text, but now screen readers have to contend with the complex and advanced graphical environments available today. I don't think we should roll back the tide of progress, but we should at least maintain, and add to, the functionality and accessibility of the most basic interfaces.

In reply to Valdis Vitolins

For me, the big problem with accessibility tools is the distance between a graphical interface and an oral/aural interface. Instead of trying to screen-scrape, or to take leftovers from the GUI and make something fit, use an API to drive the application and read state or content from it.

In my work with speech user interfaces, I cannot tell you the number of times I have wished for internal information about the application and how it interpreted its data, so I could write a proper grammar instead of a hacked-up, keystroke-injecting toy.
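
(On Linux, an API of this kind does exist: AT-SPI, the assistive technology interface GNOME applications expose. The sketch below, assuming the python3-pyatspi package is installed, reads application state directly from the accessibility tree instead of scraping the screen.)

```python
# Read application state through AT-SPI via the pyatspi bindings
# (Debian/Ubuntu package: python3-pyatspi). Instead of scraping pixels,
# walk each running application's accessibility tree and read roles
# and names directly.
import pyatspi

def dump(node, depth=0):
    """Recursively print an accessible object's role and name."""
    print("  " * depth + f"{node.getRoleName()}: {node.name!r}")
    for child in node:
        dump(child, depth + 1)

desktop = pyatspi.Registry.getDesktop(0)
for app in desktop:
    print("Application:", app.name)
    # Uncomment to dump each application's full widget tree:
    # dump(app)
```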

Brokenhands, that is an excellent idea. I have tried to use screen readers and other tools with little success, and this seems to be the main culprit: the reader will try to 'read' parts of the GUI that either aren't readable or aren't meant to be read, bringing in noise and unwanted information that can confuse and confound the user. This is the problem I run into all the time with Orca and other screen readers.

I agree that there is a big difference between visual and auditory interfaces: with a visual interface you can pack a lot of information into a compact space; not so much with an aural one. I think that is the greatest challenge: creating an auditory interface that is as useful and flexible as our current visual ones. Perhaps we're trying to augment the GUI when we should create an AUI instead.

In reply to Brokenhands

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.