Open source for designing next-generation digital hearing aids

At 64 Studio, we use the Linux kernel with real-time patches to ensure reliable, glitch-free I/O for our customers' demanding audio applications. Having source code and full control over the design of the system means that we can tweak the machine for the best possible performance on the target hardware. Typically, our end users are in the "pro audio" market: music production, recording, or broadcast. When an audio engineer switches on their new mixing desk, they probably don't realise that it's actually an embedded GNU/Linux device, albeit one that weighs a few hundred times as much as their Android phone.

Recently, we've been working on a rather different product which makes use of the same real-time Linux features that pro audio users already enjoy. We presented our work on real-time audio for mobile devices at the Linux Audio Conference in Parma, Italy in 2009. Following that presentation, we had an enquiry from Giso Grimm, a researcher on hearing augmentation algorithms at the University of Oldenburg in Germany. The trouble with designing next-generation digital hearing aids is that optimization and hardware miniaturisation are very expensive. If you pick a sub-optimal algorithm and build it into a hearing aid, you've just wasted a lot of money on a product that won't deliver. So researchers at the Haus des Hörens R&D facility in Oldenburg field test new algorithms on standard PC hardware, using a specialized multi-channel USB audio interface with I/O cables that connect to ear pieces.

Using a general-purpose operating system in place of highly optimized hardware presents a potential performance challenge. The PCs can run either GNU/Linux or Windows, but fortunately the real-time Linux kernel delivers better latency performance than Windows can. In a digital audio context, latency means the delay imposed by processing on the sound that the user hears. We can get away with a few milliseconds of delay, but if latency is too high, the brain begins to notice. The effect of excessive latency is not unlike watching a badly-dubbed movie, in which the lips of the actors are out of sync with the words; clearly, this would be unacceptable in a hearing aid field test.
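As a back-of-envelope sketch, the buffer's contribution to that delay is easy to estimate from the buffer size and sample rate (the two-period default below is a typical ALSA configuration, used here as an illustrative assumption; converter and driver delays add a little more on real hardware):

```python
def buffer_latency_ms(frames_per_period, sample_rate, periods=2):
    """Estimate the latency contributed by the audio buffer, in milliseconds.

    frames_per_period: samples the hardware processes per interrupt
    sample_rate: samples per second (Hz)
    periods: number of buffer periods (two is a common ALSA default)
    """
    return 1000.0 * frames_per_period * periods / sample_rate

# A typical low-latency setting: 64 frames, 2 periods, 48 kHz
print(buffer_latency_ms(64, 48000))  # about 2.7 ms
```

At that setting the buffer alone stays comfortably within the few-milliseconds budget; double the period size to 128 frames and you are already above 5 ms.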

At first, we considered building custom embedded hardware for the field tests, based around a single board computer with an Intel Atom processor. On reflection, we decided it would be better to be able to test very CPU-intensive algorithms before they had undergone any optimization. As a consequence, we targeted the Lenovo Thinkpad X200 notebook, which offers a Core 2 Duo CPU in a relatively small and lightweight package. We were then able to build a minimal, high-performance, yet stable GNU/Linux distro, which we codenamed Mahalia, for the researchers to use on the Thinkpads. User reports indicate that Mahalia is performing well, and another round of field tests is due to begin soon.

Daniel James
Daniel James is the director of 64 Studio Ltd, a company developing GNU/Linux products for OEMs and R&D labs. He was one of the founders of the consortium, which promotes the use of GNU/Linux and Free Software in the professional audio field.


I wonder if the Android phones do not have everything necessary to build an Open Source Hearing Aid?

We need to lean into technology with huge consumer numbers in order to take advantage of the economies of scale.

Another point I would make is that young people are running around with all kinds of electronics hanging off of every part of their body. The cosmetic reasons for hiding the ear piece have largely disappeared.


I'm in. I'm a former electrical engineer with congenital deafness in my right ear and hard of hearing in my left. I'm sure we could come up with some kind of affordable solution!

Great Zac.

I am currently in USA visiting my mom who will be 100 years old this year. But in ten days or so I will be back home in the Philippines. I really want a collaborator and will also try to involve some young people from the University of the Philippines.

Great to see your interest.


Thanks Bob!

There are some hearing aid chips you can buy that sound pretty exciting to play with and might be the basis of an "open source" hearing aid, just like the Arduino microcontroller is for less specialized open source projects.

It sounds like all you need is the programmer, a microphone, a receiver and a battery and you're ready to go... Well, you need to know how to program the DSP, which I assume is not for the faint of heart!

You know, another thing I'd love to do is accurately simulate hearing loss to raise awareness of it. The trick is to figure out how to completely eliminate sounds at a given frequency that fall below a given threshold.

I'd imagine you'd have to do an FFT of the speech signal, eliminate bins that are less than the threshold for a patient's audiogram and then reconstruct audio.

If it is done correctly, the playback of the modified audio would sound exactly the same to the patient but would probably be shocking to the normal-hearing spouse.
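A minimal sketch of that FFT idea in Python with NumPy (the audiogram format, the dB reference, and the function name are assumptions for illustration, not a clinically calibrated simulation):

```python
import numpy as np

def simulate_hearing_loss(signal, sample_rate, audiogram):
    """Silence FFT bins whose level falls below the listener's threshold.

    signal: 1-D array of audio samples
    sample_rate: samples per second (Hz)
    audiogram: list of (frequency_hz, threshold_db) pairs, sorted by
               frequency -- a hypothetical format, not a clinical standard
    """
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)

    # Interpolate the patient's threshold (dB) at each bin frequency
    ag_freqs, ag_levels = zip(*audiogram)
    thresholds_db = np.interp(freqs, ag_freqs, ag_levels)

    # Per-bin magnitude in dB (floored to avoid log of zero)
    mags = np.abs(spectrum) / n
    mags_db = 20.0 * np.log10(np.maximum(mags, 1e-12))

    # Eliminate bins below threshold, then reconstruct the audio
    spectrum[mags_db < thresholds_db] = 0
    return np.fft.irfft(spectrum, n)
```

A real implementation would process the audio in short overlapping windows rather than one giant FFT, but the bin-thresholding step would look much the same.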

Leaving in a few minutes for the Philippines so I won't reply after this for a while.

Yes, great idea.

There are so many things one can do algorithmically (and nonlinearly). I have ideas such as re-encoding sibilants (high frequency hissing sounds) as low frequency tonal markers. The speech would not sound natural but would be very intelligible.
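One way that idea might be sketched (the 4 kHz cutoff, 300 Hz marker tone, and whole-signal FFT are arbitrary choices for illustration; a real implementation would work frame by frame in real time):

```python
import numpy as np

def mark_sibilants(signal, sample_rate, cutoff_hz=4000.0, marker_hz=300.0):
    """Replace high-frequency sibilant energy with a low-frequency tone.

    The sibilant band above cutoff_hz is removed from the output, and its
    amplitude envelope instead modulates a marker_hz tone the listener can hear.
    """
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)

    # Envelope of the sibilant band: keep only content above the cutoff
    hf = spectrum.copy()
    hf[freqs < cutoff_hz] = 0
    envelope = np.abs(np.fft.irfft(hf, n))

    # Smooth the envelope with a ~5 ms moving average
    win = max(1, int(0.005 * sample_rate))
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")

    # Audible band: keep only content below the cutoff
    lf = spectrum.copy()
    lf[freqs >= cutoff_hz] = 0
    base = np.fft.irfft(lf, n)

    # Add the marker tone, amplitude-modulated by the sibilant envelope
    t = np.arange(n) / sample_rate
    return base + envelope * np.sin(2 * np.pi * marker_hz * t)
```

Whenever a sibilant occurs, the listener hears a low tone pulse in its place, which is the kind of unnatural-but-intelligible trade-off described above.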

And so on,

More in a few days.

Maybe you should contact me by email: robert dot laquey at gmail dot com


This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License.