Why Intel made Stephen Hawking's speech system open source


Intel has announced the release of Stephen Hawking's speech system as open source, encouraging innovation and improvements that could open up the technology to people with physical disabilities throughout the world.

Stephen Hawking, one of the greatest scientific minds of our time, was diagnosed with ALS at the age of 21. The disease slowly paralyzed him and eventually took away his ability to speak, but with the help of a unique speech system, he found his voice again.

In 1997, Intel designed a custom speech system that allows Hawking to communicate with the world. In 2004, text-to-speech functionality provided by NeoSpeech gave him his iconic voice.

The speech system was closed source and proprietary licensed, which meant that only Intel engineers could work on improvements. Everyone else who wanted to improve the technology had to build their own system from scratch—until now. Late last year, Intel announced plans to open source all code for Hawking's speech system. The company released it on GitHub earlier this year under the Apache License, version 2.0.

How Stephen Hawking's speech system works

The speech system comprises three main parts, which work together as follows:

  1. An infrared sensor mounted on Hawking's glasses detects movements in his cheek.
  2. The sensor's signal is sent to a software platform that lets Hawking navigate the system without using his hands. He can use it to perform a range of tasks, including moving a mouse pointer and typing on a virtual keyboard.
  3. A text-to-speech engine takes the text he writes and turns it into speech in his iconic voice.
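The three stages above can be sketched as a simple pipeline. This is a hypothetical illustration only, not the actual system (Intel's real software, ACAT, is far more sophisticated): a sensor event selects the letter currently highlighted on a scanning virtual keyboard, and the resulting text is handed to a text-to-speech stage. All class and function names here are invented for the sketch.

```python
from dataclasses import dataclass, field


@dataclass
class CheekSensor:
    """Stands in for the infrared sensor: emits True on a cheek twitch."""
    events: list  # pre-recorded True/False samples, for this sketch only

    def read(self) -> bool:
        return self.events.pop(0) if self.events else False


@dataclass
class VirtualKeyboard:
    """Scans through letters; a sensor event selects the highlighted one."""
    letters: str = "abcdefghijklmnopqrstuvwxyz "
    cursor: int = 0
    buffer: list = field(default_factory=list)

    def step(self, selected: bool) -> None:
        if selected:
            self.buffer.append(self.letters[self.cursor])
        self.cursor = (self.cursor + 1) % len(self.letters)


def speak(text: str) -> str:
    """Placeholder for the text-to-speech stage."""
    return f"[TTS] {text}"


# Drive the keyboard with a short pre-recorded event stream: the cursor
# starts at 'a' and advances one letter per scan step, so seven empty
# steps bring it to 'h', and the two True events select 'h' then 'i'.
sensor = CheekSensor(events=[False] * 7 + [True, True])
keyboard = VirtualKeyboard()
for _ in range(9):
    keyboard.step(sensor.read())

print(speak("".join(keyboard.buffer)))  # [TTS] hi
```

The design choice worth noting is the separation of the three stages: because the sensor, navigation, and speech layers only exchange simple events and text, any one of them can be replaced without touching the others, which is exactly what makes an open source release useful to adapters.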

Why make this technology open source?

By releasing this technology as open source, Intel has allowed anyone in the world to make improvements to the technology. With a computer, an idea, and some motivation, you could make improvements right now.

Prior to the release of this technology as open source, any engineer outside of Intel would have had to design the entire speech system from scratch just to make one small improvement. Now, with all of Intel's hard work out in the open, it is easy for anyone to build on top of the technology that has already been designed.

This should dramatically increase the rate of improvement of the software, which will result in better quality and more accessible technology.

Who can this technology help?

Unfortunately, not everyone with motor neuron disease has access to, or the funding for, a speech system like this, which can render communication impossible. But with the release of this technology as open source, the future looks bright. As the quality and accessibility of this technology rise, more and more people with life-changing physical disabilities will be able to use it and begin to communicate again.

Today, more than 3 million people live with motor neuron disease or quadriplegia and find it difficult or impossible to communicate. Professor Hawking's speech system can be adapted to suit each person's physical abilities. For example, the technology can be developed to react to blinking, eyebrow movements, touch, and other subtle inputs.
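One way to picture that adaptability is an input layer that maps whatever trigger a person can produce onto the same "select" action. The sketch below is purely illustrative (the names `TriggerRouter`, `bind`, and `fire` are invented for this example, not part of Intel's code):

```python
from typing import Callable, Dict


class TriggerRouter:
    """Routes any bound physical trigger to the same 'select' action."""

    def __init__(self, on_select: Callable[[str], None]):
        self.on_select = on_select
        self.bound: Dict[str, bool] = {}

    def bind(self, trigger: str) -> None:
        """Enable a physical trigger (blink, eyebrow raise, touch, ...)."""
        self.bound[trigger] = True

    def fire(self, trigger: str) -> None:
        """Deliver a trigger event; unbound triggers are ignored."""
        if self.bound.get(trigger):
            self.on_select(trigger)


# Configure the router for a user who can blink and raise an eyebrow.
selections = []
router = TriggerRouter(on_select=selections.append)
router.bind("blink")
router.bind("eyebrow_raise")

router.fire("blink")  # bound, so it is recorded as a selection
router.fire("touch")  # never bound for this user, so it is ignored
print(selections)     # ['blink']
```

Because the rest of the system only ever sees "select" events, supporting a new physical ability means adding one binding rather than rewriting the keyboard or speech layers.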

Learn more

Want to improve Professor Hawking's speech system?

  • Find the source code here.
  • For more information about the project, or to contact one of the lead engineers at Intel, click here or take a look at this presentation.
  • Learn more about Intel's speech system here.
  • Click here to learn more about the text-to-speech functionality of Hawking's system.
Tags
Sarah Pratt, NeoSpeech
Sarah loves the idea of open source technology and is excited to see the advancements that are made due to making technology open source. Sarah works for NeoSpeech, a leading Text-to-Speech company, and manages their blog Text2Speech, which occasionally posts about open source speech technology.

5 Comments

Thanks for posting this.
My father passed away recently and was unable to communicate for the last week of his life, although he was obviously lucid, and I could see in his eyes the frustration and horror he felt because of his inability to communicate. Having experienced this, I realized that this intolerable situation is much too common.

Since then I've been knocking about trying to find (or make) an economically viable solution to this terrible condition as there seems to be a decided lack of progress in this regard. While I can find plenty of bits and pieces with promise, there doesn't seem to be an affordable hardware/software solution that weds the various text-to-speech technologies to an affordable control interface.

Anyway, it's interesting to see what approach the Intel engineers took, but I was heading more in the direction of eye tracking and a simplified menu of premade words and phrases that could be organized/navigated contextually.

So thank you Sarah, and thank you Intel.

Hi Robert,

Thank you for sharing your story; I am sorry to hear you and your father went through this. I cannot imagine how hard that must have been. While I am not mute myself, I often see the effects that this has on people who are mute and their close friends and family. I recently talked to a young lady who used text-to-speech to say her wedding vows. It was such a lovely story and shows how speech technology really can change lives.

I am very excited to see how this technology will progress now that it is open source - hopefully we will soon have a solution that everyone can afford. Good luck with your project, I'm so glad to hear someone is working on this already!

In reply to by Robert Lambert (not verified)

Thanks.

Me too.
;^)

In reply to by sarahneo

> I was heading more in the direction of eye tracking and a simplified menu of premade words and phrases that could be organized/navigated contextually.

Robert, I think something I saw on Reddit this week is going to be right up your alley.

https://www.reddit.com/r/software/comments/3kdghp/eye_tracking_software…

TL;DR: OptiKey is open source software for Windows, designed to be used with commodity eye-tracking hardware which costs about $100. A whole bunch of Redditors immediately downloaded it and tried it out and said it worked quite well.

In reply to by Robert Lambert (not verified)

I've worked in this field of assistive technology for 15 years now and know that Stephen Hawking used a software title called EZ Keys that wasn't developed by Intel, but a company called Words+. The software was promoted by him as well and retailed at between £900 and £1,400 for the speech enabled version.

Anyway, EZ Keys became unavailable a few years ago, and it was probably this that drove Stephen to seek Intel's help, as he also used their hardware to run the EZ Keys software. The new open source software is pretty much an exact copy of EZ Keys; however, much of the non-speech side is disabled.

There are other software titles available that do a much better and more comprehensive job than EZ Keys or its new incarnation. Yes, they cost around £360, but they are superior, and even though I believe having a voice is a human right, charging this money enables companies to put a lot of ongoing development into their products and thus provide a quality product for a section of society that sorely needs it.

I would also like to add that if people were to provide open source, high quality communication products, then that would be ideal, but to date, they haven't.

Creative Commons LicenseThis work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.