Speech recognition and speech understanding technologies are powerful tools for everyone. But for many people with hearing, the implications of speech technology for the deaf community are not immediately obvious. In recognition of Deaf History Month, we wanted to take stock of the ways that speech technology is being used to serve the deaf and hard-of-hearing (HOH) community, as well as the places where more work is needed.

Who Are Deaf People?

According to the World Health Organization, over 5% of the world's population has disabling hearing loss, which the WHO defines as "hearing loss greater than 35 decibels (dB) in the better hearing ear". That's roughly 432 million adults and 34 million children. Within the United States alone, the National Institutes of Health estimates that about 28 million adults between the ages of 20 and 69 are living with some degree of hearing loss, or about 14% of the population in that age group. Deaf and HOH people communicate in a variety of ways, depending on the severity of their hearing loss, the age at which it began, and the educational opportunities available to them. Some people with hearing loss use spoken language, with or without the help of hearing aids. Written communication is a useful tool for connecting across hearing-related language barriers.

Additionally, some use lip reading to improve their understanding of spoken language. But the means of communication most familiar to people with hearing is signed language, including American Sign Language, or ASL, in the US. An important linguistic note is in order before we go any further: signed languages are full-fledged languages, not just gestures or pantomime. They have complete grammars and rules for use, and they communicate all of the same ideas, with the same depth and clarity, as spoken languages. Something that many hearing people don't realize is that there is quite a bit of diversity across sign languages around the world. There is no universal sign language. ASL is prevalent in the United States and Canada, as well as in about 20 other countries around the world.

ASL itself is a descendant of French Sign Language (FSL), which developed in the mid-1700s and laid the groundwork for many modern sign languages used throughout Europe and the Americas. The sign languages in the French family tree may not be mutually intelligible, but they resemble each other much more closely than they resemble other sign languages. For example, British Sign Language (BSL) developed independently of French Sign Language and is markedly different from it. BSL and its relatives are used in many former British colonies aside from the US and Canada.

As a result, a deaf person from the US and a deaf person from the UK could communicate in written English but would not be able to communicate in their respective sign languages, whereas an ASL signer might manage some signed communication with a French Sign Language signer, despite the two not sharing a written language. If you want to dive in and learn more about signed languages around the world, you can check out a map of global sign language families here.

Pandemic Challenges for the Deaf and Hard-of-Hearing Communities

The pandemic created additional challenges for people who are deaf or hard of hearing. The masking requirements that began in early 2020 created a sudden, disruptive problem for people who relied on lip reading to communicate. The need to control the spread of an airborne disease while maintaining a line of sight to the speaker's lips led to the creation of see-through masks, but their adoption was limited. Here, automatic speech recognition can be part of the solution. We were excited to see a recent project from Zack Freedman, who created a hoodie with an embedded screen that displayed everything he said, transcribed by Deepgram.

Kevin Lewis, a Senior Developer Advocate at Deepgram, has developed a similar piece of wearable tech that uses Deepgram's speech recognition API to create real-time transcripts of spoken language in any of Deepgram's supported languages. These projects are useful not only for people with hearing loss, but for anyone who has ever struggled to understand speech through a mask or to follow a conversation partner in a loud space.
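If you're curious about the moving parts behind projects like these, here is a minimal sketch of live transcription against Deepgram's streaming endpoint. It is illustrative rather than production-ready: the raw-audio file name, chunk size, and pacing are stand-in assumptions, while the WebSocket URL, query parameters, and CloseStream message follow Deepgram's public documentation.

```python
# A minimal sketch of real-time transcription with Deepgram's streaming API.
# Assumes a Deepgram API key in the DEEPGRAM_API_KEY environment variable,
# the `websockets` package, and a 16 kHz, 16-bit mono PCM file ("audio.raw")
# standing in for a live microphone feed.
import asyncio
import json
import os

import websockets

DEEPGRAM_URL = (
    "wss://api.deepgram.com/v1/listen"
    "?encoding=linear16&sample_rate=16000&channels=1&punctuate=true"
)

async def transcribe_live():
    headers = {"Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}"}
    # Note: `extra_headers` is the keyword in older `websockets` releases;
    # newer versions (14+) call it `additional_headers`.
    async with websockets.connect(DEEPGRAM_URL, extra_headers=headers) as ws:

        async def send_audio():
            # Send small chunks, paced roughly like a live microphone.
            with open("audio.raw", "rb") as audio:
                while chunk := audio.read(4096):
                    await ws.send(chunk)
                    await asyncio.sleep(0.1)
            # Tell Deepgram the stream is finished.
            await ws.send(json.dumps({"type": "CloseStream"}))

        async def print_transcripts():
            # Deepgram closes the connection after the stream ends,
            # which also ends this loop.
            async for message in ws:
                result = json.loads(message)
                if result.get("is_final"):
                    alternatives = result["channel"]["alternatives"]
                    if alternatives[0]["transcript"]:
                        print(alternatives[0]["transcript"])

        await asyncio.gather(send_audio(), print_transcripts())

asyncio.run(transcribe_live())
```

A wearable like the ones above would swap the file read for a microphone stream and render the printed lines to a display, but the transcription loop would look much the same.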

Devices for in-person transcription are not the only way that the deaf and HOH communities can benefit from speech recognition. As the world has shifted toward virtual business, these communities benefit from automated captioning on meeting platforms. Real-time automatic transcription is a major focus for Deepgram: we offer highly accurate real-time transcription at a low cost per audio hour, and our product is easy to integrate with meeting platforms such as Zoom.

The Future of Voice Technology for the Deaf

At Deepgram, we're always interested to see projects that attempt to convert sign language into written text. These projects have strong parallels to what we do, which is to use deep neural networks to convert audio of spoken language into written text. In the case of signed languages, though, the inputs are visual patterns (as with an augmented reality app) or movement data (as with a wearable sensor system). Many such projects have failed to account for the full range of expression that goes into sign languages, and American Sign Language in particular. Hearing people tend to over-focus on the more obvious movements of the hands and arms while missing crucial grammatical information conveyed by facial expressions, such as raised eyebrows that mark the intended topic of a statement while the hand sign is made. ASL also has a notably different sentence structure than spoken English.
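To make that pipeline difference concrete, here is a toy sketch, in PyTorch, of the kind of model a landmark-based sign recognition project might use. Every number here, from the landmark counts to the hidden size to the sign vocabulary, is a made-up placeholder, and real systems are far more sophisticated; the point is simply that the input must include facial landmarks alongside the hands to capture that grammatical information.

```python
# A toy sketch of a landmark-based sign classifier; all sizes are placeholders.
# The key point: the input includes FACE landmarks, not just hands, because
# facial expressions carry grammatical information in signed languages.
import torch
import torch.nn as nn

NUM_HAND_POINTS = 21 * 2   # e.g. 21 landmarks per hand, two hands
NUM_FACE_POINTS = 70       # hypothetical face-landmark count
FEATURES_PER_FRAME = (NUM_HAND_POINTS + NUM_FACE_POINTS) * 2  # (x, y) pairs
NUM_SIGNS = 100            # placeholder sign vocabulary

class SignClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(
            input_size=FEATURES_PER_FRAME,
            hidden_size=128,
            num_layers=2,
            batch_first=True,
        )
        self.head = nn.Linear(128, NUM_SIGNS)

    def forward(self, frames):  # frames: (batch, time, FEATURES_PER_FRAME)
        _, hidden = self.encoder(frames)
        return self.head(hidden[-1])  # logits over the sign vocabulary

# Smoke test with random "video": 8 clips of 60 frames each.
model = SignClassifier()
logits = model(torch.randn(8, 60, FEATURES_PER_FRAME))
print(logits.shape)  # torch.Size([8, 100])
```

Even this toy framing glosses over the harder problem discussed next: classifying isolated signs is not the same as translating full ASL sentences.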

In fact, linguists do not consider ASL "a way of speaking English" at all. ASL is an independent language with its own grammar and vocabulary, which means that converting ASL to written English requires translation rather than transcription. The task is made even more challenging by the fact that there is no agreed-upon way of writing ASL. Various writing systems have been proposed, but none has caught on with the deaf community, and they are rarely used outside of small academic circles.

Current Challenges for the Deaf Community

Despite efforts to develop technology to assist the deaf and HOH community, challenges remain. Many of them stem from language barriers between deaf people and hearing people. These barriers make it more difficult for deaf and HOH people to access education and employment, which in turn can limit access to health insurance and medical care, and all of these factors contribute to higher rates of poverty. An estimated 20% of working-age deaf and HOH adults in the US live in poverty, almost double the rate of adults with hearing.

Social challenges exist for the deaf and HOH community as well. Without a shared means of communication with their peers, and in some cases even within their family, people who are deaf or hard of hearing can find themselves socially isolated. For anyone interested in learning more about the issues that impact deaf and HOH people, there are many organizations working to support and empower the community. A useful list of organizations can be found here.

Wrapping Up

If you're working on a project to transcribe speech for the deaf and HOH community and would like to learn more about how Deepgram can help, feel free to reach out, or sign up for a free API key to give the system a try on your own.
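If you want a feel for the API before building anything, a few lines are enough to transcribe a prerecorded file. This sketch uses Deepgram's documented /v1/listen endpoint; the audio URL is a placeholder, and the key is read from an environment variable of our choosing.

```python
# A minimal sketch of transcribing prerecorded audio with Deepgram's REST API.
# Assumes a Deepgram API key in DEEPGRAM_API_KEY; the audio URL is a placeholder.
import os

import requests

response = requests.post(
    "https://api.deepgram.com/v1/listen?punctuate=true",
    headers={"Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}"},
    json={"url": "https://example.com/some-audio.wav"},
)
response.raise_for_status()
result = response.json()
print(result["results"]["channels"][0]["alternatives"][0]["transcript"])
```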

If you have any feedback about this post, or anything else around Deepgram, we'd love to hear from you. Please let us know in our GitHub Discussions.
