AI-based solutions for Deaf people
There are more than 400 million deaf and hard-of-hearing people worldwide, many of whom communicate primarily in sign language. They seek equitable access to all aspects of civic life, including critical services that are frequently unavailable on demand. Unemployment and underemployment continue to affect them disproportionately, and many in the Deaf community are relegated to factory or manual-labor roles. In the United States in 2018, only about 8% of deaf or hard-of-hearing adults were actively seeking work, and fewer found it; many ended up in part-time positions, with only 39.5% employed full-time, compared to 57.5% of their hearing peers.
The same disparities exist in education. Despite the Americans with Disabilities Act (ADA), mainstream schools and colleges are rarely structured in a way that allows Deaf and hard-of-hearing students to thrive, and only a few educational institutions dedicated to deaf and hard-of-hearing students exist. Only 18% of deaf or hard-of-hearing (DHOH) people hold a bachelor's degree or above, compared to 33% of working-age hearing individuals.
These employment and educational obstacles have a ripple effect. Deaf and hard-of-hearing people are already at a higher risk of depression and anxiety, and research by psychologists and sociologists links insufficient work to a range of mental health problems, as well as chronic diseases and substance abuse. Poverty exerts its own influence on health: studies show that low-income people with little formal education are routinely less healthy than their more educated, wealthier peers, particularly in minority groups. Socioeconomic status and education levels have been connected to a variety of health outcomes, ranging from low birth weight to diabetes. Because many of these issues are interlinked, no single legislative remedy will resolve them.
While many deaf and hard-of-hearing people receive financial assistance from programs such as Social Security disability benefits, more can be done to promote equitable access to work, society, and education.
Facts and figures about ASL
- There is no universal sign language in the world.
- The linguistic structure of English differs from that of American Sign Language. As a result, English is not the first language of d/Deaf and hard-of-hearing people, and closed captioning and subtitles, which are in English, do not fully serve Deaf people.
- Sign language relies on facial expressions and head movements to convey grammar and emotion. This makes it harder for technology to capture emotion from text alone.
- Developing large language models and NLP for ASL, and building AI-based interpretation on top of them, is highly complex. Google, Microsoft, and OpenAI (ChatGPT) do not offer such services because ASL alone contains many variables, localized signs, and pop-culture signs, and these companies do not have datasets covering all of them.
For people who communicate only in sign language and seek equal access to all aspects of civic life, including essential services, we train machines with artificial intelligence to produce sign language. The goal is to improve communication with people who do not know sign language and to make both the real world (society) and the virtual world (the digital world) more inclusive. We help service providers, organizations, and businesses become more accessible to Deaf people and serve them in sign language on demand.
Our mission is to give deaf people better experiences in public life and the digital world through AI-based solutions, with the ultimate vision of developing digital humans as sign language interpreters for real-time interpretation of voice into sign languages.
Our vision is to transition from human-based to AI-based sign language services by teaching machines to translate voice into sign language. Sign language would then be affordable, available, and accessible to service users (deaf people) and service providers (any organization, business, or company that needs to be more accessible). Picture this: you watch a video on YouTube, and alongside the subtitle option there is another one. Clicking it makes a virtual avatar pop up in the corner of the screen and translate the audio into sign language. We would diversify the options Deaf people have for accessing the digital world. Online learning platforms like Coursera, webinars, online meetings, and the workplace would also become better places for deaf people, as they could communicate with their colleagues, especially when working remotely.
This section describes Deaf AI's unfair advantages, which not only position it against competitors but also carry it into markets neglected by others, even though deaf people have always relied on those markets.
When it comes to AI in ASL, the obvious first approach is to build a huge dataset for a universal application that can translate any sentence, which requires an enormous amount of time and money. Sign language, however, is complex: it involves far more than the movement of hands and fingers. Facial expressions are an integral part of ASL and convey emotion to Deaf people. Moreover, each deaf community has localized signs of its own. To understand Deaf people as customers, we must understand their culture, which is inseparable from their language.
Consequently, we at Deaf AI selected niche markets to align the limitations of the technology with market demand, mitigate risk, respect Deaf culture, and cover a manageable set of localized signs and facial expressions. The first market is airport terminals, followed by sports video games and live sports events.
Deaf AI is committed to improving the lives of Deaf individuals and the d/Deaf community by providing a solution that can enhance communication access in public spaces. Our target population is primarily Deaf and hard-of-hearing individuals who rely on American Sign Language (ASL) for communication. This group has traditionally been underserved due to a lack of accessibility options in public spaces, which can lead to social isolation, reduced access to information, and difficulties with daily activities such as navigating public transportation or attending medical appointments.
Our solution addresses these needs by providing a virtual avatar that can translate public announcements and other important information into real-time ASL. This technology aims to improve communication accessibility in public spaces, reducing the barriers that Deaf and hard-of-hearing individuals face. By making public spaces more accessible, we aim to increase social inclusion, provide equal access to information, and enable individuals to participate more fully in daily activities.
In addition to directly serving Deaf and hard-of-hearing individuals, our solution can also benefit the wider community by promoting accessibility and inclusion. Our technology can help create a more inclusive environment for all individuals, regardless of their communication needs, especially in the future, when Deaf AI covers more sign languages beyond ASL. By increasing accessibility in public spaces, we aim to promote greater understanding and respect for Deaf culture and for the importance of communication and equal access to essential information.
Overall, our solution has the potential to significantly impact the lives of Deaf and hard-of-hearing individuals by providing a tool that can enhance communication access and improve overall quality of life. By addressing the needs of this underserved population, we aim to promote greater accessibility and inclusion for all individuals in public spaces.
Our small team includes diverse people with disabilities from underrepresented communities and people of color with complementary skills: the minimum viable team of an entrepreneur, a seasoned manager, an AI scientist, a deaf AI/ML engineer, an ASL interpreter, and AI interns. However, Deaf AI went beyond forming a solid team, because developing such sophisticated technology requires strategic partnerships with groups rather than individuals. The technology combines ASL knowledge and d/Deaf culture, NLP, computer vision, machine learning/deep learning, computer graphics, and animation and visual production.
This is the reason our product is "deep tech."
Therefore, we have partnered with the Screen Industries Research and Training Centre (SIRT), which supports Ontario's film and television, startup, and interactive media clusters. SIRT is a government-funded Technology Access Center with dedicated staff, faculty, and students who conduct world-class applied research, training, and other industry support activities that enable the adoption of new technology.
SIRT supports Deaf AI in product development. Transport Canada is another partner, committed to making airports more Deaf-friendly and improving accessibility for Deaf passengers.
We also work with a few ASL interpreting consultation firms that support us with quality control and product feedback.
We are actively searching for connections with other NGOs and philanthropic foundations, such as the Ford Foundation, airlines, and local governments.
- Help learners acquire key civic skills and knowledge, including how to assess credibility of information, engage across differences, understand one’s own agency, and engage with issues of power, privilege, and injustice.
- Canada
- Pilot: An organization testing a product, service, or business model with a small number of users
Companies with social impact need a network of like-minded people who believe in making the world a better place for everyone.
We are looking for a network of diverse stakeholders, such as local governments, impact investors, global leaders, and foundations. Raising money becomes the least of our concerns with a strong network of like-minded people who share our vision.
- Business Model (e.g. product-market fit, strategy & development)
- Financial (e.g. accounting practices, pitching to investors)
- Human Capital (e.g. sourcing talent, board development)
- Legal or Regulatory Matters
Deaf AI's solution is innovative in the way it approaches the communication gap between deaf individuals and hearing individuals, particularly in critical settings such as airports.
The lack of access to essential information can have tragic consequences: https://viewfromthewing.com/71-year-old-deaf-passenger-has-arm-broken-by-police-during-austin-airport-layover/
Our solution utilizes a combination of natural language processing (NLP), machine learning (ML), computer vision, and computer graphics to provide real-time translation of spoken announcements into American Sign Language (ASL).
Current solutions in the market often rely on human interpreters or pre-recorded videos for translation, which can be time-consuming and limit the availability of these services. In contrast, Deaf AI's solution provides on-demand translation services that are more efficient and accessible.
By providing real-time translation services in critical settings, our solution enables deaf individuals to be more independent and self-reliant, which can lead to increased accessibility and inclusion in society. Our solution has the potential to catalyze broader positive impacts in the field of accessibility and inclusion. By providing on-demand and efficient translation services, we can promote greater accessibility for deaf individuals in a variety of settings, from public transportation to educational institutions.
Moreover, our solution has the potential to change the market by setting a new standard for accessibility and inclusivity. As more organizations and businesses begin to recognize the importance of providing accessible services, the demand for technologies like Deaf AI's solution will likely increase.
Furthermore, Deaf AI's solution is unique in its use of hyper-realistic avatars for sign language interpretation. This technology allows for a more personalized and relatable experience for deaf individuals, as the avatar can be customized to resemble the user and convey emotion and tone through facial expressions and body language.
In conclusion, Deaf AI's solution is innovative in its use of NLP, ML, computer vision, and computer graphics to provide real-time translation of spoken announcements into ASL. Our solution addresses the communication gap between deaf and hearing individuals, promotes greater accessibility and inclusion, and has the potential to catalyze broader positive impacts in the field.
Our next year's impact goal is to provide faster and smoother accessibility for Deaf and hard-of-hearing individuals in airport terminals. To achieve this, we plan to partner with airports, local governments, and airlines to implement our technology and make it available to their passengers. Our approach will involve working with these stakeholders to understand their needs and adapting our solution to fit their specific requirements. We will also work to promote awareness of our solution among the Deaf and hard-of-hearing community, building relationships with organizations that advocate for their rights and needs.
Over the next five years, our impact goals will be to expand our services globally, starting with airport terminals and gradually stepping into other public buildings. We aim to have our technology installed in major airports across the world, enabling Deaf and hard-of-hearing individuals to access the same services as everyone else, without any delays or inconvenience. To achieve this, we plan to build strategic partnerships with airport authorities, airlines, and local governments, and work with them to ensure that our solution meets their unique needs.
As we expand our services to other public buildings, we will continue to work closely with local authorities and communities to ensure that our technology is accessible and user-friendly. Our goal is to create a more inclusive world, where Deaf and hard-of-hearing individuals can access the same services and opportunities as everyone else, without any barriers. We will achieve this by continuing to innovate and improve our technology, and by building strong partnerships with organizations that share our vision.
- 3. Good Health and Well-being
- 8. Decent Work and Economic Growth
- 9. Industry, Innovation, and Infrastructure
- 10. Reduced Inequalities
- 11. Sustainable Cities and Communities
Deaf AI's impact goals are aligned with several SDGs, and we use various indicators to measure our progress towards these goals. Here are a few specific indicators that we use to track our progress:
SDG 3: Good Health and Well-being - Our product aims to improve the mental health and well-being of the Deaf community by providing equal access to information. We measure our progress by monitoring the usage of our technology and gathering feedback from our users on how it has impacted their well-being.
SDG 4: Quality Education - Deaf AI is exploring ways to expand our technology to serve the education sector. We will measure our progress by tracking the number of schools and educational institutions that implement our technology and monitoring the academic performance of Deaf students who use our product.
SDG 8: Decent Work and Economic Growth - We aim to partner with airports, local governments, and airlines to improve accessibility for the Deaf community. We will measure our progress by monitoring the number of partnerships we establish, the percentage of airports that adopt our technology, and the economic impact of our partnerships on the Deaf community.
SDG 9: Industry, Innovation, and Infrastructure - Deaf AI's technology can be used to improve accessibility in other public buildings beyond airports. We will measure our progress by tracking the expansion of our technology to other public buildings and the percentage of these buildings that adopt our technology.
SDG 10: Reduced Inequalities - Our product aims to address the inequalities faced by the Deaf community. We will measure our progress by gathering feedback from our users on how our technology has impacted their sense of belonging and access to information, as well as tracking the number of partnerships we establish to improve accessibility for the Deaf community.
SDG 11: Sustainable Cities and Communities - Deaf AI's product and innovation aim to build resilient infrastructure through accessibility. We will measure our progress by monitoring the adoption of our technology in public buildings, the economic impact of our partnerships, and the impact on the sustainability and inclusivity of communities.
In addition to these specific indicators, we also gather feedback from our users through surveys and focus groups to better understand their needs and how we can improve our product. We also track the number of partnerships and collaborations we establish and the impact of our technology on the Deaf community.
Deaf AI's theory of change is based on the idea that by developing and implementing accessible technology, we can break down the barriers and inequalities faced by the Deaf community. We believe that providing Deaf individuals with equal access to information, education, and employment opportunities will allow them to fully participate in society, contribute to their communities, and reach their full potential.
Our activities include developing and refining our Deaf AI platform, partnering with organizations and institutions to implement our technology, and providing training and support to users. Our immediate outputs include increased access to information and communication tools, improved educational opportunities, and increased social interaction opportunities for Deaf individuals. These outputs will lead to longer-term outcomes such as improved health outcomes, increased economic stability, and stronger communities.
We expect our solution to have a positive impact on the Deaf community because it directly addresses the barriers and inequalities faced by Deaf individuals. By providing equal access to information and communication, we can help to break down the social and economic barriers that prevent Deaf individuals from fully participating in society. Additionally, by partnering with organizations and institutions, we can leverage our impact and reach a larger audience.
We will continually evaluate our theory of change through user feedback, third-party research, and data analysis to ensure that we are effectively addressing the needs of the Deaf community and creating a positive impact. We are committed to improving our technology and approach as we learn and grow in our mission to create a more accessible and inclusive world for the Deaf community.
In simple terms, Deaf AI's technical stack combines NLP, ML, computer vision, and computer graphics. The product development roadmap involves several steps. First, the airports provide the team with flight announcement sheets and a list of departure cities. The team then prepares the first class of datasets: the words and sentences that need to be translated into ASL.
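The first dataset class described above can be sketched as a simple template expansion: announcement templates filled with each flight's details. This is an illustrative sketch only; the template wording, field names, and flight data are assumptions, not Deaf AI's actual data.

```python
# Hypothetical sketch: expanding airport announcement templates with
# flight details to build the first class of dataset sentences.

TEMPLATES = [
    "Flight {flight} to {city} is now boarding at gate {gate}.",
    "Final call for flight {flight} to {city} at gate {gate}.",
]

def build_sentences(flights):
    """Fill every template with every scheduled flight's details."""
    sentences = []
    for f in flights:
        for t in TEMPLATES:
            sentences.append(t.format(**f))
    return sentences

flights = [
    {"flight": "AC101", "city": "Toronto", "gate": "B12"},
    {"flight": "WS220", "city": "Vancouver", "gate": "C3"},
]
corpus = build_sentences(flights)  # 2 flights x 2 templates = 4 sentences
```

Because announcements are highly formulaic, a modest template set crossed with the departure-city list already yields good coverage of the sentences an interpreter must sign.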
To prepare the second class of datasets, a sign language interpreter wears a motion-capture suit at Sheridan's Screen Industries Research and Training Centre (SIRT). Advanced cameras record the body motions corresponding to the sentences in the dataset. These recordings capture the interpreter's body motions accurately and are used to train the AI model.
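A minimal sketch of how this second dataset class could be organized, pairing each sentence with its motion-capture clip. The record fields, file names, and joint count are assumptions for illustration, not the actual capture format.

```python
# Illustrative record for the second dataset class: an announcement
# sentence paired with the motion-capture recording of its ASL version.
from dataclasses import dataclass

@dataclass
class MocapSample:
    sentence: str          # English announcement text
    clip_path: str         # motion-capture recording of the signed translation
    frames: int            # number of captured frames
    joints: int = 52       # tracked body/hand joints (assumed rig size)

def validate(sample: MocapSample) -> bool:
    """Basic sanity check before a sample enters the training set."""
    return bool(sample.sentence) and sample.frames > 0

dataset = [
    MocapSample("Flight AC101 is now boarding.", "clips/ac101_boarding.bvh", 240),
]
```

Validating each pair before training keeps empty transcripts or failed captures out of the model's data.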
Training the AI model requires computer vision for motion detection and body-pose detection. The model is trained to high accuracy on the video recordings of the interpreter's body motions. After training, the model is deployed into a hyper-realistic avatar, which requires computer graphics and machine learning expertise.
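The trained model's interface is text in, pose sequence out. As a highly simplified stand-in, the sketch below uses retrieval: it returns the stored pose sequence of the most similar known sentence. The production system would use a learned model; this only illustrates the interface, and the pose values are toy numbers.

```python
# Toy text-to-motion stand-in: retrieve the pose sequence of the
# closest known announcement (NOT the real learned model).
from difflib import SequenceMatcher

# sentence -> pose sequence (each frame: toy joint coordinates)
POSE_BANK = {
    "flight ac101 is now boarding": [[0.0, 0.1], [0.2, 0.3]],
    "final call for flight ws220":  [[0.5, 0.4], [0.6, 0.7]],
}

def translate_to_poses(sentence: str):
    """Return the pose sequence of the most similar known sentence."""
    key = max(
        POSE_BANK,
        key=lambda s: SequenceMatcher(None, s, sentence.lower()).ratio(),
    )
    return POSE_BANK[key]

poses = translate_to_poses("Flight AC101 is now boarding.")
```

The pose sequence the model emits is what the computer-graphics layer then retargets onto the hyper-realistic avatar's skeleton.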
Overall, the technical stack for Deaf AI involves a complex combination of technologies that are necessary to create a seamless experience for Deaf people at public places such as airports. The use of motion capture technology, computer vision, and machine learning helps to create an accurate and realistic representation of ASL that can be easily understood by Deaf individuals.
On our SaaS platform, which pulls flight and gate numbers from the airport's FIDS system, the gate operator selects the appropriate flight announcement and runs the output: a rendered digital-human translation of that announcement. We have made significant progress.
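The runtime flow above can be sketched end to end. Every interface here is hypothetical: `fetch_fids` stands in for the FIDS feed, and `render_avatar` stands in for the digital-human renderer.

```python
# Sketch of the gate-operator flow: FIDS lookup -> announcement text
# -> rendered avatar clip. All interfaces are assumed stand-ins.

def fetch_fids():
    """Stand-in for the FIDS feed (flight number -> gate)."""
    return {"AC101": "B12", "WS220": "C3"}

def render_avatar(text: str) -> str:
    """Stand-in for the digital-human renderer; returns a clip identifier."""
    return f"avatar_clip[{text}]"

def run_announcement(flight: str, template: str) -> str:
    """What happens when the gate operator runs a selected announcement."""
    gate = fetch_fids()[flight]
    text = template.format(flight=flight, gate=gate)
    return render_avatar(text)

clip = run_announcement("AC101", "Flight {flight} now boarding at gate {gate}.")
```

Keeping the FIDS lookup inside the run step means the operator never types flight data by hand, so the signed announcement always matches the live gate assignment.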
All of these steps empower the innovation of Deaf AI to develop equal access to essential information for Deaf individuals.
- A new application of an existing technology
- Artificial Intelligence / Machine Learning
- Software and Mobile Applications
- Canada
- United States
- Canada
- United States
- For-profit, including B-Corp or similar models
Deaf AI is committed to diversity, equity, and inclusivity in all aspects of our work. Our team is composed of individuals from diverse backgrounds and cultures, including minority groups and people of color with disabilities, with varying levels of experience, education, and expertise. We believe that a diverse team helps us better understand and serve our diverse community of Deaf and hard-of-hearing individuals.
We recognize that there is always room for improvement, and we are actively working to increase diversity, equity, and inclusivity within our team and in our work regarding Deaf culture. To achieve this, we have taken the following actions:
- We have implemented blind hiring practices to eliminate bias in the hiring process, with a slight preference for those from d/Deaf communities.
- We actively seek out partnerships and collaborations with organizations that serve underrepresented communities, especially Deaf social bodies.
- We regularly conduct training and education sessions for our team to promote Deaf cultural competency and understanding.
- We work with Deaf consultants to ensure that our products and services are accessible and inclusive for the Deaf community.
- We actively seek out feedback from our users and community members to ensure that we are meeting their needs and addressing any concerns or issues related to diversity, equity, and inclusion.
We are committed to continually improving our practices and creating a more diverse, equitable, and inclusive environment for our team and community.
Deaf AI's business model is centered around providing accessible and inclusive technology solutions to the Deaf and hard-of-hearing community. We offer a suite of products and services designed to improve communication and accessibility for our users. These include our Deaf AI virtual assistant, which uses computer vision and machine learning to translate spoken language into American Sign Language in real time, as well as our ASL animation and graphics services, which help businesses and organizations create more inclusive content.
Our revenue streams come from a combination of direct sales, subscriptions, and partnerships. We offer our Deaf AI virtual assistant as a subscription-based service, with different pricing tiers based on the needs of airports.
All in all, what Deaf AI offers organizations and businesses is not merely what they want; it is what they need in order to comply with regulations. We leverage regulation in our communication with businesses, informing them not only about compliance but also about benefits beyond the social ones, such as accessibility tax credits.
- Organizations (B2B)
In general, we do not charge deaf people for most of our products; we are B2B and charge businesses, because we help service providers and businesses become more accessible to deaf people. Our technology will be delivered to businesses as software as a service (SaaS) under a subscription model.
We are building AI-based deep tech solutions, all of which take the form of software, SaaS, and APIs for service providers, businesses, and organizations such as airport terminals. We provide airports with ASL translations of flight announcements to ensure equal access to information, which not only meets regulatory requirements but also delivers social impact and fulfills airports' responsibilities. In the airport, the avatar appears on screen to translate every flight announcement into ASL, whether or not a Deaf passenger is present, because Deaf people rarely disclose their disability; non-disclosure is part of their culture. Airports pay us through subscriptions for our software. At this time, a few airports are partnering with us for the pilot study.