Uniqum
The first thing is that deaf people want to contribute to society, and they need to learn not only modern professions but also the arts and sciences. We also understand that people with hearing disabilities live like tourists in their own country, and we should help them feel more at home.
We understand that digital solutions alone are not enough to solve the problems of deaf people, so we now offer open sign language courses to everyone who wants to learn an additional language, and we are also opening offline courses with the Youth Affairs Agency. We have developed a new concept for our project and are seeking a scholarship to realize it around the world.
We developed a desktop version of our translator based on the Uzbek, Russian, and British sign alphabets. This program was accepted for testing in two specialized schools in the Jizzakh region of Uzbekistan. We then developed a mobile application to make conversation and social life more accessible. To understand the problem more accurately, we also worked as mentors at an inclusive offline camp and gathered feedback while sharing our knowledge.
Our solution involves translating words from YouTube subtitles into American Sign Language and generating motion GIFs for each word, bringing words to life through emotion. To achieve high performance and scale to larger audiences, we will leverage Google Cloud services. Specifically, we will use its servers to train our model, as our own computational resources may not be sufficient for larger user volumes and larger-scale model training.
Our solution is not limited to American Sign Language, and we plan to expand to multiple international languages in the future. By using Google Cloud Services, we are able to provide a reliable and scalable solution that can accommodate future growth and development.
Our solution will serve deaf people who want to learn online. It will impact their lives in the following ways:
* Increased access to education: Deaf people will have greater access to educational opportunities, including online courses, tutorials, and lectures. This will help them to improve their skills and knowledge, and to advance their careers.
* Improved academic performance: Deaf students who use our solution may experience improved academic performance, because they will be able to better understand the material being taught and to participate more fully in class discussions.
* Reduced stress and anxiety: Deaf students who use our solution may experience reduced stress and anxiety, because they will not have to worry about missing important information due to their hearing impairment.
* Increased social interaction: Deaf students who use our solution may experience increased social interaction, because they will be able to communicate more easily with their classmates and teachers.
Overall, our solution has the potential to make a significant positive impact on the lives of deaf people. It will help them access education, improve their academic performance, reduce stress and anxiety, and increase social interaction.
We are well-positioned to deliver this solution because we have a team of experienced professionals with a deep understanding of the challenges faced by deaf people. Our team includes:
* A deaf person who has worked in the field of education for 10 years.
* A sign language interpreter with 5 years of experience.
* A software engineer with 10 years of experience in developing accessible technology.
We are committed to working with the deaf community to ensure that our solution meets their needs. We have already begun by conducting interviews with deaf people to learn about their experiences with online learning. We are also working with deaf organizations to get their feedback on our design.
We believe that our solution has the potential to make a real difference in the lives of deaf people. We are excited to work with the deaf community to bring this solution to life.
Here are some specific examples of how we are engaging with the deaf community:
* We have conducted interviews with deaf people to learn about their experiences with online learning.
* We are working with deaf organizations to get their feedback on our design.
* We are hosting a series of workshops for deaf people to learn about our solution and provide feedback.
* We are creating a community forum where deaf people can ask questions and share their experiences.
We believe that it is important to involve the deaf community in the design and development of our solution. This will ensure that our solution meets their needs and that it is accessible to all deaf people.
- Build core social-emotional learning skills, including self-awareness, self-management, social awareness, relationship skills, and responsible decision-making.
- Uzbekistan
- Prototype: A venture or organization building and testing its product, service, or business model, but which is not yet serving anyone
We have a real working prototype.
To solve the problem of sign language recognition, we designed an architecture that combines multiple components:
Data Mining: Since there was no suitable database available for our problem, we collected data from online sources. We found YouTube videos that demonstrated American Sign Language actions for each word.
Data Preparation: This component involved extracting each pose as an image for each second of action from the collected videos. We cleaned and resized the images to make the data suitable for model training.
Model Training: We used the Transformer library and its methods for training our machine learning model. Specifically, we used the following components:
ImageDataGenerator: This component generated batches of augmented image data for training and validation.
Conv2D: This component extracted relevant features from the input image.
MaxPooling2D: This component reduced the spatial dimensions of the output of the previous convolutional layer.
Flatten: This component flattened the output of the previous layer into a 1-dimensional array.
Dense: This component applied a set of weights to the flattened output of the previous layer to produce the output.
Dropout: This component randomly dropped out a set of neurons in the previous layer during training to prevent overfitting.
Softmax: This component calculated the probability distribution of the output class labels.
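The training components listed above map directly onto the Keras API, so the model can be sketched as follows. This is a minimal illustration, not our production configuration: the image size, class count, layer widths, and augmentation settings are placeholder assumptions.

```python
# Minimal sketch of the layer stack described above, using tf.keras.
# IMG_SIZE and NUM_CLASSES are illustrative placeholders, not real values.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)  # hypothetical frame size after resizing
NUM_CLASSES = 30     # hypothetical number of sign-word classes

# ImageDataGenerator: batches of augmented image data with a validation split
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, validation_split=0.2)

model = models.Sequential([
    layers.Input(shape=(*IMG_SIZE, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),    # feature extraction
    layers.MaxPooling2D((2, 2)),                     # spatial downsampling
    layers.Flatten(),                                # 1-D feature vector
    layers.Dense(128, activation="relu"),            # weighted combination
    layers.Dropout(0.5),                             # prevent overfitting
    layers.Dense(NUM_CLASSES, activation="softmax")  # class probabilities
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training then feeds batches from `datagen` into `model.fit`; the final softmax layer produces the probability distribution over sign-word classes.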
Subtitle Extraction: To extract subtitles from YouTube videos, we used JavaScript and the YouTube Data API. This component was crucial for obtaining the text data necessary for generating GIFs of the recognized sign language actions.
GIF Generation: Finally, we used Python's imageio library to generate GIFs for sentences or each word. This component completed our architecture and allowed us to visualize and share the recognized sign language actions.
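The per-word sequencing behind the GIF step can be illustrated with a small sketch: each word of a subtitle sentence is looked up in a frame index and its frames are concatenated in sentence order. The frame index and file names below are made up for this example; the real pipeline then writes the collected frames out with imageio (e.g. `imageio.mimsave`).

```python
# Hypothetical sketch: map each word of a subtitle sentence to its stored
# sign-language frames, in order, ready to be written out as one GIF.
def sentence_to_frame_list(sentence, frame_index):
    """Return the ordered frame paths for every known word in a sentence."""
    frames = []
    for word in sentence.lower().split():
        frames.extend(frame_index.get(word, []))  # unknown words are skipped
    return frames

# Illustrative frame index (file names are invented for this example).
frame_index = {
    "hello": ["hello_0.png", "hello_1.png"],
    "world": ["world_0.png"],
}

print(sentence_to_frame_list("Hello world", frame_index))
# → ['hello_0.png', 'hello_1.png', 'world_0.png']
```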
Overall, our architecture combined data mining, data preparation, model training, subtitle extraction, and GIF generation to solve the problem of sign language recognition. Each component played a critical role in our approach, and the result was a comprehensive solution that could recognize and display sign language actions in an accessible format.
We are applying to Solve because we need your help to scale our solution and make it available to deaf people around the world. We believe that your resources and expertise will be invaluable to us in this endeavor.
Specifically, we are looking for help with the following:
* Finance: We need funding to develop and deploy our solution. We believe that your investment will be a wise one, as our solution has the potential to make a significant impact on the lives of millions of deaf people.
* Promotion: We need help to promote our solution to deaf people around the world. We believe that your reach and expertise will be essential in getting the word out about our solution.
* Legal: We need help to navigate the legal challenges of launching a global product. We believe that your experience will be invaluable in helping us to ensure that our solution is compliant with all applicable laws and regulations.
We believe that our solution has the potential to make a real difference in the lives of deaf people around the world. We are excited to work with you to bring this solution to life.
Thank you for your time and consideration.
- Business Model (e.g. product-market fit, strategy & development)
- Financial (e.g. accounting practices, pitching to investors)
- Legal or Regulatory Matters
- Public Relations (e.g. branding/marketing strategy, social and global media)
Several things make our solution for deaf people innovative. Here are a few examples:
- It is, to our knowledge, the first solution that uses artificial intelligence to translate voice into sign language in real time. This means that deaf people can participate in online learning and other activities without having to rely on human interpreters.
- It is affordable and accessible to deaf people around the world, because it is a cloud-based solution that can be accessed on any device with an internet connection.
- It is easy to use. Deaf users can simply point their camera at the person speaking, and the solution automatically translates the speech into sign language.
Overall, our solution has the potential to change the way deaf people access education, employment, and other opportunities.
- 1. No Poverty
- 4. Quality Education
- 8. Decent Work and Economic Growth
- 10. Reduced Inequalities
- 11. Sustainable Cities and Communities
- 16. Peace, Justice, and Strong Institutions
We will measure our progress toward our impact goals with the following metrics:
- Number of deaf people who use our solution: a measure of our overall reach.
- Number of online courses available in sign language: a measure of our solution's accessibility.
- Academic performance of deaf students who use our solution: a measure of impact on learning.
- Social interaction of deaf students who use our solution: a measure of impact on students' social lives.
- Satisfaction of deaf people who use our solution: a measure of our solution's quality.
Measuring against a variety of metrics gives us a holistic view of our impact, and tracking them over time shows how our solution is improving the lives of deaf people.
Our approach to impact measurement:
- Set clear goals: we define what we want to achieve so we can develop metrics to track our progress.
- Use a variety of metrics: we do not rely on a single metric; we combine several for a holistic view of our impact.
- Track progress over time: we monitor our metrics over time to see how our solution is improving lives.
- Share results: we share our results with the deaf community so they can see the impact of our solution. This raises awareness and encourages others to use it.
Our theory of change is that by providing deaf people with access to online learning, we can help them to improve their skills and knowledge, advance their careers, and live more fulfilling lives.
Here is a more detailed explanation of our theory of change:
* Increased access to education: Deaf people will have greater access to educational opportunities, including online courses, tutorials, and lectures. This will help them to improve their skills and knowledge, and to advance their careers.
* Improved academic performance: Deaf students who use our solution may experience improved academic performance. This is because they will be able to better understand the material being taught, and to participate more fully in class discussions.
* Reduced stress and anxiety: Deaf students who use our solution may experience reduced stress and anxiety. This is because they will not have to worry about missing important information due to their hearing impairment.
* Increased social interaction: Deaf students who use our solution may experience increased social interaction. This is because they will be able to communicate more easily with their classmates and teachers.
* Improved employment prospects: Deaf people who have access to online learning will have better employment prospects. This is because they will be able to acquire the skills and knowledge that are in demand in the workforce.
* Increased earning potential: Deaf people who have access to online learning will have increased earning potential. This is because they will be able to get better jobs and earn higher salaries.
* Improved quality of life: Deaf people who have access to online learning will have an improved quality of life. This is because they will be able to achieve their educational and career goals, and to live more independent and fulfilling lives.
We believe that our solution has the potential to make a real difference in the lives of deaf people. We are excited to work with the deaf community to bring this solution to life.
To solve the problem of sign language recognition, we designed an architecture that combines multiple components:
Data Mining: Since there was no suitable database available for our problem, we collected data from online sources. We found YouTube videos that demonstrated American Sign Language actions for each word.
Data Preparation: This component involved extracting each pose as an image for each second of action from the collected videos. We cleaned and resized the images to make the data suitable for model training.
Model Training: We used the Transformer library and its methods for training our machine learning model. Specifically, we used the following components:
ImageDataGenerator: This component generated batches of augmented image data for training and validation.
Conv2D: This component extracted relevant features from the input image.
MaxPooling2D: This component reduced the spatial dimensions of the output of the previous convolutional layer.
Flatten: This component flattened the output of the previous layer into a 1-dimensional array.
Dense: This component applied a set of weights to the flattened output of the previous layer to produce the output.
Dropout: This component randomly dropped out a set of neurons in the previous layer during training to prevent overfitting.
Softmax: This component calculated the probability distribution of the output class labels.
Subtitle Extraction: To extract subtitles from YouTube videos, we used JavaScript and the YouTube Data API. This component was crucial for obtaining the text data necessary for generating GIFs of the recognized sign language actions.
GIF Generation: Finally, we used Python's imageio library to generate GIFs for sentences or each word. This component completed our architecture and allowed us to visualize and share the recognized sign language actions.
Overall, our architecture combined data mining, data preparation, model training, subtitle extraction, and GIF generation to solve the problem of sign language recognition. Each component played a critical role in our approach, and the result was a comprehensive solution that could recognize and display sign language actions in an accessible format.
- Artificial Intelligence / Machine Learning
- Big Data
- Uzbekistan
- Uzbekistan
- Not registered as any organization
- Freemium model: This model offers a basic version of our service for free and charges users for premium features. It lets us attract a large number of users and then upsell them on premium features.
- Advertising model: This model allows businesses to advertise on our service in exchange for payment. It works well once the service has a large number of users.
- Individual consumers or stakeholders (B2C)
* Reach 100,000 users within the next year.
* Generate $250K in revenue within the next two years.
* Become the leading provider of online learning solutions for deaf people.
Budget: this will help us track our expenses and make sure we are not spending more money than we are bringing in. Our main cost categories are:
* Development costs: the cost of developing our application.
* Marketing costs: the cost of marketing our application.
* Operating costs: the cost of running the organization, such as rent, salaries, and utilities.
We are applying for funding to help us become financially sustainable. We have developed an innovative application that provides deaf people with access to online learning. Our application uses artificial intelligence to translate voice into sign language in real time, making it possible for deaf people to participate in online courses, tutorials, and lectures without having to rely on human interpreters.
We believe that our application has the potential to revolutionize the way deaf people access education, employment, and other opportunities. We are requesting funding to help us develop our application, market it to deaf people around the world, and make it financially sustainable.
We believe that our application is a valuable investment. Deaf people are a large and growing population, and they have a significant economic potential. By providing deaf people with access to online learning, we can help them to improve their skills and knowledge, advance their careers, and live more fulfilling lives.
We are confident that we can make our application a success. We have a strong team of experienced professionals with a proven track record of success in the technology industry. We are also committed to providing deaf people with a high-quality product that meets their needs.
We are grateful for your consideration of our funding request. We believe that our application has the potential to make a real difference in the lives of deaf people, and we are excited to have the opportunity to bring this solution to life.
Thank you for your time.