castme
Video lectures are abundant, and the corresponding mocap recordings are roughly ten times richer, since they capture precise 3D measurements. Despite the availability of such promising data, generating bone transforms from audio is extremely difficult, due in part to the technical challenge of mapping a 1D signal to 3D transform values (translation, rotation, scale), but also because humans are extremely attuned to subtle details in how emotions are expressed; many previous attempts at simulating a talking character have produced uncanny results (for example, by the companies Neon and Soul Machines). In addition to generating realistic results, this work represents the first attempt to solve the audio-to-bone-transform prediction problem by analyzing a large mocap corpus of a single person.
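The 1D-to-3D mapping described above can be pictured as a windowed regression from audio features to nine floats per bone (translation, rotation, scale, each in x/y/z). The sketch below is a minimal stand-in for the trained network, assuming a single linear layer and an illustrative three-bone rig; all names here are ours, not the project's.

```python
import math

BONES = ["jaw", "head", "neck"]  # illustrative subset of a full rig

def predict_bone_transforms(audio_window, weights):
    """Map a 1D window of audio features to 9 floats per bone:
    translation (x, y, z), rotation (x, y, z), scale (x, y, z).
    A single linear layer stands in for the real trained network."""
    transforms = {}
    for b, bone in enumerate(BONES):
        out = []
        for k in range(9):  # 9 transform channels per bone
            row = weights[b][k]
            out.append(sum(w * a for w, a in zip(row, audio_window)))
        transforms[bone] = {
            "translation": out[0:3],
            "rotation": out[3:6],
            "scale": out[6:9],
        }
    return transforms

# Toy usage: a 16-sample feature window and all-zero weights,
# which yield all-zero transform offsets for every bone.
window = [math.sin(0.1 * t) for t in range(16)]  # stand-in audio features
zero_w = [[[0.0] * 16 for _ in range(9)] for _ in BONES]
pred = predict_bone_transforms(window, zero_w)
```

In the real system the linear layer would be replaced by a deep network trained on the single-person mocap corpus, but the input/output shapes stay the same.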
What it does
Cutting-edge technologies such as machine learning (ML) and deep learning (DL) have solved many of society's problems with far better accuracy than an ideal human ever could. We are using this technology to improve how people learn in the education system.
The problem every university student faces is cost: they must pay a large amount of money to keep studying at any college, and they depend on interaction with lecturers and professors to keep improving. We are solving the problem of money.
Our solution is a machine-learning model that maps e-text data to sparse points on a human AR character, replacing professors with AI bots that teach the same material in a far more interactive and intuitive way than professors ever could. Students can even learn on their own with the AR characters.
This project explores the opportunities of AI and deep learning for character animation and control. Over the last two years, it has grown into a modular, stable framework for data-driven character animation, covering data processing, network training, and runtime control, developed in Unity3D / Unreal Engine 4 / TensorFlow / PyTorch. It enables using neural networks for character locomotion, sparse facial point movements, and character-scene interactions with objects and the environment. Further advances on this project will continue to be added to this pipeline.
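The runtime-control part of the pipeline can be sketched as a per-frame loop: concatenate the previous pose with the current control command, run the network, and write the predicted pose back to the rig. This is a minimal stand-in with toy dimensions and a toy "network"; the function names are illustrative, and the real system runs inside Unity3D/Unreal with a trained model.

```python
def step(prev_pose, control, net):
    """One frame of a data-driven animation loop: the network maps
    (previous pose + control command) to the next pose."""
    x = prev_pose + control          # concatenated input vector
    return net(x)

def run(frames, net, pose_dim=4):
    """Roll the network forward over a sequence of control commands."""
    pose = [0.0] * pose_dim
    trajectory = [pose]
    for control in frames:
        pose = step(pose, control, net)
        trajectory.append(pose)
    return trajectory

# Toy "network": echo the pose part, nudged by the control part.
def toy_net(x):
    half = len(x) // 2
    return [p + c for p, c in zip(x[:half], x[half:])]

# Three frames of a constant "move forward" command.
traj = run([[1.0, 0.0, 0.0, 0.0]] * 3, toy_net)
```

The same loop structure holds whether the pose vector holds locomotion joints, facial sparse points, or full bone transforms.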
Can customers afford it?
- Individual
- Student
- Teacher
- Institute
- Organization
- Increase the number of girls and young women participating in formal and informal learning and training
Animating characters can be an easy or a difficult task, and interacting with objects is one of the difficult ones. In this project, we present the Neural State Machine, a data-driven deep learning framework for character-scene interactions. Such animations are difficult because they require complex planning of both periodic and aperiodic movements to complete a given task. Creating them at production-ready quality is not straightforward and is often very time-consuming. Instead, our system can synthesize different movements and scene interactions from motion-capture data, and it lets the user seamlessly control the character in real time with simple control commands.
- Prototype: A venture or organization building and testing its product, service, or business model
- A new application of an existing technology
All of this is a completely new feature.
Since our model learns directly from the scene geometry, the motions naturally adapt to variations in the scene. We show that the Neural State Machine can generate a large variety of movements, including locomotion, sitting on chairs, carrying boxes, opening doors, and avoiding obstacles, all from a single model. The model is responsive, compact, and scalable, and it is the first such framework to handle scene-interaction tasks for data-driven character animation.
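The published Neural State Machine generates motion by blending the weights of several expert networks with coefficients produced by a gating network. The sketch below illustrates just that expert-blending step with plain Python and toy 2x2 weight matrices; the real system uses much larger matrices and a learned gating network.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def blend_experts(expert_weights, gate_logits):
    """Blend per-expert weight matrices into one matrix using
    softmax gating coefficients (the core mixture-of-experts step)."""
    alphas = softmax(gate_logits)
    rows = len(expert_weights[0])
    cols = len(expert_weights[0][0])
    return [[sum(a * W[i][j] for a, W in zip(alphas, expert_weights))
             for j in range(cols)] for i in range(rows)]

# Two 2x2 experts; equal gate logits give a 50/50 blend.
W1 = [[1.0, 0.0], [0.0, 1.0]]
W2 = [[3.0, 0.0], [0.0, 3.0]]
B = blend_experts([W1, W2], [0.0, 0.0])
```

Because the blended matrix changes every frame with the gating output, one compact model can cover very different behaviors (walking, sitting, carrying) without a separate network per task.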
We use the following hardware, software, and pipeline components:
1) Mocap suit: Smartsuit Pro from www.rokoko.com, single suit $2,495 + extra textile $395
2) GPU + CPU: $5,000
3) Office premises: $2,000
4) Data preprocessing
5) Prerequisite software licenses: Unity3D, Unreal Engine 4.24, Maya, MotionBuilder
6) Model building
7) AWS SageMaker and AWS Lambda inferencing
8) Database management system
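For the serving side of the pipeline, the client would serialize a request, invoke the model behind AWS Lambda, and parse per-frame sparse points out of the response. The sketch below shows only the serialization logic; the function name, field names, and schema are hypothetical, and the actual boto3 call is left as a comment since it needs live AWS credentials.

```python
import json

def build_payload(text, character_id):
    """Serialize a lecture-text request for the inference endpoint.
    Field names here are illustrative, not the project's real schema."""
    return json.dumps({"text": text, "character": character_id})

def parse_response(body):
    """Extract per-frame sparse points from a JSON response body."""
    data = json.loads(body)
    return data["frames"]

# With boto3 the call would look roughly like this (not run here):
#   import boto3
#   out = boto3.client("lambda").invoke(
#       FunctionName="castme-infer",  # hypothetical function name
#       Payload=build_payload("Hello class", "teacher-01"))
#   frames = parse_response(out["Payload"].read())

payload = build_payload("Hello class", "teacher-01")
frames = parse_response('{"frames": [[0.1, 0.2], [0.3, 0.4]]}')
```

Keeping serialization separate from the network call makes the request/response format testable without touching AWS.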
The demo video explains everything:
- Artificial Intelligence / Machine Learning
- Robotics and Drones
- Virtual Reality / Augmented Reality
The market includes the entire education industry, information sharing and explanation services, and internet and mobile users.
Is there enough demand?
To date, no product like this has been launched anywhere in the world. It builds on cutting-edge machine-learning and deep-learning techniques, and research on these topics is still ongoing. Edureka has tried to create a somewhat similar product, but not in the efficient way we are using here.
- Women & Girls
- Children & Adolescents
- Elderly
- Rural
- Peri-Urban
- Urban
- Poor
- Low-Income
- Middle-Income
- Minorities & Previously Excluded Populations
- Persons with Disabilities
- 4. Quality Education
- 5. Gender Equality
- United States
- India
Financial Projections (semi-annual)

| Date | Clients | COGS ($) | Revenue ($) | Net Cash Flow ($) | Gross Margin (%) |
| --- | --- | --- | --- | --- | --- |
| 30/06/2020 | 4,368 | 3,500 | 14,884 | 11,384 | 76.484 |
| 28/02/2021 | 6,412 | 5,000 | 22,549 | 17,549 | 77.826 |
| 30/06/2021 | 16,560 | 8,000 | 55,523 | 47,523 | 85.591 |
| 28/02/2022 | 35,600 | 20,000 | 135,000 | 115,000 | 85.185 |
| 30/06/2022 | 90,000 | 50,000 | 325,000 | 275,000 | 84.615 |
| 28/02/2023 | 200,000 | 95,000 | 700,500 | 605,500 | 86.5 |
| 30/06/2023 | 350,000 | 150,000 | 1,600,650 | 1,450,650 | 90.628 |
| 28/02/2024 | 1,000,000 | 400,000 | 5,100,000 | 4,700,000 | 92.156 |
| 30/06/2024 | 4,000,000 | 1,000,000 | 21,000,000 | 20,000,000 | 95.238 |
| 28/02/2025 | 10,000,000 | 2,500,000 | 45,000,000 | 42,500,000 | 94.444 |
| 30/06/2025 | 35,000,000 | 6,500,000 | 102,000,000 | 95,500,000 | 93.137 |
| 28/02/2026 | 90,000,000 | 17,000,000 | 450,000,000 | 433,000,000 | 96.222 |
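The derived columns in the projections follow directly from revenue and COGS; a quick check of the first projected row:

```python
def net_cash_flow(revenue, cogs):
    """Net cash flow as projected here: revenue minus COGS."""
    return revenue - cogs

def gross_margin_pct(revenue, cogs):
    """Gross margin as a percentage of revenue."""
    return 100.0 * (revenue - cogs) / revenue

# First projected row: revenue $14,884, COGS $3,500.
ncf = net_cash_flow(14884, 3500)        # 11,384, matching the table
margin = gross_margin_pct(14884, 3500)  # ~76.48%, matching the table
```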
Financial Support and Founders
We are actively working on securing financial support.