KOSA AI - future-proofing AI
In today’s economy, AI powers thousands of decision-making tools, and within healthcare specifically, the global AI healthcare market is growing at a CAGR of 46.1%, on track to reach $95.65 billion by 2028. AI in healthcare has the potential to drastically improve patient outcomes and health systems, yet it is often biased against historically underserved groups such as people of color and other minority groups, who are frequently misunderstood and underestimated. These biases can be found in AI applications across health systems, from clinical documentation to actively diagnosing diseases, leading to inequitable treatment for patients/customers and inefficiencies in healthcare services and processes. AI bias in health systems may amount to approximately $93 billion in excess medical care costs and $42 billion in lost productivity per year.
KOSA AI is developing use cases by detecting and mitigating bias within some of the fastest-growing AI applications in digital healthcare. One of these use cases is skincare diagnostics, where AI bias is diminishing the accuracy of melanoma detection through medical imaging. Across all malignant images, 62% have been found to come from male patients, because malignant image scan locations differ based on the patient’s gender: the torso is the most common location in males, while the lower extremities are the most common location in females. This biases the imaging algorithm toward larger surface areas, generating a higher proportion of malignant detections for male patients than for female patients.
Bias is particularly pervasive across healthcare settings. For instance, UnitedHealth's Optum AI system, applied to more than 200 million people in the US, showed drastic racial bias, denying care to 46% of qualifying Black patients due to the inaccurate assumption that those who incur the highest costs need crucial care most. Because Black patients spend less on medical care per year than white patients, the AI made biased decisions. But when the system's authors switched the target outcome from cost to the number of comorbid conditions, the bias was erased using the exact same model, and more people received the care they needed.
Bias is a concern across the entire healthcare ecosystem, arising through (1) the limited diversity of datasets and (2) unconscious biases imputed during development, which lead to AI algorithms that are biased by design. KOSA AI specifically addresses the latter, tackling the ethical issue before algorithms are further implemented and ensuring AI does not fail to deliver benefits to patients or increase health inequalities.
KOSA AI helps organizations improve their healthcare services and outcomes by mitigating biases present in AI decision-making tools. By auditing, explaining and monitoring bias and risks throughout the machine learning lifecycle, our AI governance software supports organizations in generating more inclusive and people-centered outcomes, ensuring patient safety, affordability, accessibility and equitability for all patients/customers. Our proprietary AI technology also incorporates human-in-the-loop and ethics-by-design principles. Benefits to businesses include identifying missed revenue opportunities through increased customer trust, increased efficiency (34%), enhanced product development and services (27%), and increased regulatory compliance with reduced litigation risk (26%).
Our automated solution seamlessly integrates into an organization's existing AI infrastructure. The product comprises four steps that provide support across the whole ML lifecycle: (1) it assesses and mitigates biases in current ML processes; (2) it audits the AI model to assess human impact and automate compliance checks; (3) it explains black-box model behaviors; and (4) it adds a monitoring module to track drift or malfunctions and allow developers to fix vulnerabilities. Building responsible AI is a team effort; we have therefore developed tools for all stakeholders, from the executive team to the developers. Our software outputs both an evaluation for the technical development team to understand biases within their systems and a quantifiable financial assessment for non-technical stakeholders to grasp the missed opportunities for the business.
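The drift-tracking step can be made concrete with a standard drift metric. KOSA's internal monitor module is not public, so the sketch below uses the Population Stability Index (PSI), a common way to quantify drift between a model's baseline score distribution and live traffic; function names, toy data and the 0.2 alert threshold are illustrative, not KOSA's actual implementation.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (expected) sample
    and a live (actual) sample of a model score or input feature.
    A PSI above 0.2 is a common rule of thumb for significant drift."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / bins or 1.0          # guard against a zero-width range

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / step), 0), bins - 1)
            counts[i] += 1
        # floor at a tiny value so empty bins don't produce log(0)
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]   # stable score distribution
shifted = [x + 0.5 for x in baseline]      # drifted live distribution
print(psi(baseline, baseline))             # ~0.0: no drift
print(psi(baseline, shifted))              # well above 0.2: drift alarm
```

A monitor built on a metric like this would recompute the index on each batch of live predictions and alert developers once the threshold is crossed.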
KOSA AI’s mission is to make technology accessible and inclusive of all people. We work to support SDG 10, reducing inequality. By 2030 we strive to empower and promote the social, economic and political inclusion of all, irrespective of age, sex, disability, race, ethnicity, origin, religion and economic or other status. As we target companies using AI at scale, we aim to serve global digital health systems and, in particular, traditionally underserved groups who have been denied equitable access to health services, equipment and medicines.
We work with large organizations that research or implement intelligent systems impacting billions of people across the globe. For instance, in the skincare diagnostics use case, we conducted a proof of concept by automatically neutralizing subjective bias in the classification problem used to predict skin cancer. We built an XGBoost model, an ensemble learning method that combines several base models to produce one optimal predictive model. We then applied the Reweighting bias mitigation method, effective in minimizing bias, within KOSA's Automated Responsible AI System (ARAIS) and passed the reweighted data back through the XGBoost model. This reduced bias in the training dataset and improved the accuracy of cancer detection across male and female populations. If implemented at scale, this technique could ensure safe and accurate diagnostics for over 100,000 patients in the US alone. And this is just one of hundreds of use cases for our software.
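The Reweighting method described above is known in the fairness literature as Kamiran-Calders reweighing: each (group, label) cell of the training data is weighted by its expected frequency divided by its observed frequency, so the protected attribute and the label become statistically independent in the weighted set. ARAIS internals are not public, so the following is a minimal from-scratch sketch with illustrative names and toy data, not KOSA's production code.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    expected / observed frequency so that group membership and label
    are statistically independent in the weighted training set."""
    n = len(labels)
    n_group = Counter(groups)               # marginal count per group
    n_label = Counter(labels)               # marginal count per label
    n_cell = Counter(zip(groups, labels))   # joint count per cell
    return {
        (g, y): (n_group[g] * n_label[y]) / (n * n_cell[(g, y)])
        for (g, y) in n_cell
    }

# Toy data: group 0 is over-represented among positive (malignant) labels
groups = [0, 0, 0, 0, 1, 1, 1, 1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
weights = reweighing_weights(groups, labels)
# Over-represented cells are down-weighted (here (0, 1) -> 2/3) and
# under-represented cells up-weighted (here (1, 1) -> 2.0), equalizing
# the weighted positive rate across groups.
sample_weight = [weights[(g, y)] for g, y in zip(groups, labels)]
```

In practice the per-sample weights are passed to the classifier's training call; XGBoost and most scikit-learn-style estimators accept a `sample_weight` argument for exactly this purpose.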
Furthermore, we work with a number of research institutions and organizations, including the University of Massachusetts Amherst (UMASS), Africa Health Business (AHB), the Institute of Electrical and Electronics Engineers (IEEE), Porsche and the Linux Foundation, to increase knowledge and awareness of AI bias mitigation and AI ethics; we want to ensure that AI-driven inequalities are better understood so that solutions respond to the needs of those impacted.
To speed up the reduction of AI-driven inequality, whether with the companies KOSA AI works with directly or with our target beneficiaries, KOSA AI is building an AI Academy. Here we provide course material and conduct workshops to educate and create awareness on specific research topics such as ethical AI, AI governance, definitions of AI fairness, bias within facial recognition software and other computer vision applications, and more. Through this platform, we aim to give our target population access to improved-quality data, to help shape a supportive digital policy environment that enables data sharing while protecting privacy and security, and to define the use cases where AI has the largest potential for impact. One of the key locations for the AI Academy’s deployment is Africa.
KOSA AI has a prominent presence in Africa, where we leverage our distributed model to improve our software and increase our impact. Our network on the continent enables us to (1) access diverse training datasets, which we can share with companies outside the continent that require more representative data; (2) build a data bank of more diverse datasets; and (3) increase the diversity of our team by mobilizing African talent. In Africa, we see an opportunity to directly grow KOSA AI’s vision and create tangible impact, especially for rural communities and historically underserved groups. Specifically in Kenya, where the emerging technology sector is rapidly growing into a consequential share of national GDP, AI governance within the tech ecosystem is crucial. KOSA AI’s AI Academy, research partnerships with organizations such as the Technical University of Kenya (TUK), and other business efforts will reduce inequalities in the digital workforce by empowering minority groups and underserved populations through education, awareness and training on responsible AI and AI bias.
KOSA AI was founded by entrepreneurs passionate about reducing inequalities exacerbated by technology. With multidisciplinary skills across software development, data science, business, and marketing, our full-time team of 7 has the technical and commercial expertise to fully realize the impact of our solution.
Both founders have come to understand the granular problems of AI bias not only through their sheer passion to fix them, but also through their diverse backgrounds: both are women with roots in Asia, Africa, and Europe. Moreover, KOSA AI’s team comes from all corners of the world, spanning from Ethiopia to South Korea. All of them have experienced bias firsthand at some point in their personal and professional lives, and together we are building a platform that addresses each issue around responsible AI.
Key profiles below:
Co-Founder/CEO Layla Li is a Boston University and Harvard University graduate with 7+ years of experience building technology solutions at companies including Tesla and Philip. Layla has built strong expertise in AI bias during her time developing automated decision-making systems for multiple international organizations.
Sonali Sanghrajka, Co-Founder/Chief Commercial Officer, has 10+ years of experience in the healthcare sector, driving brand and commercial strategies for products worth $500 million. Sonali has worked with patients who have been directly affected by bias in care delivery. Through her consulting services, she has also been privy to the challenges that AI companies face in bringing their AI products and solutions to the African continent.
- Identify, monitor, and reduce bias in healthcare systems, including in medical research and at the point of care
- Pilot
We believe Solve’s mission to build equitable health systems for historically underserved groups directly aligns with our own mission to create ethical and responsible AI for all. Therefore, we seek Solve’s support with the following:
Scaling and partnering: As we launch our MVP in Europe, we are looking for organizations to support us and partner with us to pilot our product across the region. We are looking for companies predominantly in Healthcare and Lifesciences. We would also welcome the opportunity to work with academic and research institutions to further increase the reach of our product.
Grant funding: We welcome grants to enable collaborations with academic institutions and fund research projects that advance the field of ethical AI and bias impact. In particular, this supports the feedback loop through which organizations and companies in Europe and the US benefit from our efforts in Africa.
Impact measurement: We are seeking support to develop an impact measurement framework to better understand the needs of our target beneficiaries. We have focused on the development of our product to ensure maximum usability and results for our customers; however, we would like to develop a system that enables us to better study our target beneficiaries and integrate their needs into our product development processes.
Networking and mentorship: We welcome the opportunity to network with institutions and individuals that share the same vision of reducing bias and creating more responsible and trustworthy AI.
- Monitoring & Evaluation (e.g. collecting/using data, measuring impact)
KOSA AI is the world's first SaaS solution for AI bias auditing and mitigation. Today, most organizations that use AI in decision-making tools do not conduct algorithmic auditing or monitor for potential biases, primarily due to four barriers: (1) the misconception that auditing is expensive and an obstacle; (2) the lack of shared principles on ethical AI practices; (3) the lack of buy-in from non-technical stakeholders (AI managers acknowledge that responsible AI is a priority for them, but are often held back because they lack the right tools); and (4) the fact that most organizations only tackle biases in either the pre-processing or the post-processing stage.
Our solution addresses precisely these barriers. First, it is half as expensive and 10x more efficient than current alternatives on the market. Second, the research conducted with our partners ensures that the latest ethical definitions and fairness strategies are encapsulated in KOSA AI’s design, reinforcing the inclusivity of underserved groups and ethics-by-design principles. Third, it targets every AI and ML stakeholder across the organization with a set of relevant tools that are didactic and easy to use, increasing understanding of biases in data and detailing the financial benefits of correcting them. Lastly, KOSA AI’s ARAIS addresses the bias problem at every step of the process.
Healthcare systems are complex, with many interdependencies that amplify biases throughout the entire ecosystem, leading to profound consequences that can mean the difference between life and death for a patient. KOSA AI plays a vital role in significantly reducing these biases by stopping the trickle-down effect.
Furthermore, our ties in Africa give us a significant advantage over our competitors. We can access diverse training datasets to share with companies outside the continent that require more representative data, and our partnerships with institutions such as AHB, on a continent often negatively impacted and neglected by major technology markets, represent a huge future opportunity for KOSA AI to play a decisive role in influencing the next AI frontier.
We work to achieve SDG 10, reducing inequalities, especially target 10.2, empowering and promoting the social, economic and political inclusion of all. Our mission is to make technology more inclusive of all races, genders, and ages.
There are three main activities that KOSA AI is investing in to ensure a transformational impact on the millions of people that are unfairly underserved.
We have developed a software solution aimed at reducing gender-, racial- and ethnicity-based inequalities caused by biases in enterprise AI-powered decision-making tools. We leverage the business opportunity of de-biasing AI and ML for companies so that they can generate more inclusive and people-centered outcomes, ensuring patient safety, affordability, accessibility and equitability for all patients/customers. This drives the deployment of our technology and accelerates the reduction of inequalities for millions of individuals.
The extensive research carried out with our partners, academic institutions and organizations listed above, on ethical definitions and fairness strategies, is not only encapsulated in KOSA AI’s product and service design but also generates increased awareness around AI bias detection and mitigation.
Our established presence in Africa through the AI academy, research partnerships with organizations such as AHB, TUK and other business efforts, such as collection and sharing of more representative data sets, will further enhance our vision to create equitable, accessible and inclusive AI for the world, specifically impacting minority groups and the historically underserved population. Leveraging our ties in Africa represents a huge future opportunity for KOSA AI to have a decisive role in influencing this next AI frontier.
The key 2022 objectives and results we are tracking or benchmarking against are:
Product: 80% feature adoption to ensure we are building a product that companies will benefit from.
Product: 80% task success rate to ensure the product is easy to use.
Product: 80% success rate on bias mitigation in-house beta testing with use case application.
The above score is calculated based on a mix of fairness evaluation, compliance score and impact assessment based on the industry use case and location. This score is built into our system so we can ensure equitable healthcare.
Additionally, through our model monitoring services, we can directly calculate the number of people impacted through our services, and we are aiming to serve 0.5 million people this year.
Marketing: 10 customer leads generated from our active content marketing and brand awareness efforts.
Marketing: 500 active followers on our social media platforms.
Marketing: 500 website visits through marketing efforts. We are already at 375 visits.
Sales: 80% success rate of customer traction from email outreach (cold and warm introductions).
Customer: NPS > 6 from customer interviews.
Competition: Market differentiation from our competitors to ensure we are building a unique value proposition.
Team: eNPS > 7 from employees to ensure we are creating a high performing, sustainable team.
Team: Score over 4/5 from employee onboarding experience.
Finance: Monthly burn rate is currently USD 20,000. As a software solution, we are lean by definition, with low OPEX.
In the long run, we would like to measure the following:
Number of people indirectly impacted i.e. our target beneficiaries (we are currently in the process of determining a viable process to do so)
Monthly active sales per use case/per industry vertical
Economic activity e.g. liquidity ratios
Value determined from key partnerships e.g. percentage of useful research that can be (1) incorporated into the product solution and (2) impact-driven on the target population i.e. minority groups
Our mission is to make technology more inclusive of all races, genders, and ages. If we equip companies with an affordable, multi-stakeholder solution that facilitates the identification, rectification, and monitoring of biases inherent to AI and ML processes, we can enable historically underserved groups to access (1) financial services and education that they previously struggled to obtain; (2) equitable, affordable healthcare services; and (3) a fairer justice and policing system.
We conducted interviews with more than 100 AI stakeholders, ranging from Heads of AI Business Development to data scientists to Heads of Risk and Compliance, across 10 organisations, and observed that many data engineering managers found it difficult to obtain buy-in from leadership because leadership failed to understand the business value of responsible AI. As a result, we believe that if we can showcase the ROI that companies stand to gain, we can accelerate the deployment of AI bias identification and mitigation technology that will change the lives of millions of people.
Our theory of change logic framework is outlined below:
Activities/Inputs: We deliver our zero-integration and fully automated algorithmic auditing solution to companies and organizations across five sectors: Healthcare and Lifesciences; Banking, Financial Services & Insurance (BFSI); Public; Technology and Services; and Education.
Outputs: Companies perform AI bias auditing and mitigation, improving their decision-making tools and, for example, increasing the diagnostic accuracy of melanoma detection through medical imaging for both male and female populations. In addition, AI bias is exposed as a recurrent problem to the wider public, increasing awareness.
Outcomes: Decision-making tools are improved and biases are reduced. This enables our target beneficiaries equitable access to better healthcare facilities and services (as well as financial services, and education) while experiencing a fairer justice and policing system. In addition, an increasing share of companies sees the business value of investing in responsible AI, further increasing the impact.
Impact: In the long term, bias-free AI enables fair decision making that liberates and empowers underserved communities, reducing the inequalities and injustices experienced by traditionally marginalised groups.
On the backend, our tech stack is Python, Flask, Postman, AWS and Vue.js. We build a web application that connects to customers’ data warehousing solutions (GCP, AWS, etc.) and AI development platforms (SageMaker, Watson, etc.) through APIs, then perform data and model evaluations through custom algorithms on the available data and models.
On the front end, our platform features two toolkits: one for technical users, the other for non-technical users. (1) The developer toolkit allows software engineers to connect KOSA AI’s algorithm services to the company’s environment and select the desired fairness definition to evaluate AI bias in training datasets and models; they can adjust metrics based on the relevant use case and select the desired mitigation strategy. They can also use the tool to explain and monitor model impact and make adjustments in real time. (2) The non-technical dashboard enables stakeholders with all levels of technical literacy to participate in setting the company’s ethical AI strategy, understand the impact of their intelligent systems, and measure and track the progress of all projects.
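Each fairness definition a developer selects corresponds to a concrete metric computed over the model's predictions. KOSA's actual toolkit API is not public; as an illustrative sketch, two of the most widely used definitions, demographic parity and the disparate impact ("four-fifths") ratio, can be computed over toy predictions like this:

```python
def positive_rate(preds, groups, g):
    """Share of positive (1) predictions within group g."""
    in_group = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(in_group) / len(in_group)

def demographic_parity_diff(preds, groups):
    """Largest gap in positive-prediction rates between groups;
    0.0 means the predictions satisfy demographic parity."""
    rates = [positive_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

def disparate_impact_ratio(preds, groups, privileged):
    """Unprivileged-to-privileged positive-rate ratio; values below
    0.8 fail the common four-fifths rule."""
    unprivileged = next(g for g in set(groups) if g != privileged)
    return (positive_rate(preds, groups, unprivileged)
            / positive_rate(preds, groups, privileged))

preds = [1, 1, 1, 0, 1, 0, 0, 0]    # toy model outputs
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute per sample
print(demographic_parity_diff(preds, groups))     # 0.5
print(disparate_impact_ratio(preds, groups, 0))   # ~0.33, fails the 0.8 rule
```

Different definitions (equalized odds, predictive parity, etc.) can conflict with one another, which is why a toolkit lets the developer choose the one appropriate to the use case rather than imposing a single metric.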
- A new business model or process that relies on technology to be successful
- Artificial Intelligence / Machine Learning
- Big Data
- Software and Mobile Applications
- 5. Gender Equality
- 10. Reduced Inequalities
- Kenya
- Netherlands
- United States
- Kenya
- Netherlands
- United States
- For-profit, including B-Corp or similar models
At KOSA AI, we embrace diversity; in fact, diversity, equity, and inclusion are essential values and key pillars of our vision, mission, and strategy. Firstly, KOSA AI’s vision is to create equitable, accessible and inclusive AI for the world, specifically impacting minority groups and historically underserved populations. Secondly, just as our software has no geographical borders, neither does our team: our co-founders are women from diverse backgrounds, Layla is Chinese, Sonali is Indian-Kenyan, and the team itself comes from Korea, India, Ethiopia, Kenya, the USA and Greece. We believe this diversity is essential for us to achieve our mission of reducing technology-fueled inequalities.
In addition, our presence in Africa and our geographically distributed remote working culture empower us to actively recruit employees from every continent, which further increases the diversity of our team and enables holistic and dynamic input and outcomes.
At KOSA AI, we believe that everyone should have access to the same services, products, and jobs, irrespective of race, gender, age or other status. Large companies and government agencies play an important role in ensuring this access, which is why we have developed our automated responsible AI system.
Our algorithmic auditing software enables organizations in regulated sectors, such as Healthcare, BFSI and the Public sector, to build trust in their AI systems at a time when the wide use of AI decision-making technology fuels inequality for people of color, women and other minority groups. By subscribing to our product, these organizations can make their technology-powered decisions more responsible while improving outcomes for millions of traditionally marginalized and historically underserved people.
ARAIS enables companies to identify and mitigate the biases present in their AI environment, check for compliance, and ensure that safeguards are in place if a model starts to drift or malfunction. We offer ARAIS to customers on a usage-based monthly subscription model priced at USD 6,250/month. Many development teams are aware of the biases in their AI and ML models; however, they confess that obtaining buy-in from their leadership teams is often cumbersome. As a result, we have developed a solution that targets multiple stakeholders and enables executives to grasp the ROI that companies stand to gain from de-biasing their AI.
- Organizations (B2B)
We plan to reach financial sustainability through a combination of investment capital from venture capital (VC) firms and angel syndicates, grants, and product sales to private enterprises and government agencies. Our current operations are largely sustained by the pre-seed capital we recently raised: 350,000 USD. In the near future, we aim to raise a seed round, and in the long term our main source of revenue will be generated from our subscription model priced at $6,250/month. This pricing allows us to make $12 in return for every $1 spent on each customer, which translates to a 65% contribution margin, considering that our largest cost falls under product development and is not directly associated with individual customers. The fee was calculated based on (1) the average size of the companies we are targeting (large enterprises with more than 1,000 employees), (2) average data usage and the number of active decision-making models, and (3) the average price organisations are willing to spend on AI governance.
With our upcoming MVP launch with our pilot customers, we expect to secure $0.4m in contract value after completion. A full product launch is planned in mid-2022 with additional recruitment of an enterprise sales team to drive enterprise sales. Through built-up traction, a reputable product and well-established partnerships, we are hoping to acquire ~600 clients and realise ~$50 million in revenue by our 5th year.
In mid-2021 we raised pre-seed capital of 350,000 USD from VCs (Echo VC in Africa and APX in Germany) and angel syndicates in North America. We expect this funding to last until Q3 this year, covering our full MVP launch, beta development, our CTO hire and other legal and administrative expenses.

Co-Founder & CCO