Project Domino
From masking to vaccination, people everywhere want to stop medical misinformation. This is especially true of digital platforms, civic groups, and researchers. However, the status quo is broken: anti-misinformation tech is inaccessible to nearly everyone. It should instead be as ubiquitous as the spam filters automatically used by billions of people every day. Project Domino is building an open misinformation AI ecosystem to flip the community from fragmented competition to open collaboration.
Our state-of-the-art tech listens, thinks, and acts. It monitors social media from around the world to build a real-time graph of 500M+ data points on who is promoting what. Graph & deep learning algorithms automatically detect misinformation attacks. APIs and tools empower stakeholders to act.
Openness unlocks scale. Volunteers are joining from around the world, and our investigators are breaking national news. For global reach, we are adding self-serve capabilities for at-risk communities, investigators, and popular platforms.
Project Domino prevents medical misinformation spread and radicalization. Our immediate focus is open AI for COVID interventions. Most digital platforms, investigators, and organizations already rely on AI services for tasks like preventing phishing. Disastrously, they were unprepared for medical misinformation.
The US has been especially hard-hit by COVID, with over 500,000 people killed. Despite that, 37% of adults remain vaccine hesitant. This is far from America’s 90+% vaccination rate for polio and measles. Variants of the vaccine fight are now breaking out around the world.
Promisingly, social media scales. Half the world uses it, and that number has been increasing 10% every year. Our prototype uses Twitter, enabling us to directly analyze 20% of the US adult population: we are already working with 500 million data points collected over the last year. Platforms like Facebook will multiply that reach.
Our emphasis on openness matters. Scaling any solution is difficult, so the security community’s global success against malware and spam is significant.
Finally, we note Project Domino is already helping beyond COVID. As two examples, we are mapping misinformation in the cancer community, and helped reveal voter suppression against African Americans. We are building foundations for open AI against misinformation.
Project Domino resembles how the security community defeats most spam and malware:
Share. Just as platforms and organizations detect spam and malware at scale by sharing threat intelligence and tools, we help share misinformation intelligence and tools. The status quo for misinformation is not to share, and that does not scale.
Ecosystem. We act through empowered platforms, investigators, and groups.
Specialize. To directly aid civic groups like ePatient cancer communities, we are streamlining tools they can directly use and share. Likewise, we are making investigator tools that combine digital forensics capabilities with the nuances of social network analysis and natural language processing.
AI that listens, thinks, and acts. We combine state-of-the-art techniques and extend them. Our open engine builds a live graph of who is sharing what by continuously monitoring social media. We enrich it with custom detection AIs, such as transformer neural networks that understand text like vaccine side-effect discussions even before government agencies hear about them, and graph neural networks that boost traditional algorithms with social intelligence. End-to-end GPU acceleration lets analysts get results faster and visualize more. (A minimal sketch of this pipeline follows this list.)
It’s easy. We are now shifting towards self-serve UIs and APIs to help scale interventions.
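To make the "listens, thinks, acts" loop above concrete, here is a minimal sketch, assuming an off-the-shelf Hugging Face zero-shot classifier and a networkx graph rather than our production stack; the model name, labels, and threshold are illustrative placeholders, not our deployed configuration.

```python
# Minimal sketch (not our production pipeline): score posts for vaccine
# side-effect self-reports with an off-the-shelf transformer, then record
# who-shared-what edges in a graph. Model, labels, and threshold are
# illustrative assumptions.
import networkx as nx
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
LABELS = ["vaccine side-effect self-report", "other"]

def side_effect_score(text: str) -> float:
    """Confidence that a post self-reports a vaccine side effect."""
    result = classifier(text, candidate_labels=LABELS)
    return dict(zip(result["labels"], result["scores"]))[LABELS[0]]

G = nx.DiGraph()  # live who-shares-what graph: account -> post

def ingest(account: str, post_id: str, text: str, threshold: float = 0.8) -> None:
    G.add_node(account, kind="account")
    G.add_node(post_id, kind="post", side_effect=side_effect_score(text) >= threshold)
    G.add_edge(account, post_id, kind="shared")

ingest("user_123", "post_456", "Second dose yesterday; woke up with fever and chills.")
```

In the actual engine, the same listen-score-record loop runs continuously at scale, with graph neural networks and GPU acceleration layered on top, as described above.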
Ultimately, we are targeting 62% of the current world population: everyone using digital devices should be able to dial down medical misinformation just like they can other spam.
We are prioritizing the worldwide COVID infodemic, especially for at-risk communities such as cancer patients. Our initial focus on social media means analyzing half the world; our prototype started with Twitter and already analyzes 20% of US adults (and many of their off-Twitter social circles). To ensure fit, we take a participatory design approach on the intervention side of our efforts and use state-of-the-art AI methods for analytics.
To aid digital platforms, our team had a head start in knowing what to build: many of us come from security and fraud groups already aiding digital platforms on analogous problems like spam. Just as spam filters are ubiquitous in digital platforms, so should medical misinformation AI be. We are opening APIs to digital platforms, which lets their developers focus on interventions instead of having to build a world-class misinformation AI team in-house (or, more likely, doing nothing). For example, newspaper sites often must choose between having a comments section and inadvertently promoting misinformation, so this gives them control back. Likewise, individuals expect their email providers to offer configurable spam controls. Surprisingly, supporting even a few platforms can give outsized reach: Facebook has billions of users, and WordPress powers 40% of all websites. Providing misinformation AI capabilities to digital platforms goes a long way towards giving control back to their users.
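As a sense of what a platform-facing integration could look like, here is a hypothetical sketch in the spirit of existing spam-filter APIs; the endpoint, request fields, and response shape are placeholders invented for illustration, not a published specification.

```python
# Hypothetical sketch of a platform-facing misinformation check, analogous to
# a spam-filter API. Endpoint, fields, and response shape are placeholders.
import requests

def check_comment(text: str, author_handle: str) -> dict:
    """Ask a misinformation-AI service to score a comment before publishing it."""
    resp = requests.post(
        "https://api.example.org/v1/misinfo/score",   # placeholder endpoint
        json={"text": text, "author": author_handle, "domain": "medical"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"risk": 0.92, "topics": ["vaccine"]}

# A newspaper comment system might hold high-risk comments for review:
verdict = check_comment("The vaccine rewrites your DNA!!", "@anon123")
if verdict.get("risk", 0.0) > 0.8:
    print("hold for moderator review")
```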
Addressing US COVID infodemic handling is critical. Over 500,000 Americans have died, far more than in any other country, and 37% of US adults are still vaccine hesitant, roughly 4X worse than for established vaccines such as polio. The problem is now largely one of mass deradicalization, so we are focusing on empowering digital platforms and investigators with public awareness track records. We are directly partnering with pilot groups to ensure we deliver the desired capabilities. Importantly, we have already found that our approach generalizes to other kinds of misinformation, and we are already aiding interventions beyond COVID.
An important dimension when battling COVID and other medical misinformation risks is recognizing that they disproportionately hurt specific demographics. For example, about 40% of adults will develop cancer, and cancer is the 1st or 2nd leading cause of death for those under 70: cancer communities need extra support. In the case of COVID, cancer has the additional complication that treatment often compromises a patient’s immune system, making patients even more susceptible to COVID. We are working with The Light Collective to directly engage with cancer ePatient communities, initially through town halls, self-serve tools, and moderation intelligence tools. Crucially, we are listening: before we scale the approach to many communities, we are working with cancer subcommunities to meet their requests.
- Prevent the spread of misinformation and inspire individuals to protect themselves and their communities, including through information campaigns and behavioral nudges.
Project Domino democratizes medical misinformation AI. As COVID disastrously showed, misinformation is not solved by healthcare worker FAQs and chatbots. Practitioners recommend acting before misinformation spreads and victims radicalize. Our priority stakeholders are thus digital platforms, investigators, and civic groups. Through them, we work at global scale. Excitingly, while we prioritize COVID, our techniques generalize well enough to also make non-COVID headlines.
Project Domino provides a social sensor with surprising speed and reach. For example, we tracked self-reports of vaccine side effects as soon as people tweeted them, and thus before results reached central authorities and crossed EHR jurisdictions.
- Pilot: An organization deploying a tested product, service, or business model in at least one community.
Our prototype proved our technology, tested open AI appetites, and already made headlines:
Tech: Our 500,000,000-entity knowledge graph continuously monitors social media and runs medical AI detections for COVID and misinformation tasks.
Openness: We're proving the appetite. Diverse people and partners regularly volunteer, including 100+ data practitioners arriving at our volunteer channel; hardware and software donors like Google and Neo4j help us scale; and we work with the wider open source community on social/misinformation data pipelines.
Results: We successfully map misinformation sharing, detect vaccine side-effect self-reports, and have published early warnings of large voter suppression campaigns.
Our pilot phase iterates with representative intervention partners. We recruited our first two: The Light Collective, for working with ePatient communities (19M), and Social Forensics, for investigative journalism published by global news organizations. Twitter covers 20% of US adults and 6% of adults worldwide. The pilot phase establishes our growth playbook.
- A new business model or process that relies on technology to be successful
Medical misinformation prevention needs a revolution.
Vaccine anthropologist Heidi Larson, commenting on misinformation driving vaccine hesitancy, recently stated, “We should look at rumors as an ecosystem, not unlike a microbiome.”
Figure: Automatically mapping coordinated misinformation spread
When misinformation spreads, such as in closed patient groups on Facebook and in comments on WordPress blogs, information dissemination techniques like after-the-fact FAQs are too little, too late. The problem is that traditional healthcare solutions for medical information dissemination were not designed to withstand persistent adversarial radicalization attacks. Security and IT teams fighting malware and spam do build tools against persistent threats, just not for medical domains. We need to combine the two.
Technically, Project Domino is on the vanguard of medical misinformation AI. Few groups are able to simultaneously pursue fused forensics/NLP/social network analysis, continuous large-scale automation, and open innovation, and then specialize them for medical domains.
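To illustrate the coordinated-spread mapping shown in the figure above, here is a minimal sketch of one standard approach, assuming pandas and networkx; the toy data, the 5-minute window, and the cluster-size threshold are illustrative assumptions rather than our tuned pipeline.

```python
# Minimal sketch: link accounts that share the same URL within a short window,
# then inspect dense clusters as candidates for coordinated spread.
# Data, window, and thresholds are illustrative assumptions.
from itertools import combinations
import pandas as pd
import networkx as nx

# One row per share event (toy data)
shares = pd.DataFrame({
    "account": ["a1", "a2", "a3", "a1", "a2", "a4"],
    "url":     ["u1", "u1", "u1", "u2", "u2", "u3"],
    "minute":  [0, 1, 2, 30, 31, 5],
})

G = nx.Graph()
for _, grp in shares.groupby("url"):
    rows = grp.to_dict("records")
    for p, q in combinations(rows, 2):
        if p["account"] != q["account"] and abs(p["minute"] - q["minute"]) <= 5:
            prev = G.get_edge_data(p["account"], q["account"], default={})
            G.add_edge(p["account"], q["account"], weight=prev.get("weight", 0) + 1)

# Accounts repeatedly sharing the same links in lockstep cluster together
candidates = [c for c in nx.connected_components(G) if len(c) >= 3]
print(candidates)  # [{'a1', 'a2', 'a3'}]
```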
Our open approach aims to revolutionize the work of key stakeholders fighting medical misinformation:
Data teams around the world can switch from competing to collaborating
Digital platforms gain pluggable tools that they can use just like they do for spam and malware, helping billions of people
From journalists to academics, more investigators can perform data-driven investigations, which are currently out of reach for most
Groups like corporations and ePatient communities can go from nearly nothing to targeted defenses
Medical misinformation prevention is desperately overdue for its industrial revolution. To bring it about faster and at scale, Project Domino is building open misinformation AI infrastructure.
- Artificial Intelligence / Machine Learning
- Audiovisual Media
- Behavioral Technology
- Big Data
- Crowd Sourced Service / Social Networks
- GIS and Geospatial Technology
- Software and Mobile Applications
- Pregnant Women
- Elderly
- Rural
- Peri-Urban
- Urban
- Poor
- Low-Income
- Middle-Income
- Minorities & Previously Excluded Populations
- Persons with Disabilities
- 3. Good Health and Well-being
- 4. Quality Education
- 8. Decent Work and Economic Growth
- 9. Industry, Innovation and Infrastructure
- 10. Reduced Inequality
- 11. Sustainable Cities and Communities
- 16. Peace, Justice and Strong Institutions
- 17. Partnerships for the Goals
Within 5 years, we hope to reach 5+ billion people through incorporation into digital communication software platforms such as social networks and email providers.
We work with several types of intermediaries.
Misinformation investigators, whose results reach wide audiences through articles in major news outlets like the BBC (438M viewers), documentaries on networks like HBO (140M subscribers), and posts that go viral on social media. We aim to grow the number of investigators and, through them, the audiences becoming aware of their results.
Community organizations such as the Light Collective who directly engage with especially at-risk communities such as cancer ePatient groups. We aim to reach 10K-100K people this year by joining digital town halls, and as we establish viral & self-serve community patterns, many more people next year. Likewise, we will launch services for large organizations, e.g., Walmart employs over 2,000,000 people.
As described above, we aim to contribute our medical misinformation intelligence to the digital platforms serving 4+ billion people, especially social networks and email providers who already use spam filtering services.
As we are moving from the prototype stage to piloting, we are focusing on proving value and replicability.
Value: A trailing external indicator is whether results of Project Domino analyst efforts get published, which includes events like the recent HBO documentary on QAnon and articles from national/international news agencies like ABC. Internally, we use standard AI quality measurements like timeliness, scale, and AUC. We have already analyzed 200M+ COVID conversations from 20%+ of US adults and 6% of adults worldwide, and presented novel and difficult-to-achieve results using various deep learning methods; we expect these results to keep growing.
Replicability: We look at the individual reach of a partner organization like The Light Collective and communication platforms, how self-serve we can make our assistance, and how many organizations are like them. For example, while The Light Collective is small, the ePatient community is large, and digital town halls enable disproportionate reach and a model for self-serve replication.
As we move from piloting to growth, we will consider more traditional traction numbers. These include the number of people ultimately reached on a recurring basis through our partners. To get there, common operational metrics include partner NPS scores, growth metrics like virality coefficients, and AI quality metrics like global data coverage and AUC curves.
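As an illustration of the AUC quality metric mentioned above, here is a minimal sketch with made-up labels and detector scores, assuming scikit-learn; it is not drawn from our actual evaluation data.

```python
# Illustrative only: compute AUC for a detector on a hypothetical held-out set.
from sklearn.metrics import roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                           # 1 = confirmed misinformation
y_score = [0.91, 0.12, 0.78, 0.65, 0.30, 0.45, 0.88, 0.05]   # detector scores

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")          # 1.00 on this toy data
```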
- Not registered as any organization
Leo Meyerovich, CEO, Graphistry
Cody Webb, researcher
Geoff Golberg, CEO, Social Forensics
Andrea Downing, President, The Light Collective
Volunteers/advisors:
Sean Griffin, CEO, DisasterTech
Anita Nikolich, Director, UIUC
Julie Wu, Fellow, Stanford
Unnamed, crime
Dave Bechberger, Graph Architect, Amazon
Michele Catasta, Scientist, Stanford
According to the World Health Organization’s infodemic research, tactics must aim for 4 types of activities:
Listening to community concerns and questions
Promoting understanding of risk and health expert advice
Building resilience to misinformation
Engaging and empowering communities to take positive action
The public’s understanding of healthcare is deeply rooted in their personal experiences and influenced by peers on social media. Their experiences vary widely by income, health status and social determinants of their health.
To design for impact, Project Domino brings together leaders from key technologies and impacted patient communities. We value expert advice and participatory design with the community.
Our intervention partner The Light Collective is at the center of patient advocacy for at-risk groups with large followings on social media. Their constituency positions us well to further develop use cases and strategies for adoption.
Our other intervention partner Social Forensics has investigated a variety of international topics with a network of journalists at top news organizations. It is in a good position for wide dissemination. Likewise, it is representative of how modern OSINT and investigative journalism works.
Graphistry is contributing its technology, expertise, and network. It is a top tool of choice for digital forensics, anti-fraud, misinformation, and social media research, and it regularly works with top teams on these problems. It helped bootstrap recruitment, gaining early volunteers from leading academic institutions (ex: Stanford), misinformation experts, and AI/data organizations (ex: Amazon, Neo4j).
Long-term, our advantage will steadily grow from network effects around our data and intervention partners.
We follow several key principles:
Trustworthy. Misinformation is politically fraught, and taking political positions may alienate the victimized groups we want to help, so our primary political stance is pro-information and pro-security. Within reason, we are otherwise neutral. Supporting diversity and inclusion falls under supporting neutrality. Our grassroots team is thus diverse along lines such as gender, education, nationality, and profession.
Our AI ethics follows a simple guiding principle: no aggregation without representation. For us this means that any analytics we produce must be co-designed by online communities who are directly affected by problems we’re working to impact.
The Light Collective is a nonprofit organization that directly serves the leaders and organizers of patient communities on social media. Their diverse constituency not only represents patient populations seeking health information, but their leadership includes diverse community leaders.
- Organizations (B2B)
We are excited for a variety of aspects of Solve. As we move from pure volunteering and prototyping to pilots and growth, Solve advances our mission through funding, in-kind services, and visibility.
- Financial (e.g. improving accounting practices, pitching to investors)
- Legal or Regulatory Matters
- Public Relations (e.g. branding/marketing strategy, social and global media)
- Monitoring & Evaluation (e.g. collecting/using data, measuring impact)
- Technology (e.g. software or hardware, web development/design, data analysis, etc.)
As we're beginning more significant community interventions and gathering financial support, we can use operational support: grant writing, legal agreements, etc. Likewise, we can always use more data science and development help!
Our prototype phase achieved impressive results and showed the type of impact we can have by enabling self-serve interventions. Remarkably, our unpaid volunteers broke national news twice: first by detecting a digital voter suppression campaign during the US Senate runoff elections, and then by playing a role in revealing the identity of QAnon. Though unpublished, we also succeeded beyond expectations on AI tasks like vaccine side-effect social sensing.
We have momentum and are transitioning to committed pilots. To support these commitments, we are gathering grants. The pilot phase will unlock scale for more effective and self-serve solutions, and in turn, sustainable operations.
To enable intervention partners and an open source ecosystem, we need a classic dedicated data product team. A small group can go a long way, especially since our open source model lets us solicit help on smaller pieces. Likewise, to ensure we’re building for scaling interventions, we want to support 2 representative groups on impactful problems. The result will be that those use cases have direct impact, and we will be ready to scale by helping similarly structured groups with only a fraction of the work for each. Excitingly, that’s the beginning of scale out!
We like to collaborate with data science technology partners, and for interventions, with investigators & journalists, digital platforms of all sizes, and community organizers.
The Light Collective is a good example of a community group, and upon success, we are interested in other groups, including government and commercial.
For digital platforms, we’ll be looking into the top 50 social media sites and newspaper websites that likely have misinformation problems but no satisfactory anti-misinformation tooling.
Technology partners like Google, Microsoft, Facebook, ...: Our GPU and data pipeline hardware resources were contributed by sponsors, and we would love continued support of this kind! Likewise, we are eager for collaborations on their non-proprietary models.
Legal: Open governance and data usage agreements matter, yet lawyers are expensive. We would rather spend any funding on our makers and interveners, so legal support would be amazing.
- Yes, I wish to apply for this prize
With over 500,000 Americans dead, the COVID pandemic has tragically shown how unprepared America’s digital infrastructure is for protecting the general population and at-risk groups against medical misinformation. Project Domino’s volunteer-based efforts have gone far: we’ve built world-class social media medical AI, forged top-tier partnerships, and even helped uncover QAnon leadership and African American voter suppression campaigns. We are now working on paths that scale to ubiquitous use.
Sadly, we can only stretch nights and weekends so far in how much we can serve our communities and execute on our medical misinformation technology opportunity. All too often, we must make difficult choices between working on interventions for communities like cancer patients and marginalized communities, AI infrastructure, new investigations, and fundraising. The problem is extreme: it is frustrating to lack the resources to act.
Success begets success. We are raising $500K-$2M from public and private groups. An initial grant would go far in providing resources and establishing fundraising momentum. An exciting aspect of our scalable approach is that supporters will not only help advance our AI technology and health interventions, but also bootstrap our approach to scaling innovation and sustainable revenue through open AI.
America’s approach to medical misinformation is akin to pre-industrial revolution times. With support, we are evolving misinformation AI to be as easy and accessible as spam filters and autocorrect. The security community has achieved that for problems like malware, and America urgently needs it done for medical misinformation.
- No, I do not wish to be considered for this prize, even if the prize funder is specifically interested in my solution
- No, I do not wish to be considered for this prize, even if the prize funder is specifically interested in my solution
- Yes, I wish to apply for this prize
With over 500,000 Americans dead, the COVID pandemic has tragically shown how unprepared America’s digital infrastructure is for protecting the general population and at-risk groups against medical misinformation. Project Domino’s volunteer-based efforts have gone far: we’ve built world-class social media medical AI, forged top-tier partnerships, and even helped uncover QAnon leadership and African American voter suppression campaigns. We are now working on paths that scale to ubiquitous use.
Sadly, we can only stretch nights and weekends so far in how much we can serve our communities and execute on our medical misinformation technology opportunity. All too often, we must make difficult choices between working on interventions for communities like cancer patients and marginalized communities, AI infrastructure, new investigations, and fundraising. The problem is extreme: it is frustrating to lack the resources to act.
Success begets success. We are raising $500K-$2M from public and private groups. An initial grant would go far in providing resources and establishing fundraising momentum. An exciting aspect of our scalable approach is that supporters will not only help advance our AI technology and health interventions, but also bootstrap our approach to scaling innovation and sustainable revenue through open AI.
America’s approach to medical misinformation is akin to pre-industrial revolution times. With support, we are evolving misinformation AI to be as easy and accessible as spam filters and autocorrect. The security community has achieved that for problems like malware, and America urgently needs it done for medical misinformation.
- No

Founder & CEO

Co-Founder, BRCA Advocate, Security Researcher