Interpret Me
Black youths' expressions in social media posts are often misinterpreted by reporting professionals due to personal biases and a lack of socio-cultural understanding and knowledge. These misinterpretations create serious challenges for Black communities, including incarceration and the denial of access to education, employment, housing, and welfare benefits stemming from negative digital footprints.
Interpret Me is an immersive simulation tool that helps professionals working in police stations, media, and school district offices better understand social media discourse by youth in their communities. Interpret Me is embedded with AI-generated, human-in-the-loop feedback for stakeholders who engage with social media and online predictive reporting algorithms to make decisions about speculative social media posts.
Interpret Me significantly minimizes the personal biases of professionals who create negative digital footprints for Black communities. It promotes a safe and inclusive digital space for Black communities, enabling equal access to education, employment, housing, and welfare benefits.
Jelani Henry of Harlem was arrested on two counts of murder, with the evidence being Facebook photos suggesting an affiliation with a local gang. He spent 19 months in Rikers for a crime he did not commit. Lamanta Reese shared his life on YouTube, sometimes posting videos of himself and others taunting rival crews on the South Side of Chicago. At the age of 19, Lamanta was shot 11 times on his front porch. Some believe that a rival crew interpreted a smiley-face emoji posted by Lamanta as a slight against the rival's mother. Even after the incident, the true meaning of the post remains unclear.
Social media posts can be incredibly difficult to understand because they are often highly contextualized and hyper-local, and misunderstandings have real consequences: interpersonal violence, police officers arresting innocent people, administrators barring students from opportunities, and pathologizing stories reproduced about vulnerable communities. Individuals in these communities are particularly subject to biased and unjust punitive actions based on their social media content, creating what has been termed the “Digital Stop and Frisk”. These monitoring and interpretation processes, alongside many other systemically racially biased factors, compromise the livelihoods of community members.
Interpret Me is a web-based tool that promotes self-reflectivity in understanding and interpreting social media posts through a community-centered approach, addressing potential biases and harm to vulnerable communities.
To interpret social media with a greater focus on context, bias reflection, and restorative justice, we aim to develop and refine Interpret Me, a learning simulation intervention platform that trains law enforcement officials, journalists, and educators to recognize racism in their interpretations of social media posts by Black people. By partnering with the Stanford Social Media Lab and a community organization that will act as an advisor and co-designer throughout the training development process, we will ensure the intervention is community-driven.
Immersive simulations offer a powerful suite of tools to help adults, such as those working in police stations, media, and school district offices, better understand social media discourse by youth in their communities. Interpret Me is embedded with AI-generated, human-in-the-loop feedback for stakeholders who engage with social media and online predictive reporting algorithms to make decisions about speculative social media posts. Our simulations will provide continuous feedback and self-reflection, allowing users to learn and establish a new vocabulary for culturally aware and ethical social media risk assessment.
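As one rough illustration of how such a human-in-the-loop cycle might be wired together, the Python sketch below models a single simulation step: the trainee interprets a post, receives AI-generated feedback grounded in community-sourced context, and records a self-reflection. Everything here is a hypothetical stand-in; the names (SimulationStep, generate_feedback, run_step) and the stubbed feedback function are illustrative, not the platform's actual implementation.

```python
# A minimal sketch of the human-in-the-loop feedback cycle, assuming a
# simple interpret -> AI feedback -> self-reflection loop. All names are
# hypothetical, and the AI call is stubbed rather than bound to any model API.
from dataclasses import dataclass


@dataclass
class SimulationStep:
    post_text: str                 # the social media post shown to the trainee
    community_context: str         # context supplied by community co-designers
    user_interpretation: str = ""  # the trainee's reading of the post
    ai_feedback: str = ""          # AI-generated feedback, reviewed by humans
    reflection: str = ""           # the trainee's self-reflection afterward


def generate_feedback(step: SimulationStep) -> str:
    """Stand-in for the AI feedback model: in practice this would weigh the
    trainee's interpretation against community-sourced context; here it just
    returns a canned reflective prompt for illustration."""
    return (f"Does the reading '{step.user_interpretation}' account for this "
            f"context: {step.community_context}?")


def run_step(step: SimulationStep, interpretation: str, reflect) -> SimulationStep:
    """One loop of the simulation: interpret, receive AI feedback, reflect."""
    step.user_interpretation = interpretation
    step.ai_feedback = generate_feedback(step)   # AI in the loop
    step.reflection = reflect(step.ai_feedback)  # the human closes the loop
    return step


if __name__ == "__main__":
    step = SimulationStep(
        post_text="smiley-face emoji under a memorial post",
        community_context="emojis can mark grieving, not mockery, here",
    )
    done = run_step(step, "taunting a rival crew", lambda fb: f"Noted: {fb}")
    print(done.ai_feedback)
```

In a real deployment, the stubbed feedback function would be replaced by a model whose outputs are reviewed by community advisors before reaching trainees, which is what keeps the human in the loop.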
Interpret Me will influence police officers, educators, and journalists in several important ways, including a) expanding their knowledge of the contexts and cultural conventions underlying social media content, b) giving them practice interpreting nuanced social media content, c) helping them address and reflect on their own personal and interpretive biases, d) restoring equitable practices of human rights and restorative justice within themselves and their professions, and ultimately, e) supporting marginalized communities in maintaining a safe digital footprint that preserves access to social justice and welfare benefits through community-centered interpretation of social media posts.
- Actively minimize human and algorithmic biases, particularly in healthcare, education, and workplace settings.
The personal biases of reporting professionals result in the misinterpretation of social media posts by BIPOC individuals, creating negative digital footprints that limit their access to education, employment, and welfare benefits.
Interpret Me is embedded with AI-generated, human-in-the-loop feedback for stakeholders who engage with social media and online predictive reporting algorithms to make decisions about speculative social media posts. Our simulations will provide continuous feedback and self-reflection, allowing users to learn and establish a new vocabulary for culturally aware and ethical social media risk assessment.
Interpret Me enables BIPOC communities to maintain a safe digital footprint and access social justice through community-centered interpretation.
- Pilot: An organization deploying a tested product, service, or business model in at least one community.
Interpret Me has been prototyped and tested with stakeholders. We are ready to pilot with reporting professionals (police officers, educators, and journalists) to address technical and practical challenges. After successful completion of the pilot, Interpret Me will be scaled as a web- and app-based system that these professionals can use on a subscription basis.
The Interpret Me app and platform can be customized for any stakeholders who actively engage in interpreting online and social media content, such as media companies, those in the criminal justice system, school districts, non-governmental organizations, and social media companies.
- A new application of an existing technology
Interpret Me aims to humanize social media content and help stakeholders learn about restorative alternatives to punitive action. This work will engender trust and strengthen relationships between professional stakeholders and the communities being surveilled online, promoting just and equitable reporting in law enforcement, media, and schools. No other tech tool offers multi-stakeholder engagement with community participation to prevent bias and ensure justice and equity in the reporting process.
- Artificial Intelligence / Machine Learning
- Behavioral Technology
- Minorities & Previously Excluded Populations
- 10. Reduced Inequality
- 16. Peace, Justice and Strong Institutions
- 17. Partnerships for the Goals
- California
- Connecticut
- Massachusetts
- New Jersey
- New York
- California
- Connecticut
- Massachusetts
- New Jersey
- New York
- Other, including part of a larger organization (please explain below)
SAFELab is an interdisciplinary research lab based at the Columbia School of Social Work in New York City.
We have two full-time team members, four part-time team members, and two volunteers.
The interdisciplinary Interpret Me team is composed of social workers, data scientists, psychologists, and communication experts who will analyze users' responses for common assumptions about posts, historically biased language in social media flagging, and the extent to which users fairly consider broader details in line with a post's context (e.g., sarcasm, loss, grieving, song lyrics), as sketched below.
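One hypothetical way such a response analysis could be structured is sketched below: a trainee's interpretation is checked against a lexicon of historically biased flagging terms and against the contextual cues present in the post. The word lists and function names are illustrative placeholders, not the lab's actual coding instruments.

```python
# A hypothetical sketch of the team's response analysis, assuming a simple
# lexicon-based check. The term lists are illustrative placeholders only.
BIASED_FLAG_TERMS = {"thug", "gang-affiliated", "menacing"}   # illustrative
CONTEXT_CUES = {"sarcasm", "loss", "grieving", "song lyric"}  # context details


def analyze_response(interpretation: str, post_contexts: set) -> dict:
    """Count biased terms the trainee used and which of the post's contexts
    they engaged with versus missed."""
    text = interpretation.lower()
    flagged = sorted(t for t in BIASED_FLAG_TERMS if t in text)
    considered = sorted(c for c in (post_contexts & CONTEXT_CUES) if c in text)
    return {
        "biased_terms_used": flagged,
        "contexts_considered": considered,
        "contexts_missed": sorted(post_contexts - set(considered)),
    }


print(analyze_response(
    "Looks gang-affiliated to me, though it could be a song lyric.",
    {"song lyric", "grieving"},
))
# -> {'biased_terms_used': ['gang-affiliated'],
#     'contexts_considered': ['song lyric'],
#     'contexts_missed': ['grieving']}
```

In practice, the team would pair automated checks like this with qualitative review, since biased interpretation rarely reduces to a fixed word list.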
SAFELab believes in and practices safety, inclusion, and equitable opportunities for diverse communities.
- Organizations (B2B)
- Business model (e.g. product-market fit, strategy & development)
- Public Relations (e.g. branding/marketing strategy, social and global media)
- Product / Service Distribution (e.g. expanding client base)
- No, I do not wish to be considered for this prize, even if the prize funder is specifically interested in my solution
- No, I do not wish to be considered for this prize, even if the prize funder is specifically interested in my solution
- Yes, I wish to apply for this prize
Interpret Me is a web-based tool that promotes self-reflectivity in reporting professionals, helping them understand and interpret social media posts through a community-centered approach that addresses potential biases and harm to vulnerable communities.
Interpret Me will influence police officers, educators, and journalists in several important ways, including a) expanding their knowledge of the contexts and cultural conventions underlying social media content, b) giving them practice interpreting nuanced social media content, c) helping them address and reflect on their own personal and interpretive biases, d) restoring equitable practices of human rights and restorative justice within themselves and their professions, and ultimately, e) supporting marginalized communities in maintaining a safe digital footprint that preserves access to social justice and welfare benefits through community-centered interpretation of social media posts.
Interpret Me will serve as an anti-racist gatekeeper for reporting professionals working on, with, and for online and social media platforms.
- No, I do not wish to be considered for this prize, even if the prize funder is specifically interested in my solution
- Yes, I wish to apply for this prize
- No, I do not wish to be considered for this prize, even if the prize funder is specifically interested in my solution
- Yes, I wish to apply for this prize
- No, I do not wish to be considered for this prize, even if the prize funder is specifically interested in my solution
- Yes, I wish to apply for this prize
- No, I do not wish to be considered for this prize, even if the prize funder is specifically interested in my solution
- Yes, I wish to apply for this prize

Associate Director