The State of AI Ethics Report
Abhishek Gupta is the founder of the Montreal AI Ethics Institute (MAIEI) and a Machine Learning Engineer at Microsoft, where he serves on the CSE Responsible AI Board. He is a Visiting AI Ethics Researcher on the Future of Work in the International Visitor Leadership Program with the U.S. Department of State, the Responsible AI Lead on the Data Advisory Council for the Northwest Commission on Colleges and Universities, an AI Advisory Board Member for Dawson College, an Associate Member of the LF AI Foundation at The Linux Foundation, and a Faculty Associate in the Frankfurt Big Data Lab at Goethe University. Abhishek’s research focuses on applied technical and policy methods for addressing the ethical, safety, and inclusivity concerns of using AI in different domains. He has built the world’s largest community-driven public consultation group on AI ethics, which has made significant contributions to the development of many initiatives in responsible AI.
We are committed to elevating diverse and unheard voices around the world and meaningfully engaging them in shaping the technical and policy measures for the development and deployment of AI systems. Today, AI solutions are being developed without the inclusion of those most impacted by their use, while the wider audience struggles to find signal in the noise of the responsible AI ecosystem. We seek to equip and empower these stakeholders so they can have informed participation in how this technology is designed, developed, and deployed. The quarterly State of AI Ethics report serves as a handy map to orient diverse stakeholders in the rapidly evolving field of responsible AI, digesting and presenting technical and policy R&D in an accessible format that would otherwise remain buried in jargon and arcane research documents. Through globally informed and engaged participation, AI can be used to its full potential to usher in eudaimonia.
AI is expected to contribute upwards of $16 trillion to the global economy by 2030. A key challenge is that the adoption of AI could widen gaps among countries, companies, and people, further increasing income inequality and disparities in digital literacy. AI will have an impact on all aspects of society, from healthcare and education to finance, immigration, and more. Especially where the impact touches significant aspects of our lives, we need to ensure that these systems do not amplify existing inequities and entrench the status quo such that the already marginalized are harmed further. This is a global issue: millions of people are affected by how loans are allocated, how parole decisions are made, how freedom of speech is controlled via online content moderation, and more. While numerous organizations have been working on sets of AI principles, frameworks, and other theoretical guidelines, the missing piece now starting to surface is bridging the gap between proposed technical and policy measures and operationalizing them. The State of AI Ethics report offers easy navigation across the technical and policy domains, digesting and presenting information in an accessible and actionable manner.
The State of AI Ethics report (https://montrealethics.ai/the-state-of-ai-ethics-report-june-2020), published on a quarterly basis, is designed to be a definitive summary document that captures key developments in the fast-changing field of responsible AI from the perspective of our global community. Through our programs, our goal is to bring forward perspectives from diverse audiences that are oftentimes overlooked in the AI and ML communities, which are exclusionary by design, with their emphasis on heavy credentials and traditional academic backgrounds. We capture perspectives from our global community primarily through our regular open AI Ethics workshops, held every 2-3 weeks, which enable active citizen participation from around the world. Participants come from diverse backgrounds such as computer science, law, sociology, business, government, art, music, and philosophy. These workshops were previously limited to in-person events in Montreal and Toronto but, given the COVID-19 pandemic, are now open to everyone online regardless of geography. We have seen a marked increase in the geographical spread of participants since our switch in recent months to hosting online events. Our participants come from Canada and the United States, as well as Croatia, Germany, India, Mexico, the Philippines, South Africa, Switzerland, Tunisia, Ukraine, and the United Kingdom, to name a few.
Our State of AI Ethics Report is a pulse check on the state of discourse, research, and development of AI, serving as a policy guide and practical roadmap for researchers, practitioners, and policymakers who are making important decisions on behalf of their organizations and governments when considering the societal impacts of AI-enabled solutions. Viewed through a temporal lens, the quarterly State of AI Ethics Report also serves as a historical living document that captures the progress, challenges, limitations, and warning signs as humanity marches forward towards an increasingly algorithm-driven world. Our programs, designed to be truly inclusive with global participation, enable participants to work together and learn from each other, leveraging diverse and global expertise while pooling insights towards impactful solutions. For organizations and governments that have public requests for feedback (e.g. the Government of Scotland and their national AI strategy https://montrealethics.ai/response-to-scotlands-ai-strategy), our workshop approach offers an opportunity for partner organizations to receive comprehensive and in-depth feedback. The participation of these partners is critical in shaping which content forms the report and how it is presented so that it meets their needs head-on.
- Elevating understanding of and between people through changing people’s attitudes, beliefs, and behaviors
AI is expected to have a profound impact on our global economy, reshaping our communities and how we operate with one another, as well as with ourselves. Our goal with the quarterly State of AI Ethics Report, alongside our approach to global civil society community building, is to be the definitive guide on the societal impacts of AI as humanity marches forward towards an increasingly algorithm-driven world. We believe this strongly aligns with elevating understanding of and between people through changing people’s attitudes, beliefs, and behaviors, not only in the short term but in the decades to come.
We have been building the Montreal AI Ethics Institute community from the ground up since July 2017, with the core pillar being equipping and empowering diverse stakeholders and everyday citizens to meaningfully engage in shaping the technical and policy measures for the development and deployment of AI systems. What we observed was an explosion of interest in the field, which attracted many new entrants, each putting out new sets of guidelines, principles, frameworks, and so on. These were often overlapping and overwhelming for those who simply wanted guidance on taking these ideas and implementing them in their everyday work. This was particularly challenging for individuals whose primary job function isn’t responsible AI, yet who carry the additional responsibility of integrating it into their research and work. We wanted to create the definitive guide to help such people navigate this space by capturing the most significant developments in responsible AI in a jargon-free and accessible manner. The staff at MAIEI, along with our global community, are equal contributors in surfacing this knowledge base and making it accessible to the rest of the community.
We are passionate about this project and have been active participants in this domain for several years. However, as individuals from minority and non-traditional academic backgrounds ourselves, we have found that we weren’t always welcome additions to conversations on the technical and policy measures for building responsible AI. Many of these conversations happened behind closed doors in ivory-tower settings that hindered horizon-expanding perspectives and often excluded lived experiences, further perpetuating harm. We wanted to break that model open, not only to have unheard voices acknowledged but to showcase the tremendous value of incorporating lived experiences and diverse perspectives. To us, responsible AI development holds the same level of importance as democracy, in the sense that it affects us all; hence, it requires active and engaged participation from all of us to be done in a manner that benefits everyone. Furthermore, there is tremendous value in hearing from those closest to the problems that AI solutions are trying to address: they have the most cultural and contextual information to shape these systems in a way that impacts their respective communities as positively as possible.
Since July 2017, we have grown our MAIEI community to over 3,100 members and have hosted more than 50 workshops with many different partner organizations. Our engine of growth has been, and will always be, our regular AI Ethics workshops held every 2-3 weeks that enable active citizen participation from around the world. These workshops, previously limited to in-person events in Montreal and Toronto, are now open to everyone online globally. Previous workshops and examples of work we’ve contributed to include: The Government of Scotland’s AI strategy (https://montrealethics.ai/response-to-scotlands-ai-strategy); The Office of the Privacy Commissioner of Canada and their amendments to PIPEDA relative to AI (https://montrealethics.ai/response-to-office-of-the-privacy-commissioner-of-canada-consultation-proposals-pertaining-to-amendments-to-pipeda-relative-to-artificial-intelligence).
We keep conversations alive between workshops on our public Slack channel https://bit.ly/maiei-learning-community where anyone can join. We facilitate our programs and share resources within and across our community in an open-source and open-access manner https://montrealethics.ai/our-open-access-policy. We have Learning Communities that meet on Zoom every two weeks to enable our members to do deep dives on key topic areas in AI. Our Co-Create Program is a space for people in our community to find each other and co-develop responses to Calls for Proposals/Papers (CFPs) from conferences and academic journals. We do this to enable our members from all academic and non-academic backgrounds to work together on cross-disciplinary challenges by lowering the barriers to entry for people who don’t have experience in the traditional academic publishing model.
We had initially set out to build MAIEI in the mould of a traditional research institute: anchored in a physical location with close ties to an established university. We have since found that our strength lies in our independence, combined with our digital-first and open strategy for global community building (accelerated in large part by COVID-19). We are also a non-profit organization with a bootstrap startup mentality and have been operating on a lean budget for the past three years. MAIEI has largely been self-funded by the founders Abhishek Gupta and Renjie Butalid, with pro bono contributions from staff members. And yet, we have had considerable impact to date given our extremely limited financial resources. The hard part of building a global community from the ground up, along with credibility in the process, has already been done. We have established MAIEI as a dominant force and trusted resource in the global ethical AI space by virtue of our engagements with partner organizations ranging from the G7 Multi Stakeholder Conference on AI, the Dutch Government, Microsoft, and Shopify to The Mozilla Foundation, The Linux Foundation, the Oxford Internet Institute, and the United Nations AI for Good Summit, to name a few.
While many programs and organizations claim to place inclusion at their core, few are able to put that into practice when the rubber meets the road. Inclusion takes numerous shapes, and often-ignored dimensions include family commitments, time barriers, financial constraints, and immigration limitations. To truly empower local champions to elevate the voices of their communities in discussions on responsible AI, our team took a radical approach last summer and offered a unique remote-only, digital-first internship on AI ethics, with the goal of making it accessible along all the dimensions mentioned above. We were able to mix in Asian, African, European, and North American perspectives through interns who hailed from all of these regions. The results have been stellar, with these interns now helping to elevate voices in their communities, equipping and empowering individuals to actively participate in these discussions. As an example, one of the interns from our program last summer is now the Head of AI Ethics Policy for the Joint Artificial Intelligence Center (JAIC) at the U.S. Department of Defense, where she implements this ethos in her work every day.
- Other, including part of a larger organization (please explain below)
The State of AI Ethics Report is a quarterly publication produced by the Montreal AI Ethics Institute (MAIEI), a nonprofit research organization, with input from our global civil society community, that aims to be the definitive guide on the societal impacts of AI worldwide.
What makes this report unique is that most initiatives are geared towards researchers and practitioners who are already familiar with technical and policy jargon and who spend their time parsing dense research and policy documents. But there is an entire set of people who need access to the knowledge locked in these works so that they can apply it in their everyday research and work and truly build responsible AI systems.
The report serves as a beacon for turning theory into practice, which will be essential as AI is used in more contexts and applications worldwide. Initial feedback shows that educators and policymakers alike have found it to be essential reading, helping them quickly catch up with the rapidly evolving field of responsible AI while still capturing the breadth and depth of research and development in the field through an accessible format and language.
Through the act of gathering insights and translating them into accessible language, both the staff at MAIEI and the community at large have benefitted from thinking more critically about the changes that have taken place in the field of responsible AI over the previous quarter.
In its current format, the report has helped demarcate the different arenas within which change is taking place, offering a bird’s-eye view of the pace of change, which is unevenly distributed across the subdomains of responsible AI.
Concretely, this has given the community a voice to share their views and concerns about this space, while working together to dive deeper into the challenges facing the domain and surfacing insights from lived experiences that would otherwise be lost in the tremendous cross-talk and overlapping work being done within small, inaccessible circles.
Our pilot with the inaugural report has shown how we can leverage the power of community to bring forth meaningful insights that go beyond those created by experts alone, grounding this work in the context and culture local to these communities. This will be crucial when it comes to operationalizing responsible AI principles in practice.
- Women & Girls
- LGBTQ+
- Minorities & Previously Excluded Populations
- Persons with Disabilities
- 8. Decent Work and Economic Growth
- 9. Industry, Innovation, and Infrastructure
- 10. Reduced Inequalities
- 12. Responsible Consumption and Production
- 16. Peace, Justice, and Strong Institutions
Current number: ~3,100 people
Number in one year: ~10,000 people
Number in five years: ~500,000 people
Our goal with this project within the next year is to showcase how useful a pulse check can be in guiding policy and research efforts so that they focus on the most important initiatives and problems in the space, rather than spending resources and time on things that will have minimal impact on how responsible AI systems are developed and deployed. We have already found that the inaugural report is being widely consumed in the government and education sectors as mandatory reading material for policymakers and advisors to government offices. In the education sector, professors are using it as reference material to orient their students to the most relevant issues in the space and to help them gain a better understanding of how to build and deploy responsible AI systems.
Our goal over a five-year horizon is for the report not only to become a regular feature consumed akin to the Mary Meeker Internet Trends report, but also to serve as a historical record mapping the trends in this domain, in terms of challenges, solutions, attitudes, and research focus areas, over a long time horizon, so that it can become a leading indicator that catches emerging problems early.
One of the biggest barriers we face is financial. Despite being a minimally funded organization, we have created impact comparable to that of larger organizations with seven-figure endowments and funds. So far, we have stretched our existing dollars far and worked creatively with our global community to accomplish these lofty goals while remaining nimble and agile.
Our impact could be limited by our reach, which at a certain point becomes a function of the dollars available to fund further research, editorial efforts, community compensation, and the professional production and distribution of the report so that it reaches the places and channels through which we want to make an impact.
Our plan to overcome these barriers is currently predicated on expanding our research staff capacity through volunteer efforts from our community, which has been quite generous in sharing time, resources, and expertise to help us achieve these goals together. In addition, we have been actively pursuing grant opportunities with the explicit purpose of better equipping ourselves to scale our impact and meet the challenges described in the previous question head-on. Specifically, we have been looking at smaller, targeted grants specific to our local ecosystem that meet parts of our needs through various initiatives.
We have also been quite active in partnering with organizations that share our goals, scaling our impact through their networks and initiatives. Specifically, our partner network has amplified the impact of our work by getting it in front of the right audiences, who can then use the report in critical decision-making when building and deploying responsible AI systems.
We are partnered with a large number of organizations today; they are listed below:
- Australian Human Rights Commission
- European Commission
- G7 Multi Stakeholder Conference on Artificial Intelligence
- Government of Scotland
- Office of the Privacy Commissioner of Canada (OPCC)
- Prime Minister’s Office, New Zealand
- Treasury Board Secretariat, Canada
- ABB
- ARUP
- Deloitte
- Element AI - Espace CDPQ
- Expedia
- EY
- Fasken
- Lightspeed
- Maluuba
- Microsoft
- OVH
- PwC
- SAP
- Shopify
- Stradigi AI
- Acorn Aspirations
- AI Global
- Alberta Machine Intelligence Institute (AMII)
- DEFCON AI Village
- LF AI Foundation at The Linux Foundation
- Mechanism Design for Social Good (MD4SG)
- ML Retrospectives
- Montreal International
- Montreal Neurological Institute (MNI)
- Montreal NewTech
- Mozilla Foundation
- NeurIPS
- UpstartED
- College Ste-Marcelline
- Concordia University / District 3
- Dawson College
- Goethe University / Frankfurt Big Data Lab
- McGill University / Dobson Centre for Entrepreneurship, School of Continuing Studies, Building 21
- MILA
- Northwest Commission on Colleges and Universities (NWCCU)
- OCAD U
- Oxford Internet Institute
- Université de Montréal
- The Banff Forum
- International Network for Government Science Advice
- Partnership on AI
- United Nations / AI for Good Global Summit
- World Economic Forum
We do not yet have a business model
The State of AI Ethics report is meant to be an open-access and open-source initiative, and as such we do not expect it to generate any revenue; the expenses related to its production, publication, and distribution will always need to be funded through grants.
We have not raised any funds to date; all expenses related to the production, publication, and distribution of this report are funded by the founders and staff members of MAIEI.
We seek to use grant mechanisms to fund this effort and foresee USD 25,000 over the next 12 months as crucial to accelerating the impact of this work.
Currently, we do not track any expenses explicitly attributed to the report, since they are subsumed within MAIEI’s general operating expenses.
We are applying to the Elevate Prize for two reasons:
1. We believe that the Solve network consists of people who are deeply concerned about the impacts that emerging technology has on society, which aligns perfectly with what we are trying to achieve. We see this prize as a unique opportunity to highlight the disparities and gaps that exist in today’s conversations around building responsible AI systems. In applying, we seek to leverage the funds and the network of Solve to further amplify the impact of our work by empowering even more unheard and ignored voices who are directly impacted by the decisions made by AI systems.
2. When getting our State of AI Ethics report in front of decision-makers, we understand how important it is that the report be presented as a meaningful decision-support aid that helps both technical and policy audiences adequately capture the concerns in the space, so they can make policy and technical decisions grounded in the lived experiences and challenges faced by everyday people. Through the Solve network, we believe we can bring this work in front of the right people, who can use it to make truly impactful decisions.
- Funding and revenue model
- Marketing, media, and exposure
From a funding perspective, we believe the prize will act as an accelerant for an artefact that is already making a significant impact in the education and policymaking domains. With greater funding and support behind this project, we will be able to bring in more researchers and compensate our community’s efforts, scaling the impact the report can create.
Getting this report in front of key decision-makers in government, industry, and academia will be critical to the success of this initiative, and through the guidance, support, and funding from Solve, we believe we can achieve the goals stated in this application more quickly.
Ideally, we would like to expand our partnerships to other governmental, industry, and academic institutions beyond those highlighted in our existing partner network.
Specifically, it would be great to work in more depth with multilateral organizations like the UN and WEF, along with large academic institutions and their networks, such as MIT, Harvard, NUS, Peking University, the IITs in India, and AIMS in Africa, so that we can bring this important research in front of emerging scholars and practitioners who can incorporate it into their work.
Founder | Machine Learning Engineer