AI Ethics Non-Profit Ecosystem: Where is the Diversity?

Mia Shah-Dand
5 min read · Feb 26, 2021

When Ian Moura and I first started compiling a list of non-profits in the AI Ethics space, our intent was to map the ecosystem and understand what type of work was being funded and by which organizations. That’s when we noticed that the people leading these organizations were overwhelmingly male, so I decided to take a closer look to validate our initial observation. (You can read some of our thoughts on the need for greater accountability for AI Ethics gatekeepers here.)

It wasn’t our imagination. Women account for only 1 in 4 members of these leadership teams. While that number jumps to nearly 40% for boards and advisory councils, the increase is driven mainly by the high representation of women on the UCLA C2i2 and MIT Media Lab advisory councils.

Although they are all immensely qualified, some of these women are connected to the donors/founders either personally (for example, Max Tegmark and his wife Meia Chita-Tegmark are both on the leadership team of the Future of Life Institute) or professionally (for example, Andrew Critch and Anna Salamon co-founded the Center for Applied Rationality (CFAR)), which further highlights the power of networks and personal connections in gaining access to these organizations.

We also noticed that the same names showed up multiple times across various organizations in influential roles. For example, Stuart Russell is a founder, board member, faculty advisor, or holds another influential role at no fewer than 6 of the 30 organizations we reviewed, while Nick Bostrom holds a leading role at 4. This points to a broader trend: a handful of white men wield significant influence over millions, and potentially billions, in AI Ethics funding in this non-profit ecosystem, which is eerily similar to the for-profit AI tech ecosystem.

Some of these organizations, centers, and initiatives are housed within academic institutions and universities, while others are stand-alone entities. Many organizations, especially those housed within prestigious institutions like Stanford and MIT, lack transparency and accountability around their funding sources.

Despite visible statements about transparency on their websites, the operations of these non-profits are black boxes. They increasingly appear to be billionaires’ pet projects, used to launder the reputations of their wealthy donors. For example, Blackstone Group CEO Stephen Schwarzman, who sits on the Stanford HAI Advisory Council, was recently called out by participants in a master’s program he funded for his support of politicians who refused to certify President Joe Biden’s election, a clear repudiation of democratic norms.

Even when organizations use seemingly objective criteria for allocating funds, the outcomes still appear skewed in favor of white men, signaling that a leadership team’s own implicit and explicit biases may be directing funds away from non-white grantees. According to its website, the Survival and Flourishing Fund (SFF) was initially funded in 2019 with a donation of approximately $2 million from philanthropist Jaan Tallinn. On closer inspection, we found that many of SFF’s past grants went to other non-profits founded by and/or affiliated with Tallinn, which are also mostly led by white men.

In Q4 2020, SFF hosted a competition for individuals seeking a fixed amount of funding for projects that advance its mission and allocated approximately $350k, which also went mostly to projects led by young white men. This is just one of many examples of how the homogeneity of these leadership teams may be influencing the allocation of funding disbursed by these organizations.

Many of these non-profits’ mission statements outline a commitment to addressing “existential risks” in the future while simultaneously excluding the marginalized communities that face existential risks from these same AI technologies today.

Where do we go from here?

While for-profit tech companies continue to cater to their shareholders, we expect better from these non-profit AI Ethics gatekeepers, who should be held to a higher standard given their tax-exempt status. We shouldn’t act as though donors’ monetary contributions to research on the ethics of AI are somehow separate and distinct from the ethics of the funders influencing that work. If anything, Google’s recent dismissal of Dr. Timnit Gebru, an eminent Black AI researcher in this space, highlights the need for more transparency and increased accountability.

We call on these highly influential organizations and their donors, including their academic sponsors, to make a genuine commitment to ethics and inclusion through annual reporting of diversity metrics, specifically on race and gender. We also ask for more transparency and consistency in the annual reporting of all significant donors along with their donation amounts. (Here’s looking at you, Stanford!)

We firmly believe there needs to be more transparency about how and why people are selected to serve on the boards of these powerful and well-funded organizations, which means having a transparent recruiting and appointment process. We cannot overstate the role of “networking” and peer/personal connections in maintaining systemic racism, sexism, and other forms of inequality, which makes it even more critical that these organizations disclose how they appointed their boards and leadership teams, and especially whether donations influenced those appointments.

If the powers-that-be at these organizations are not inclined to be more inclusive, we propose legislation similar to California’s SB 826, which introduced a legal requirement to appoint women to corporate boards. Similarly, we recommend that AI Ethics non-profits with significant funding be required to appoint at least one woman to every leadership team and to paid board positions. Inclusion doesn’t always translate into influence, but it is the first step toward a more equitable state.

While the data we shared is merely a snapshot of the overall AI Ethics non-profit ecosystem, we hope our work will spark much-needed conversations about who is represented at these organizations and who influences their funding priorities. Eloquent speeches and PR statements are not enough to mitigate the risks to marginalized communities when much-needed funds are diverted to solving some billionaire’s prioritized risks. For many in marginalized communities across the world, that is a luxury we can ill afford during these perilous times.

Methodology:

We’ve uploaded a small subset of data from our larger project here, which includes the list of women and Black people in leadership roles at 30 AI Ethics non-profit organizations. “Women” and “Black” classifications were based on visual inspection and a review of the online profiles and biographical information of the leadership and board members of these organizations. We included fiscally sponsored projects with significant funding along with organizations registered as non-profits or charities in their country of incorporation. We shortlisted organizations whose stated area of focus was AI safety, ethics, societal implications of AI or computing systems, algorithmic accountability, existential risk from AI, governance of AI, or similar.

The donation data was gathered from public sources, including websites, press releases, and the most recently filed tax documents, where available. There are no standard titles, structures, or clear roles/responsibilities at these organizations, so we had to use our best judgment in determining who was responsible for and influential in guiding activities and funding at these organizations.

AUTHOR’S NOTE:
It’s our sincere hope that folks from the Black community will use the compiled data from this self-funded project to lead a much-needed conversation on the need for more representation.
In solidarity.

