Dr. Timnit Gebru, Big Tech, and the AI Ethics Smokescreen

Mia Shah-Dand
Dec 8, 2020

This past week, the news that Google had fired Dr. Timnit Gebru, the eminent and beloved AI Ethics scholar, roiled the industry and highlighted the unsavory reality of being Black and ethical in a space dominated by powerful white men. It revived traumatic memories for many women of color who have faced gaslighting, exploitation, and erasure in the toxic tech industry. It also brought to light the broader issue of the credibility and objectivity of AI ethics research funded by big tech.

Earlier this year, my colleague Ian Moura and I called out how elite institutions, the self-appointed arbiters of ethics, are themselves guilty of racism and unethical behavior with zero accountability. A recent study found that a significant number of faculty at top universities have received some form of financial support from big tech. The insidious influence of big tech shows up in the framing of AI Ethics research, most of which focuses on solving ethical issues in a way that lets AI development continue unabated. Much of it centers on risk mitigation on behalf of tech companies rather than the well-being of marginalized communities.

This incident coincided with our annual summit, where we brought together women working in the AI Ethics space to learn from each other and celebrate the lesser-known voices in this field. As the selection committee was vetting the annually published 100 Brilliant Women in AI Ethics™ list, there was a vigorous debate over how to decide whether someone working for big tech with “ethics” in their title was doing genuine work or just engaging in ethics washing. How many have the courage to call out unethical tech developed by their employer, or to admit outright that the right solution is to ban said technology rather than try to redeem it?

Every so often these companies trot out AI Ethics luminaries to make an eloquent speech on the need for more ethical AI, but when experts like Dr. Gebru point out the ethical flaws of these technologies, they are attacked, discredited, and discarded with impunity. The inevitable conclusion is that AI Ethics initiatives by big tech are designed to make problematic tech more palatable and are used merely as a smokescreen to hide their transgressions.

Earlier this year, the AI Ethics world rejoiced as IBM left the facial recognition business and other tech companies signaled their willingness to follow suit. This glimmer of hope was due to the hard work of Black scholars like Dr. Timnit Gebru, Joy Buolamwini, and others. The past few years have seen a remarkable number of highly visible employee pushbacks and protests against surveillance technologies used to track and incarcerate marginalized groups. However, as media attention waned and public pressure lessened, the tech companies went back to business as usual and are again courting deep-pocketed government agencies with lucrative new contracts for the same malevolent technologies.

Dr. Gebru’s firing is the latest in a series of efforts by big tech to squelch dissent within their ranks. Last year, Google allegedly fired multiple employees for worker activism, and Meredith Whittaker, the co-founder of the AI Now Institute, parted ways with Google when all her paths to career progression at the tech giant were blocked after she led the employee walkout demanding structural change. Other tech giants are increasingly engaging in union-busting activities, and there have been disturbing reports that Amazon has hired spies to surveil its workers and track labor movements.

With so many powerful forces working to suppress marginalized voices, how can we make any meaningful progress on AI Ethics?

We should start by protecting ethical whistleblowers like Dr. Gebru and strengthening our labor laws so they protect workers, not employers. The ‘hire and fire’ culture of the tech industry is inherently dangerous, as it enables abuse of workers at the hands of powerful tech companies. On Wednesday, the National Labor Relations Board (NLRB) filed a surprise complaint accusing Google of illegally surveilling and firing two workers who tried to form a union at the tech giant. While this sounds promising, resolving such complaints takes a long time.

Getting public support for such protections and resolutions would require articulating the harms in a way the public can understand and rally behind. It would mean dismantling the stranglehold of AI Ethics gatekeepers in tech and academia, including more marginalized voices in the development and use of AI technologies instead of only exploiting them for labor and data, and forcing big tech to share the resulting benefits with everyone instead of restricting access to the wealthy and privileged.

Technologies reflect the priorities and ethics of those building and funding them. We need to stop acting as though these technological outcomes are somehow separate from the environments in which the technology is built. Elon Musk, who forced workers to go back to work during the pandemic, has now surpassed Bill Gates as the second-richest man. We need to dismantle the incentive structures designed to reward those who benefit from the exploitation of workers and stop glorifying the hoarding of wealth as if it were some heroic accomplishment. There is an urgent and critical need to divert resources to technologies that benefit humanity over the bottom line. We need to nurture alternative funding sources so that AI Ethics research doesn’t become the pet project of some billionaire or a redemption exercise for big tech.

Those who suggest that Dr. Gebru should just find another job are missing the point. When there are no safeguards for highly credible, well-known scholars who speak up against unethical practices, what hope is there for lesser-known voices from marginalized communities? Even if Dr. Gebru and others were to leave Google, the dominance of this industry by a handful of tech companies, which have only grown more powerful during the pandemic, means the number of opportunities is shrinking very quickly.

Audre Lorde said, “For the master’s tools will never dismantle the master’s house. They may allow us to temporarily beat him at his own game, but they will never enable us to bring about genuine change.”

While it may seem naïve to try to change a powerful company from the inside, having ethical voices from marginalized and minoritized communities with a strong moral compass inside these companies and institutions may, at the very least, slow the onslaught of questionable technologies and buy us some time to collectively figure out other sustainable solutions.

Dr. Gebru and others are the last line of defense in our quest for ethical and inclusive tech. If we don’t stand up for them now, soon there will be no one left to fight for us.


Mia Shah-Dand

Responsible AI Leader, Founder - Women in AI Ethics™ and 100 Brilliant Women in AI Ethics™ list #tech #diversity #ethics #literacy