Bias and Disinformation in the AI Age

Mia Shah-Dand
Jun 10, 2024


Transcript from my talk at “Artificially Informed” summit hosted by Slow News on May 21, 2024 at the Fondazione Giangiacomo Feltrinelli, in partnership with The Ordine dei Giornalisti Lombardia (part of the Italian Journalists’ Guild).

This is my first visit to Italy. People are always surprised when I tell them that I’ve never been before, but as I like to say, I was saving the best for last. It’s wonderful to be here. Thank you to Alberto for the invitation and to Anna for the flawless organization.

I have a story to share with you. Some years back, before Covid, I was on a business trip to Japan. I met up with a college friend in Tokyo, who suggested we go to Roppongi. As some of you may already know, Roppongi is known for its nightlife and is a good place to go dancing.

At a club, I met a tall and handsome man. He was Italian. So, we started dancing together but there was one small problem. He didn’t speak a word of English and I didn’t speak a word of Italian. Still, we talked all evening — how? We used Google Translate.

It was a wonderful reminder of how technology can bring us together in powerful and magical ways. But technology itself is not magical or mysterious. Technologies like AI are designed, developed, and deployed by human beings. They are not built in a vacuum, and they reflect the values and priorities of the people building them.

I have worked in Silicon Valley for many years. I have worked for companies like Google, I was a tech blogger, I hosted emerging-technology meetup groups, and I started my own tech consultancy, Lighthouse3, over a decade ago. For years I noticed that the people who received credit for building technologies like AI were men, while the women working on responsible and ethical AI were not getting any recognition for it. In 2018, I published a list, “The 100 Brilliant Women in AI Ethics,” to recognize these women. Every year since, we have published this list with 100 new experts and rising stars, built an online directory, and hosted hundreds of diverse speakers to challenge the myth that men are the default AI experts.

Why does diversity matter?

Women represent half of humanity, and the majority of the world’s population doesn’t live in the Western world — so any technology that doesn’t include their perspectives and doesn’t represent their needs is fundamentally flawed.

At Women in AI Ethics, we are changing the way we think about who counts as an expert and what type of expertise matters. We are going beyond engineers and computer scientists, i.e. the builders of AI, to include social scientists, policymakers, human rights activists, labor organizers, lawyers, and many others making sure these technologies are safe and beneficial for all of humanity.

You may already have heard from other experts that machine learning models are trained on large datasets. Many of the popular datasets are scraped from the internet and have historical biases embedded in them. For example, facial recognition systems are less accurate for dark-skinned women, and images generated by generative AI reflect gender stereotypes, such as women as nurses and men as doctors. Since we are at an event for journalists, let’s talk about how the lack of diversity in AI affects today’s topic — news and information. AI poses many challenges, but since I have limited time, let’s focus on three key ones:

Is the information — factual or biased?

Are you an ethical person? Of course. We all believe that we are ethical, or at least we try to be. Similarly, we are all biased, even as we try to be fair and balanced. Our biases are embedded in machine learning systems, and biases influence the way journalists source information.

In movies and media, AI is frequently portrayed as white or male. White men are quoted as AI experts more often than women by the media — except when it comes to AI assistants, which tend to be given female personas, like Alexa or Siri.

Similarly, when it comes to coverage of technology news, there is a tendency to quote the men who run tech companies — the vendors and sellers of AI technology. It’s only recently that we are seeing growing coverage of the problems with AI, such as its environmental impact and the lack of consent for the content in training datasets. We need more inclusion of diverse perspectives. Journalists have an important role in deciding which voices matter and which voices are heard. By including the lesser-heard perspectives of those impacted by AI, we ensure that the public gets a well-balanced view of important issues, not only the narratives of a privileged few.

Is the information — real or is it made up?

Machine learning systems are prone to “hallucinations” — a cute way of saying that AI systems fill knowledge gaps with false or fictional information. Last year, a professor was falsely accused of sexual assault based on a fabrication by a generative AI model. A lawyer presented AI-generated legal cases in front of a judge; those cases didn’t exist. In Texas, students were given a failing grade because their teacher wrongly suspected that they had used generative AI to write their papers. The list of failures goes on.

Much of online search is being replaced by AI-generated results, and very soon it will be impossible to ascertain what is or isn’t true, as tools to identify machine-generated content are lagging. One way to protect against this is to go to the human source, verify AI-generated content, and check the provenance of the information: who created this content, and what is the source? We need more investment in robust fact-checking tools and new regulations as AI-generated disinformation spirals out of control.

Last, who is on the other side of the screen — is it a real person or a fake?

One benefit of an old-fashioned encounter at a club in Roppongi is that I could verify that the person in front of me was a human being. There are no such safeguards online. You don’t know who is on the other side of a phone call, or whether the person in a video is a deepfake. Voice cloning is increasingly being used for scams, and deepfakes are being used to spread disinformation. As AI systems get more sophisticated and more widely used, it will only get harder to tell the difference.

This is a wake-up call for all of us, and especially for the tech industry, to move away from reckless AI and towards responsible AI. Instead of moving fast and breaking things, it’s time for the tech industry to move slow and build things that keep humans safe and help us thrive. At the end of the day, AI is supposed to serve humanity, not the other way around.

Learn more about Women in AI Ethics at https://womeninaiethics.org/


Mia Shah-Dand

Responsible AI Leader, Founder - Women in AI Ethics™ and 100 Brilliant Women in AI Ethics™ list #tech #diversity #ethics #literacy