3 Examples of Why Diversity & Inclusion are Important for AI Development

Artificial intelligence technology has gained popularity and helped many people over the years. As AI algorithms are used more frequently, some concerning issues have come to light. Some algorithms have been built with only a specific population in mind, excluding those who are underrepresented in the world of technology. As a result, these algorithms perform poorly when presented with a diverse population, leaving consumers disenchanted with the possibilities of AI technology. Companies should value diversity in the artificial intelligence field because AI algorithms perform at their best when they are trained on diverse datasets. Representation matters: diverse teams help companies see their blind spots, create algorithms that work for more people, and build products that people actually want to use.

1) Low-Diversity Datasets Lead to Poor Performance in the Real World

AI technology should be trained on a diverse dataset so that when an algorithm is introduced to a diverse population it performs to the best of its ability. For example, consider an algorithm that was trained to turn low-resolution pictures into high-resolution pictures. When the algorithm went public, an issue was found: when it was given pictures of people of color (African American, Asian, Hispanic, etc.), it produced a high-resolution photo of a white person. (https://techxplore.com/news/2020-06-ai-tool-low-pixel-realistic-images.html)


The dataset the algorithm was trained on, and the algorithm itself, came into question. Many people wondered how the issue went undetected until the tool went public. In testing, this problem should have been caught by checking the tool against pictures of diverse people to make sure it worked for everyone. Denis Malimonov, the creator of this algorithm and a white Russian male, did not consider whether it would work on photos of people of color. A diverse team creating and testing the algorithm would have been more likely to verify that it worked for all team members, and therefore for a more diverse population. If creators do not value diversity, do not train the neural network on a diverse dataset, and do not confirm that it works properly for everyone, entire populations will be left out. In this case, Mr. Malimonov did not prioritize diversity and was embarrassed to see that the algorithm failed when given pictures of people of color. This is not an isolated incident; there have been several instances where AI algorithms failed because their training dataset was not diverse.
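The kind of pre-release check described above can be made concrete. The sketch below is purely illustrative (the `upscale` model and `similarity` metric are stand-ins, not the real tool's internals): it runs a model over a test set tagged by demographic group and reports an average quality score per group, so a gap like the one in this story would surface before launch.

```python
# Hypothetical sketch of a pre-release fairness check: evaluate the model
# on test examples tagged by demographic group and compare average quality.
# `upscale` and `similarity` are placeholders for the real model and metric.

def per_group_scores(examples, upscale, similarity):
    """examples: list of (group, low_res_input, high_res_ground_truth) tuples.
    Returns the average similarity score for each demographic group."""
    totals, counts = {}, {}
    for group, low_res, truth in examples:
        score = similarity(upscale(low_res), truth)
        totals[group] = totals.get(group, 0.0) + score
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}
```

A large score gap between groups in such a report is exactly the signal that should block a release until the training data is diversified.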

2) Diverse Developers Create Brilliant Solutions

Another reason diversity is important is that when people from different backgrounds come together, they can solve more issues. Aroshi Ghosh, an alum of Berkeley AI4ALL, showed how bringing a diverse mindset to world issues can benefit the population. Ms. Ghosh was interested from a young age in climate change and how it contributes to natural disasters. She knew that environmentalists and scientists are usually the ones tasked with finding ways to reduce environmental harm, and she wondered how she could help by incorporating artificial intelligence. She chose to focus on the recovery of those who were affected, specifically the long wait times for people requesting aid by phone. She noticed that calls needed to be sorted efficiently into aid-related and non-aid-related so that everyone received what they needed in a timely manner. She and a team of colleagues helped create an AI algorithm with natural language processing capabilities that could sort aid-related from non-aid-related calls for those affected by natural disasters.

Rather than leaving the problem to a team drawn only from the scientific and environmental disciplines, she was able to apply her knowledge of artificial intelligence and language processing to the larger issue: efficient recovery for those affected by natural disasters. Because she was invited into the recovery effort, the solutions that emerged were more efficient. This shows the power of bringing people with different mindsets into the conversation to approach a problem from a new angle and provide novel solutions. With this kind of model, aid calls can be sorted efficiently and people can get the help they need faster. Without an interdisciplinary team bringing a diverse approach to problem-solving, this would not have been possible.
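To give a feel for the triage task Ms. Ghosh's team tackled, here is a deliberately minimal sketch. The keyword list and example transcripts are invented for illustration; her team's actual system used a trained natural language processing model, not a keyword lookup like this.

```python
# Toy sketch of sorting call transcripts into aid-related vs. other.
# AID_KEYWORDS is an invented, illustrative list; a real system would
# use a trained NLP classifier rather than keyword matching.

AID_KEYWORDS = {"rescue", "trapped", "shelter", "water", "food",
                "injured", "evacuate", "medicine"}

def is_aid_related(transcript: str) -> bool:
    """Return True if the transcript mentions any aid-related keyword."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return bool(words & AID_KEYWORDS)

def triage(calls):
    """Split call transcripts into (aid_related, other) queues."""
    aid, other = [], []
    for call in calls:
        (aid if is_aid_related(call) else other).append(call)
    return aid, other
```

For example, `triage(["Please send rescue teams", "Where do I mail a form?"])` routes the first call to the aid queue and the second elsewhere, so urgent requests are not stuck behind routine ones.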

3) Gay or Straight Classifier AI Algorithm Gave Misleading Results

Another thing to consider when creating and evaluating an algorithm is its intended purpose. For example, Stanford University researchers created an AI algorithm that claimed to determine whether someone was gay or lesbian by evaluating their photos. This is very problematic: the algorithm could "out" people who do not want to be outed and cause the LGBTQIA+ community harm. The technology is based on the pseudoscience that a person's sexuality can be accurately identified from facial features, and it assumes that sexuality is binary rather than a spectrum. What is especially concerning is that countries where being a member of the LGBTQIA+ community is criminalized could use this technology to detain and harm their citizens, and hate groups could use it to target people for hate crimes. It could also cause people to question their sexuality if the algorithm misjudges them. Before creating an algorithm, a creator should consider the intended and unintended consequences for everyone in the affected community. (https://theconversation.com/using-ai-to-determine-queer-sexuality-is-misconceived-and-dangerous-83931)

A more diverse workforce benefits not only the company but also the consumer. Representation matters in all business situations, and the field of artificial intelligence is no exception. When people see themselves represented in a product, and see that the people who made it had them in mind, they are more likely to use it. For information on how to evaluate your dataset, refer to this article (https://arxiv.org/abs/2012.05345). It explains different ways to evaluate whether the dataset used by your algorithm reaches various benchmarks, and it gives examples of companies that traced algorithmic bias back to a problem in the dataset and then removed that data to mitigate the issue.
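One simple first step when auditing a training set, well before the deeper benchmark evaluations that article describes, is just to measure how each group is represented. A minimal sketch follows; the group labels and the 10% threshold are invented for illustration, not an established fairness standard.

```python
# Illustrative dataset audit: compute each demographic group's share of
# the training examples and flag groups below a minimum-share threshold.
# The threshold is an arbitrary example, not a recognized standard.
from collections import Counter

def representation_report(labels, min_share=0.10):
    """labels: one group label per training example.
    Returns {group: (share_of_dataset, is_underrepresented)}."""
    counts = Counter(labels)
    total = len(labels)
    return {group: (n / total, n / total < min_share)
            for group, n in counts.items()}
```

A report like this will not catch every bias, but it makes gaps like the one in the photo-upscaling story visible in a single line of output before any model is trained.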

But it is not enough for a company to simply say that it is trying to create a more diverse work environment or an inclusive product; companies should be dedicated to this mission. That is why, here at Traits AI, we value creating diverse AI algorithms with different character traits and personalities, so everyone can find an AI they like and resonate with. We also donate to AI4ALL, a non-profit organization whose mission is to create opportunities for underrepresented communities in AI and to equip people with the tools to pursue a career in artificial intelligence. Valuing diversity in AI means not just saying that you support it, but putting in the work to ensure that your company creates opportunities for everyone to be a part of AI as we progress into the Age of Artificial Intelligence.
