Diversity and Inclusion in Artificial Intelligence
Discrimination and bias are inherent problems of many AI applications, as seen, for instance, in face recognition systems failing to recognize dark-skinned women and content moderation tools silencing drag queens online. These outcomes may derive from limited datasets that do not fully represent society as a whole, or from the Western, male-dominated configuration of the AI scientific community. Although it is a pressing issue, understanding how AI systems can replicate and amplify inequalities and injustice among underrepresented communities is still in its infancy in both the social science and technical communities. This chapter contributes to filling this gap by exploring the research question: what do diversity and inclusion mean in the context of AI? It reviews the literature on diversity and inclusion in AI to unearth the underpinnings of the topic and to identify key concepts, research gaps, and evidence sources that can inform practice and policymaking in this area. Attention is directed to three levels of the AI development process: the technical level, the community level, and the target-user level. The latter is expanded upon with concrete examples of communities usually overlooked in the development of AI, such as women, the LGBTQ+ community, senior citizens, and disabled persons. Sex and gender diversity considerations emerge as the most at risk in AI applications and practices and are therefore the focus here. To help mitigate the risks that missing sex and gender considerations in AI could pose for society, the chapter closes by proposing gendering algorithms, more diverse design teams, and more inclusive and explicit guiding policies. Overall, this chapter argues that by integrating diversity and inclusion considerations, AI systems can be created to be more attuned to all-inclusive societal needs, to respect fundamental rights, and to represent contemporary values in modern societies.