In 2020, a groundbreaking paper titled “On Artificial Intelligence – A European Approach to Excellence and Trust” was released. The paper sparked debates on regulation, technology, and competition. However, one crucial aspect went largely unnoticed: the urgent need to prevent AI from perpetuating prohibited discrimination.
Fast forward three years, and we have the rise of ChatGPT and an ever-increasing investment in artificial general intelligence (AGI).
Addressing Gender Bias in AI
As AI seeps further into every aspect of business and society, we cannot afford to ignore the gender conversation. Failing to address this issue risks entrenching discrimination and bias in our systems. Addressing it requires a thorough examination of the foundations of AI development, the data it relies on, and how we identify and tackle bias within its code.
Women Campaigning for Equality
Gabriela Ramos, Assistant Director-General for the Social and Human Sciences at UNESCO, raised the issue during the World Economic Forum (WEF). Ramos highlighted a critical concern – the exclusion of women at every stage of the AI lifecycle, leading to a gender gap that poses a significant risk: the creation of an immensely unequal economic and technological system in an era of rapid digitalization.
Statistics Prove Gender Gap in the AI Workforce
The statistics surrounding this issue are alarming: male graduates in ICT outnumber women by 400%. Women represent merely 33% of the workforce in large global technology firms and make up only 22% of AI professionals.
Women author only 14% of AI research papers, and women-led firms receive a paltry 2% of venture capital funding. These figures underscore the urgent need for action to address gender bias in AI and promote inclusivity and diversity in the field.
In light of these figures, the reality of gender bias is difficult to ignore. The statistics are evidence enough, and they raise legitimate concerns that this bias will be reflected in AI systems.
We must acknowledge the existence of this bias, recognise the significant risk it poses in further entrenching biased practices, and unite in our efforts to actively address and mitigate these biases.
Women Raising Red Flags
Several women are raising awareness about the matter. In an article published in the Gender, Technology and Development journal, Subadra Panchanadeswaran, a professor at the Adelphi University School of Social Work, and Ardra Manasi from the Centre for Women’s Global Leadership at Rutgers University thoroughly explore the biases surrounding gender in AI.
In the article, they emphasise the need for ethical frameworks and for the inclusion of gender equality in AI development, government policy, and broader approaches to equality. Anu Madgavkar shares a similar sentiment in a McKinsey analysis.
The Implications of Male-Centric Data
Caroline Criado Perez also describes the adverse effects on women caused by gender bias in big data collection in her book, “Invisible Women: Exposing Data Bias in a World Designed for Men”.
In it, she draws attention to the alarming statistic that women are 47% more likely than men to sustain serious injuries in the same car accident, solely because seatbelt designs are tailored to male-centric data.
This bias could be replicated across various layers of society, manifesting in any number of ways. It is a startling reality that must be confronted, as the implications could have drastic consequences.
By Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 Africa