Sana Khan | Palladium - Mar 08 2023
How Biases in AI Technology Have Made Gender Inequality More Visible

Dr. Sana Khan is a Senior Technical Advisor in Data Science and Climate at Palladium. She previously worked with NASA developing forecast models for rainfall-triggered landslides.


The theme of this year’s International Women’s Day is ‘DigitALL: Innovation and technology for gender equality.’ According to the UN, 37% of women globally do not use the internet, and 259 million fewer women than men have access to it. This isn’t unusual; women in developing countries are often disproportionately affected and more vulnerable across all sectors.

That we can even measure this disparity is thanks to vast improvements in technology and in our ability to collect localised data. The evidence is clear: there is a gap that disproportionately affects women and, in turn, equality and women’s empowerment.

One way to better understand how this negative impact plays out is through the example of artificial intelligence (AI) and machine learning (ML) models developed to predict risks for the general population. Globally, but particularly in developing countries where data about women is scarce, AI/ML findings are not always equitable.

The impact of under-representation in AI/ML models can be significant in fields from healthcare to finance and criminal justice, producing biased predictions that adversely affect people's lives. For example, one study found that a widely used algorithm for predicting which patients would benefit from extra medical attention was less likely to recommend extra care for women than for men with similar medical histories.

To address this issue, it is essential that training data be diverse and accurately represent the population being studied. Additionally, there must be processes in place to detect and correct biases during model development.
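To make such a detection process concrete, here is a minimal, illustrative sketch in Python of two checks one might run on training data before modelling: group representation and positive-outcome rates. The records, field names, and parity-ratio heuristic are invented for the example and are not a standard audit procedure.

```python
from collections import Counter

# Toy training records: each row has a protected attribute ("sex") and
# an outcome label (1 = received the positive outcome). Real data would
# come from the project's own pipeline.
records = [
    {"sex": "F", "label": 1}, {"sex": "F", "label": 0},
    {"sex": "F", "label": 0}, {"sex": "M", "label": 1},
    {"sex": "M", "label": 1}, {"sex": "M", "label": 0},
    {"sex": "M", "label": 1}, {"sex": "M", "label": 1},
]

# Check 1 -- representation: what share of the training data is each group?
counts = Counter(r["sex"] for r in records)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    print(f"Group {group}: {n}/{total} records ({n / total:.0%})")

# Check 2 -- selection rates: how often does each group receive the
# positive label in the first place?
rates = {
    g: sum(r["label"] for r in records if r["sex"] == g) / counts[g]
    for g in counts
}
print("Positive-outcome rate by group:", rates)

# Demographic-parity ratio: a value well below 1.0 is a red flag that one
# group is systematically less likely to receive the positive outcome.
ratio = min(rates.values()) / max(rates.values())
print(f"Parity ratio: {ratio:.2f}")
```

On this toy data the parity ratio is roughly 0.42, the kind of imbalance that would warrant investigation before any model is trained on it.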

AI/ML models are data hungry, so ultimately the answer lies in feeding them better, more equitable data and evaluating any biases within the models, especially where specific cohorts, such as women, have been underrepresented in the data. But knowing the problem isn’t enough; we must also train our models on local data with better representation, because models built on unrepresentative data cannot be generalised, least of all to places where women are already underrepresented in the data.
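As a sketch of what evaluating a trained model for bias can look like, the snippet below compares a model's recall (true-positive rate) for women and men on a held-out test set. The labels, predictions, and group assignments are invented stand-ins; the point is the per-group comparison, not the numbers.

```python
# Invented held-out test data: true outcomes, model predictions, and the
# group each person belongs to. In practice these come from a real test set.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["F", "F", "F", "M", "M", "M", "M", "F"]

def recall_for(group: str) -> float:
    """Share of actual positives in `group` that the model correctly flags."""
    positives = [i for i, g in enumerate(groups)
                 if g == group and y_true[i] == 1]
    if not positives:
        return float("nan")
    return sum(y_pred[i] for i in positives) / len(positives)

for g in ("F", "M"):
    print(f"Recall for group {g}: {recall_for(g):.2f}")

# A large gap between the two recalls (an "equal opportunity" gap) means the
# model misses genuine cases more often for one group -- exactly the pattern
# seen in the healthcare-algorithm study described above.
```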

Much like the development of any technology, the training and testing of AI/ML models requires the involvement of women. This ensures that their perspectives and experiences are taken into account during the models’ design and implementation. Moreover, it helps to detect possible sources of bias and ensure that the model is relevant and useful for women or other marginalised groups.

But the burden of solving the problem doesn’t fall only on women or marginalised groups – it’s a sector-wide issue and should be addressed as such. Whether it’s ensuring diversity in the data collected for AI, developing algorithms that reduce gender bias and testing for that bias, or increasing the number of women and members of marginalised groups working in AI research, there is no single way to solve the problem.

Technology, as always, is only as good as the thought and data behind it; if we’re to move towards a more equitable world, we must build our technology with equity in mind. Ensuring women have access to technology is critical, but when we use AI/ML built on biased data to make big decisions about large swaths of people, the repercussions can be severe for individuals and society alike. If we’re to make good on the promise of ‘DigitALL’, eliminating gender bias from our AI/ML models and closing the digital gender gap are critical steps.


For more, read 'DigitALL: Bridging the Digital Gender Divide' or contact info@thepalladiumgroup.com.