Katharina Cavano | Palladium - Jul 29 2022
Palladium Teams up with Carnegie Mellon University and the World Food Programme to Win Equitable AI Data Challenge

From facial recognition software to autofill in a Google search tool, artificial intelligence (AI) has become an everyday norm. But as these technologies progress, so do concerns about the biases that have been built into our tools and how they may affect the people who use them.

These concerns led Palladium’s Jonathan Friedman to examine those biases and what they could mean for his international work. “As the conversations progress around the ethical challenges of AI, I found there was a gap in being specific about what we mean and how we define bias, especially in the context of work in the developing world,” explains Friedman, Senior Technical Advisor, as his team celebrates winning USAID’s Equitable Gender Artificial Intelligence (AI) Challenge.

The Challenge called for innovative approaches to help identify and address actual and potential gender biases in AI systems used in global development work. The hope is that these approaches will strengthen the prevention, identification, transparency, monitoring, and accountability of such systems so that they don’t produce results unfairly weighted towards one gender or another.

As Friedman adds, the crux of the team’s pitch was to better define bias and fairness in AI. “The jumping off point is defining a fairness goal for each project and understanding what fairness means within each project before beginning the work or applying any AI.” He explains that there’s a very active conversation around AI but there’s something missing. “Those conversations aren’t built on a common foundation of terminology or common understanding of what bias and fairness means in development project applications.”

He notes that much of the specific, quantifiable work on AI ethics has been done by Westerners in a Western context, on projects such as facial recognition that aren’t necessarily central or even relevant to development, which makes it difficult to apply within development programs. “USAID talks a lot about combatting bias in development work, but little about the challenge of machine learning or AI bias in the development context.”

“Overall, AI bias has been a topic of concern in the last five or six years, but it’s been shocking to see very few, if any, examples in international development work,” adds Anubhuti Mishra, a Senior Technical Advisor on the team. “It’s exciting that this work would be one of the first such projects developing a tool and building use cases for global health and international development programs specific to food access.”

Teaming Up for Success

In the second phase of the Challenge, Friedman and the team joined forces with Carnegie Mellon University and the World Food Programme (WFP) to create an Artificial Intelligence Gender Fairness Decision-Support Tool. The tool will be a compendium of products, including a conceptual framework, training module, and code toolkit, that have been vetted and refined through real-world applications. “Carnegie Mellon has the tools to work on the conceptual framework, and Palladium and WFP have the programs on the ground where we can actually test the concepts,” adds Friedman.

“This is a fantastic collaboration between Palladium and some of the leading organisations in this field,” explains Mishra. “We have Carnegie Mellon, which has been doing a lot of work in the area of AI bias and building equitable AI tools, giving us an extraordinary technical partner, and we have WFP, which has amazing access to field partners and is already doing work in the area of development and making AI more equitable.”

For Friedman, the goal of the project isn’t just to develop something that can be implemented on the ground, but to contribute to the conversation around AI and fairness. “There is growing debate about the wider adoption of AI in development projects and whether it will reinforce or amplify existing biases. We hope this work will inform this debate by providing empirical examples of how bias has been articulated, identified, and mitigated on AI projects implemented by this consortium.”

Mishra explains that the tool is a guide for data scientists building machine learning or AI models and for the project managers making use of them, helping both understand how to define fairness and bias in their projects. “We want them to understand the importance of defining fairness early on in their project and provide training with use cases to deeply understand how, throughout the lifecycle of a project, biases can be measured and in turn addressed.”
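To make the kind of measurement Mishra describes concrete, the sketch below shows how two widely used gender-fairness metrics, demographic parity difference and equal opportunity difference, might be computed for a model's predictions. This is an illustrative example only, not part of the team's actual toolkit; the toy data and the "eligibility model" framing are assumptions for demonstration.

```python
import numpy as np

def demographic_parity_difference(y_pred, gender):
    """Gap in positive-prediction rates between gender groups."""
    rates = {g: y_pred[gender == g].mean() for g in np.unique(gender)}
    return max(rates.values()) - min(rates.values())

def equal_opportunity_difference(y_true, y_pred, gender):
    """Gap in true-positive rates (recall) between gender groups."""
    tprs = []
    for g in np.unique(gender):
        mask = (gender == g) & (y_true == 1)
        if mask.any():
            tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Toy example: predictions from a hypothetical eligibility model.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
gender = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

print("Demographic parity difference:",
      demographic_parity_difference(y_pred, gender))
print("Equal opportunity difference:",
      equal_opportunity_difference(y_true, y_pred, gender))
```

A fairness goal defined at the start of a project would specify which of these (or other) metrics matters for the use case and what gap is acceptable, so the same check can be repeated at each stage of the model's lifecycle.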

The win is just the beginning. With funding from the USAID grant, the team have one year to design, test, scale and socialise the tool. “I would hope that the tools and methods that we’re developing will be adopted, but also that we can ground the conversation in the sector in our very real experiences and work,” concludes Friedman.


For more information contact info@thepalladiumgroup.com.