As we move into the latter half of the decade, the world faces an unrelenting surge in humanitarian crises. Natural disasters, forced migration, and conflicts are predicted to intensify, with the United Nations forecasting a significant rise in the number of people requiring humanitarian assistance. The World Meteorological Organization has highlighted the increasing frequency of extreme weather events, while the Internal Displacement Monitoring Centre reports that over 76 million displacements were caused by disasters and conflicts in 2023 alone.
The complexity and urgency of these challenges demand innovative solutions, and artificial intelligence (AI) is emerging as one such tool. However, despite its potential for the humanitarian sector, experts warn against viewing AI as a panacea.
We spoke with Luminita Tuchel, the Humanitarian and Stabilisation Operations Team’s (HSOT) Artificial Intelligence and Machine Learning Lead, about the outlook for AI in humanitarian applications. HSOT provides the UK Foreign, Commonwealth and Development Office (FCDO) with capacity and specialist expertise to respond effectively to disasters, crises, and complex emergencies around the world. Tuchel leads the team that is using AI—machine learning (ML) and natural language processing (NLP)—to test and identify new analytical capabilities, for example processing large volumes of data to support early warning analyses and humanitarian risk products. The team also focuses on the applied responsible use of AI and on developing adequate data safeguards.
Looking ahead, Tuchel advocates for informed experimentation, robust methodologies and ethical safeguards to ensure AI delivers meaningful impact without introducing new risks.
A Complex Landscape of AI Technologies
AI is generally used as an umbrella term for a suite of technologies, including ML, NLP, and generative AI such as large language models (LLMs). “AI is often treated as a monolith, but it’s important to unpack the technical methodologies used in different applications,” explains Tuchel. Generative AI tools such as ChatGPT have gained traction for engaging with data in user-friendly ways, but they often lack the precision required for critical tasks such as crisis early-warning analysis products. Other AI techniques can offer greater reliability in applications that demand higher accuracy.
Humanitarian AI Applications
Tuchel outlines three primary areas where AI is currently being used by humanitarian organisations:
1. Talk-to-your-data applications: These use AI tools to speed up information discovery from internal shared drives or to process large volumes of documents. Chatbots are also being developed to widen external access to information, for example to assist asylum seekers with updates on legal mechanisms, housing, or healthcare in specific countries.
2. Forecasting and early warning systems: Predictive models are being developed to anticipate natural hazards and other events, such as floods, droughts, disease outbreaks, or conflict escalations. These predictive analyses aim to support humanitarian organisations’ preparedness and anticipatory action efforts. However, uptake of these models’ outputs remains limited due to concerns around reliability and validation, and limited trust in integrating this type of insight into decision-making processes.
3. Streamlining supply chain operations: AI is being tested to optimise aid delivery routes, improve resource allocation across distributed networks, and support scenario planning and simulations. Supply chains represent one of the main areas where organisations may see operational gains in the near term.
Currently, much of the sector’s use of AI is internally focused, with organisations concentrating on their own operations. Few applications are being developed to test AI’s potential to foster collaboration and generate new collective insights. “One of the most persistent challenges in the humanitarian data space is fragmentation,” adds Tuchel. Investments are needed to make data “AI ready”—to better organise it, share it and manage it—and these investments will not only support AI adoption, but also automation and advanced analytics more broadly.
Applied Responsible Use
The adoption of AI in the humanitarian sector raises new ethical considerations. Responsible AI use requires aligning with humanitarian values and applying the humanitarian principle of ‘do no harm’ in practice. For example, understanding the type of data used in AI models helps in developing adequate data safeguards. Sensitive data—such as biometrics or specific migration routes—can expose communities to harm if mishandled. “We need to be cautious about working with highly sensitive data; it requires stringent safeguards,” Tuchel emphasises. “Organisations at the beginning of their AI journey can start by using publicly available datasets such as weather patterns, conflict indicators, and other non-identifiable aggregate sources. In the near future, as adoption grows, organisations will need to prioritise investments in data security and protection.”
A Long-Term Investment in AI Development
To harness AI’s potential, humanitarian organisations will need to invest in robust data infrastructure, and Tuchel notes she is encouraged to see growing efforts across the humanitarian ecosystem. These include enhancing data collection, management, and analysis, and deepening the understanding of how different types of data (such as geographic and text-based information) can be integrated to generate richer insights.
Beyond predictive models, organisations can develop applications in descriptive analytics, which identify relationships and patterns within data. These insights could help provide a more comprehensive understanding of the crisis information environment.
However, there are practical limitations. “AI is expensive, and very few humanitarian organisations can afford investments not only in specialised skills, but also in the tech architecture needed to run AI models in production,” Tuchel explains. Additionally, AI systems require sustainable funding for model maintenance, reproducibility, and responsiveness to rapidly evolving technical capabilities.
Looking Ahead: A Measured Approach
Sweeping AI solutions will not materialise overnight. However, 2025 is set to bring incremental progress, with a growing emphasis on strengthening data fundamentals such as infrastructure, data safeguards, and data lineage: the process of tracking the source, sharing, and transformation of data throughout its lifecycle to support transparency and accountability.
By continuing ethically anchored research, organisations can build trust and create an environment that supports grounded AI adoption.
AI will not solve all humanitarian data problems, but it offers new tools to process growing volumes of information and create new analytical capabilities. As humanitarian organisations increasingly explore its potential, the focus must remain on collaboration, transparency, and above all, the people at the heart of humanitarian efforts.