AI can support humanitarian organisations in situations of armed conflict or crisis – but organisations should understand the potential risks, study warns
AI can help humanitarians gain crucial insights to better monitor and anticipate risks, such as a conflict outbreak or escalation. But deploying systems in this context is not without risks for those affected, a new study warns.
Humanitarian organisations have been increasingly using digital technologies and the Covid-19 pandemic has accelerated this trend.
AI-supported disaster mapping was used in Mozambique to speed up emergency response, and AI systems for predicting food crises have been rolled out by the World Bank across twenty-one countries.
But the study warns some uses of AI may expose people to additional harms and present significant risks for the protection of their rights.
The study, published in the Handbook on Warfare and Artificial Intelligence, is by Professor Ana Beduschi, from the University of Exeter Law School.
Professor Beduschi said: “AI technologies have the potential to further expand the toolkit of humanitarian missions in their preparedness, response, and recovery.
“But safeguards must be put in place to ensure that AI systems used to support the work of humanitarians are not transformed into tools of exclusion of populations in need of assistance. Safeguards concerning the respect and protection of data privacy should also be put in place.
“The humanitarian imperative of ‘do no harm’ should be paramount to all deployment of AI systems in situations of conflict and crisis.”
The study says humanitarian organisations designing AI systems should ensure data protection by design and by default to minimise risks of harm – whether they are legally obliged to do so or not. They should also use data protection impact assessments (DPIAs) to understand the potential negative impacts of these technologies.
Grievance mechanisms should also be established so that people can challenge decisions that adversely affect them, whether those decisions were automated or made by humans with the support of AI systems.
Professor Beduschi said: “AI systems can analyse large amounts of multidimensional data at increasingly fast speeds, identify patterns in the data, and predict future behaviour. That can help organisations gain crucial insights to better monitor and anticipate risks, such as a conflict outbreak or escalation.
“Yet, deploying AI systems in the humanitarian context is not without risks for the affected populations. Issues include the poor quality of the data used to train AI algorithms, the existence of algorithmic bias, the lack of transparency about AI decision-making, and the pervading concerns about the respect and protection of data privacy.
“It is crucial that humanitarians abide by the humanitarian imperative of ‘do no harm’ when deciding whether to deploy AI to support their action. In many cases, the sensible solution would be not to rely on AI technologies as these may cause additional harm to civilian populations.”