Increased transparency about how countries use AI to manage migration needed, new study shows

Increased transparency from countries about how they use AI to manage migration is needed to boost trust and strengthen the rule of law, a new study says.
Any overuse of AI in migration management may perpetuate biases and errors, promoting excessive reliance on technology and undermining trust in decision‑making processes, an expert has warned. Adequate cybersecurity measures are also needed to protect sensitive data about vulnerable migrants.
However, if used responsibly, with potential risks adequately identified and avoided or mitigated, AI in migration management could present opportunities such as freeing up caseworkers’ time to focus on other critical areas.
The research, by Professor Ana Beduschi from the University of Exeter, emphasizes the importance of improving how countries use AI in migration management and of adhering to international human rights law.
States should ensure that AI is used responsibly and in a manner that respects the rights and dignity of migrants throughout the different phases of the migration process.
Governments use AI technologies, including generative AI, to streamline workloads and increase efficiency in migration processing.
However, not all countries have publicly acknowledged whether and, if so, how they use AI in international migration management.
The study says countries should publicly acknowledge their use of AI without necessarily revealing sensitive details that could compromise national security or personal information. This includes information about which AI systems are used, for what purpose, and whether – and the extent to which – they involve human input and assistance.
Professor Beduschi said: “Increased transparency would help to increase people’s acceptance of AI in public services. Transparency can also lead to better accountability, ensuring that decisions are justified and in line with the rule of law. Even in sensitive areas, such as migration, where matters may be closely related to national security imperatives, public authorities should be accountable for their decisions and actions.”
In using AI to regulate migration, states would still need to comply with international human rights law, including the rules regarding the right to privacy and the guarantee of non‑discrimination.
Professor Beduschi has produced a risk matrix that can be used to identify, prioritize, avoid, and mitigate risks. The framework helps states use AI in international migration responsibly.
It encourages them to actively and thoroughly assess whether AI systems, including generative AI, could potentially cause harm or worsen existing situations for migrants and their communities.