Human input boosts citizens’ acceptance of AI and perceptions of fairness, study shows
Increasing human input when AI is used for public services boosts acceptance of the technology, a new study shows.
The research shows citizens are concerned not only about AI fairness but also about potential human biases. They favour the use of AI in cases where human administrative discretion is perceived as too great.
Researchers found citizens’ knowledge about AI does not alter their acceptance of the technology. More accurate and lower-cost systems also increased acceptance, and the cost and accuracy of the technology mattered more to respondents than human involvement.
The study, by Laszlo Horvath from Birkbeck, University of London and Oliver James, Susan Banducci and Ana Beduschi from the University of Exeter, is published in the journal Government Information Quarterly.
Academics carried out an experiment with 2,143 people in the UK. Respondents were asked to indicate whether they would prefer more or less AI in systems used to process immigration visas and parking permits.
Researchers found more human involvement tended to increase acceptance of AI. Yet, when substantial human discretion was introduced in parking permit scenarios, respondents preferred more limited human input.
System-level factors such as high accuracy, the presence of an appeals system, increased transparency, reduced cost, non-sharing of data, and the absence of private company involvement all boosted both acceptance and perceived procedural fairness.
Dr Horvath said: “Our results suggest resistance to the accumulation and sharing of citizens’ data, but we also show, in the context of other system-level characteristics, that citizens want working technology, in which case they are willing to forgo heavy human supervision.”
Professor Banducci said: “Our results contribute to the understanding of technology acceptance in digital government and AI. Citizens who have a baseline resistance to new technologies in other contexts will prefer greater human administrative involvement.”
Professor James said: “Many routine interactions with government involve permit applications similar to the kinds we examined, so the findings are of broad relevance to government services using AI.
“Respondents appeared to be more strongly influenced by the costs and accuracy of the technology than by concerns about ‘humans in the loop’, transparency, or even data sharing. This suggests citizens may perceive legitimacy more profoundly in terms of the system’s efficiency and its ability to deliver accurate and cost-effective results.”