A landmark national consultation by CENTRIC (Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research) at Sheffield Hallam University reveals that a majority of citizens across the UK support the use of Artificial Intelligence (AI) in policing, coupled with a clear demand for strong accountability, oversight, and safeguards.
The survey consulted 10,114 citizens across all four UK nations, ranging in age from 14 to 85+ years.
The survey reveals positive perceptions of police use of AI, with 61.3% of citizens supporting the use of AI by police and 64.0% believing that AI enhances police capabilities, compared to 12.4% who believe AI undermines them.
Respondents recognise key advantages of AI adoption in policing, particularly improved national security, increased police effectiveness in solving crimes, and more time and resources for the police to focus on high-priority crimes.
Despite overall support, the survey also identifies significant concerns. Chief among these are a lack of accountability, where police may blame AI for errors; algorithmic bias and inaccuracy, particularly how such issues could affect specific groups; and online privacy, including potential over-surveillance or misuse of personal data. These concerns are reflected in the 39.6% of respondents who would be unwilling to contribute personal data to train AI systems for policing purposes, compared with only 9.3% who would be very willing to provide their data.
The public’s views depend strongly on the specific AI application context. Comparing 32 AI deployment scenarios, the consultation shows that trust and acceptance of AI tend to be highest in situations where AI deployment carries low risks for the public and where AI use can reduce serious harms to vulnerable groups (e.g., children). Acceptance is highest, for instance, for identifying (potential) perpetrators of child sexual exploitation and terrorism, and lowest for the automation of core policing functions, such as handling 999 emergency calls.
The findings further highlight that trust is conditional on how AI is used: 54.9% trust the police to use AI in decision-support roles, whereas only 36.0% trust AI to make policing decisions independently.
A core demand emerging from the consultation is consistent oversight and accountability: 75.5% of participants want mandatory accountability processes in place before the police deploy AI, yet fewer than 20% believe sufficient accountability currently exists.
Prof. Babak Akhgar OBE, Director of CENTRIC, said: “AI can provide police forces with significant capability to fight crime and protect the public from harm. The public recognise the benefits of AI in policing and are supportive of AI deployments in many operational scenarios – but they also expect strong independent oversight and accountability.”
The consultation forms part of CENTRIC’s broader work on AI accountability, ongoing since 2021, which supports police forces, government, and policymakers in ensuring that AI is deployed responsibly and in line with public expectations. The research has focused on three priority areas: strengthening operational practice within policing, shaping policy and governance through citizen-informed requirements, and reinforcing societal trust in AI. The consultation was conducted as part of the UKRI-funded project AIPAS (AI Accountability in Policing and Security).
Full versions of the National Citizen Consultation on AI in Policing - Highlight Report are available upon request.