New laws around police use of AI technologies advised
The study has recommended statutory codes of practice be introduced to provide clarity and safeguards.
The development of new laws on how artificial intelligence (AI) is used in the police force has been recommended in an independent report.
The use of emerging technology, such as facial recognition and predictive policing – where computer systems analyse large sets of data to help decide where to deploy police – was assessed in a review conducted in partnership with Edinburgh Napier University and the Scottish Government.
The study, published on Wednesday, has recommended statutory codes of practice be introduced to provide greater clarity and safeguards around the future use of developing AI applications.
While the report did not find significant legislative gaps around police use of emerging technologies, it suggested new laws may be needed for autonomous security robots, if their use for enforcement purposes is considered in future.
The proposal is one of 18 recommendations made by the Independent Advisory Group on Emerging Technologies in Policing.
The review also looked at how innovation in some areas, including electronic databases and biometric identification and surveillance, could affect human rights, ethical best practice and public confidence in policing.
It recommended the inclusion of “an ethical and human rights impact assessment” in any business case for new technology, and advised Police Scotland to publicly set out the legal basis for using it and the relevant complaints process.
Justice Secretary Keith Brown said: “It is important that in adopting new technology, Police Scotland must do so in a way that secures public confidence.
“That is why a robust rights-based, ethical approach to using new technology is so vital.
“This valued report makes important suggestions in this significant and interesting sphere which will be given careful consideration.”
The advisory group was chaired by Professor Liz Aston, director of the Scottish Institute for Policing Research based at Napier.
She said: “We believe this report, which draws together a wide range of expertise, provides a platform for policing bodies to adopt innovation in a way which retains public confidence and delivers social justice.
“New technologies are developing all the time and have the potential to play a key role in investigating and preventing crime – but it is crucial that any unintended consequences are understood, evaluated and addressed.
“I hope the work we have done over the last two years will be useful to other areas of policing here and around the world.”
The advisory group was established in 2020 in response to a report by the Scottish Parliament’s Sub-Committee on Policing about Police Scotland’s use of digital triage devices, known more commonly as “cyber kiosks”.
Then justice secretary Humza Yousaf suggested the formation of an independent group to monitor wider technological developments in policing.