AI that detects hate messages against women could also be turned on drug dealers
Researchers are working on a natural language processing model that could eventually help clear the police digital evidence backlog.
AI being developed to detect aggressive hate messages against women could also be used to flag up texts exchanged by drug dealers.
Models have been tested for a year on dummy data by the Forensic Capability Network (FCN), the national body for forensic science, with studies suggesting the software could find threatening and abusive language about women around 21 times faster than a human.
In one test it found three aggressive and emotive phrases within 456 messages in just over a minute.
The lines it found in a fake conversation between two people were: “They’re a bunch of slags. You shouldn’t hang around with them”, “If I see or hear that you’ve been out with those f****** whores again both you and them will regret it. Do you understand me”, and “Don’t make me hurt you again bitch. You know what I’m like when I get angry.”
The natural language processing model flags messages containing obvious swear words and terms of abuse aimed at women, but can also pick up more obscure terms such as “chad” and “femoid”.
“Femoid” is used by incels – men who describe themselves as involuntarily celibate – to imply that women are non- or sub-human, while “chad” refers to a sexually attractive or successful man.
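The FCN’s model itself has not been published, but the basic flagging idea can be illustrated with a minimal sketch. The Python below uses a simple keyword lexicon rather than a trained language model; the FLAGGED_TERMS list and the flag_messages helper are hypothetical examples for illustration only, not the researchers’ actual vocabulary or method.

```python
import re

# Hypothetical lexicon: obvious abuse terms plus community-specific slang
# such as "femoid" and "chad" (terms named in the article). A production
# system would use a trained NLP classifier, not a fixed word list.
FLAGGED_TERMS = {"slag", "whore", "bitch", "femoid", "chad"}

# One compiled pattern matching any flagged term as a whole word
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in FLAGGED_TERMS) + r")\b",
    re.IGNORECASE,
)

def flag_messages(messages):
    """Return (index, message, matched terms) for each message that hits."""
    hits = []
    for i, text in enumerate(messages):
        matches = PATTERN.findall(text)
        if matches:
            hits.append((i, text, sorted({m.lower() for m in matches})))
    return hits

if __name__ == "__main__":
    sample = [
        "See you at the gym later?",
        "They're a bunch of slags. You shouldn't hang around with them",
    ]
    for idx, text, terms in flag_messages(sample):
        print(f"message {idx} flagged for {terms}: {text!r}")
```

A real system would go well beyond exact word matching – handling misspellings, context and coded language, for instance – which is where a trained natural language processing model earns its keep.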
Researchers from the FCN and the University of Warwick hope to be allowed access to real data from a UK police force in order to further refine the AI model.
The team believe a key next step could be training it to detect messages related to drug crime.
The FCN’s lead scientist Simon Cullen said: “We’re some way off taking these models from a test environment into operational policing.
“But we’ve shown in theory that carefully customised AI models can operate in the background, and flag useful information that could be relevant to an investigation.”
There is currently a backlog of around 25,000 digital devices waiting to be examined as part of live police investigations in England, Wales and Northern Ireland.
Northumbria Police Deputy Chief Constable Jayne Meir, digital forensic lead for the National Police Chiefs’ Council, said: “Analysing huge amounts of data has become a crucial part of modern policing, and the volume of data is only growing.
“Our police investigators and digital forensic specialists will always be the ones to make decisions.
“But if we can help them analyse evidence faster with technology, then we should absolutely explore that.”