
Labour would make training AI to spread terrorism a criminal offence – Cooper

In a major speech on Monday, the shadow home secretary will set out the party’s approach to national security, including action to stop online radicalisation.

Nina Lloyd
Sunday 16 July 2023 17:30 EDT
Shadow home secretary Yvette Cooper will outline Labour’s approach to national security in a major speech (PA)


A new law aimed at preventing extremist groups from training artificial intelligence (AI) chatbots to spread terrorism would be introduced under a Labour government, Yvette Cooper will say.

In a major speech on Monday, the shadow home secretary will set out the party’s approach to national security, including action to stop online radicalisation.

Ms Cooper will argue that while encouraging terrorism is a criminal offence under existing legislation, it is harder to establish culpability when AI is being used as a tool for promotion.

She will announce that Labour would close the “loophole” by criminalising the deliberate training of chatbots to radicalise vulnerable people.

The party would work closely with the intelligence community to prevent malicious actors from using the technology for such purposes, Ms Cooper will pledge.

Speaking at a Royal United Services Institute (Rusi) defence and security think tank event, she is expected to say: “Artificial intelligence creates new opportunities for Britain, including for law enforcement, but it also presents significant new threats and risks.

“A series of recent cases have revealed the potential for online chatbots to be used to radicalise people with pro-terror content.

“Our law enforcement and legislation must not be outpaced by terrorists and extremists using new technologies to prey on vulnerable people.

“That’s why Labour will criminalise those who purposely train chatbots to spout terrorist material, with stronger action to monitor and stop radicalising chatbots that are inciting violence or amplifying extremist views.”

In her speech, the shadow minister will call on the Government to include action to tackle the deliberate misuse of AI and the rise in online radicalisation in an update on counter-terrorism strategy expected this week.

But a new cross-government strategy for state threats is also needed to run alongside the UK’s plan, she will argue.

Ms Cooper will point to Boris Johnson’s meeting with former KGB officer Alexander Lebedev while he was foreign secretary as an example of behaviour which showed disregard for the importance of national security.

Mr Johnson has said officials were aware in advance that he was attending the house of Evgeny Lebedev and said his contact with the newspaper owner’s father at the event was not “formal” or “pre-arranged.”

Labour’s announcement comes after the sentencing of Jaswant Singh Chail, a former supermarket worker who planned an attack on the Queen at Windsor Castle after being encouraged by his “AI girlfriend”.

The Government’s independent reviewer of terrorism legislation, Jonathan Hall KC, has previously highlighted some of the legal challenges posed by the potential use of AI for radicalisation.

In a report in June, he said: “It is unclear how legal culpability would be established in a scenario where an individual was radicalised (in part) by an AI system.

“Although no case currently exists for revisiting terrorism legislation, the information-gathering capability of AI large language models, and the possibility of truly autonomous target selection, mean that these laws need to be kept under close review.”
