Labour commits to introducing AI regulation for tech giants
The party’s manifesto says it will introduce “binding regulation” on the safe development of AI models.
Labour has said it will introduce “binding regulation” on the biggest artificial intelligence firms to ensure the “safe development” of AI if it wins the General Election.
In its manifesto, the party said it would target the regulation at the “handful of companies developing the most powerful AI models”.
Labour said it would also ban the creation of sexually explicit deepfakes, and pledged to create a new Regulatory Innovation Office which it said would help regulators across sectors keep up with rapidly evolving new technologies.
It said regulators were currently “ill-equipped” to deal with such advances, which often “cut across traditional industries and sectors”.
The new office would help regulators “update regulation, speed up approval timelines and co-ordinate issues that span existing boundaries”, Labour said.
This contrasts with the Government’s approach during the last parliament, which relied on existing regulators to monitor AI use within their own sectors rather than creating a new, central regulator dedicated to the emerging technology, an approach it said was more agile and pro-innovation.
As part of that approach, in February, the Government pledged to spend £100 million on AI regulation, including on upskilling regulators across different sectors on how to handle the rise of AI.
And speaking in November last year, Prime Minister Rishi Sunak said that while “binding requirements” would likely be needed one day to regulate AI, for now the priority was to move quickly rather than legislate.
Last month, a number of world-leading AI scientists called for stronger action from world leaders on the risks associated with AI, and said governments were moving too slowly to regulate the rapidly evolving technology.
In an expert consensus paper published in the journal Science, 25 leading scientists said more funding was needed for AI oversight institutions, as well as more rigorous risk assessment regimes.