Updated Online Safety Bill reaches Parliament promising to make internet safer
The Government has altered a number of areas of the long-awaited Bill in an attempt to improve safety while protecting free speech.
The Government’s long-awaited proposals for new internet safety laws that will require tech firms and social media platforms to prevent users from being exposed to harmful content have reached the key milestone of being introduced to Parliament after a number of major updates.
The Online Safety Bill, which has been in progress for around five years, will see Ofcom become the new regulator for the sector, with the power to fine companies or block access to sites that fail to comply with the new rules.
As the updated Bill is published, the Government has confirmed a number of areas in which the proposals have been strengthened compared with the previous draft published last year – including the power to hold company executives criminally liable if they fail to comply with Ofcom information requests just two months after the Bill becomes law, rather than after two years as previously drafted.
In addition, company managers will now be held criminally liable for destroying evidence, for failing to attend interviews with Ofcom or providing false information in them, and for obstructing the regulator when it enters company offices.
The revised Bill has also changed its approach to so-called “legal but harmful” content – material which is not itself illegal but could cause harm to users who encounter it.
Under the updated Bill, the biggest social media platforms must address this content by carrying out risk assessments on the types of harm that could appear on their services and setting out in their terms of service how they plan to deal with them.
But the agreed categories of legal but harmful content will now be set out in secondary legislation approved by Parliament, which the Government says will not leave harmful content debates in the hands of social media executives or cause them to over-remove content over fears of being sanctioned.
Other updates include a new requirement to report child sexual abuse to the National Crime Agency.
The Government has also said news content will be exempt from any of the regulations as part of efforts to protect free speech.
The changes come after MPs, peers and campaigners warned the initial proposals failed to offer the expected user protection.
That has since sparked a number of other recently announced changes to the draft Bill, including bringing paid-for scam adverts into scope, requiring sites that host pornography to ensure their users are 18 or over and criminalising cyberflashing.
“The internet has transformed our lives for the better. It’s connected us and empowered us. But on the other side, tech firms haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platforms,” Culture Secretary Nadine Dorries said.
“Instead they have been left to mark their own homework.
“We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving.
“Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age.
“If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.
“Since taking on the job I have listened to people in politics, wider society and industry and strengthened the Bill, so that we can achieve our central aim: to make the UK the safest place to go online.”
Damian Collins, chair of the Joint Committee on the draft Online Safety Bill, which scrutinised the previous version of the proposed rules, said it was a “huge moment for the safety of all internet users”.
“The UK is leading the world with legislation to finally hold social media companies to account for the offences that take place on their platforms, like hate speech, fraud, terrorism, and child abuse,” he said.
“The Joint Committee on the Online Safety Bill set out a clear list of recommendations back in December, on how to make the Bill stronger, while also protecting freedom of speech and the freedom of the press.
“I’m very glad to see that the Government has adopted so many of our recommendations, ensuring we really will make the UK the safest place to be online in the world. The era of self-regulation for Big Tech has finally come to an end.”
However, some campaigners have expressed concerns about the ongoing use of the phrase “legal but harmful” in the Bill and the impact it could have on free speech.
Jim Killock, executive director of the Open Rights Group, said using the term amounted to the creation of a “censor’s charter”.
“Unbelievably while acknowledging the sheer amount of power (Facebook executive) Nick Clegg and other Silicon Valley bigwigs already have over what we can say online, Nadine Dorries has created a bill that will grant them even more,” he said.
“The online safety bill will outsource decisions about what we can see online from British courts, Parliament and police to the terms of service documents of social media platforms drafted by Silicon Valley lawyers.
“‘Legal but harmful’ is a censor’s charter. Civil society groups have raised the warning, Parliament has raised the warning, the Government’s own MPs have raised the warning but the Government has ignored them all.
“Failure to remove it will ban Brits from doing normal things like making jokes, seeking help and engaging in healthy debate online.
“There are now lots of new and unworkable ideas tacked on at the last minute, making it a monstrous melange of fit to fail duties that will make minority groups less safe online.”