
Logan Paul: Following the YouTube controversy, should social media have the same regulations as journalism?

Unlike journalists, social media users are not restricted in what they can and cannot publish, and their content is not scrutinised by a regulatory body. But should it be?

Ruth Coustick-Deal
Sunday 11 February 2018 11:51 EST
The Logan Paul case raises questions about what internet giants will, and won’t, do to the users that bring in the most clicks (Getty)


The laws that shield websites like Facebook from responsibility for the content posted on them have long been credited with making the internet’s growth and accessibility possible. For the first big online communities, such as LiveJournal and Myspace, that freedom meant the freedom to invite everyone on the internet to have a voice. But as those rules come under closer scrutiny, confidence in them appears to be crumbling away.

To understand why, consider just one case: Logan Paul. At the end of 2017, the YouTuber filmed the body of a suicide victim in Japan’s Aokigahara forest. While journalistic ethics prohibit this kind of behaviour, there are no such standards or guidelines for YouTubers, who have the freedom to post, and monetise, whatever content they choose. There was a public demand that YouTube respect the family, behave ethically and take down the content. In the end, however, it was Paul himself who removed the video, not the platform.

One of the reasons the site comes under fire is that, in situations like this, it doesn’t apply any rules consistently. YouTube does have clear guidelines about gory or graphic content – but didn’t enforce them in the Logan Paul case. Taking down such a high-profile creator would surely have opened it up to furious accusations of censorship.

Twitter, notorious for the levels of abuse on its site, is another example. It often allows obviously antisemitic, hateful tweets to stay up, but blocks accounts that upset its “blue tick” verified celebrities. Once again, this is all about who brings in the views, the clicks and the money.

In response to these issues, the phrase “platform responsibility” is now being thrown around by academics, campaigners, governments and even the United Nations. All of the above want to see some accountability from Facebook and Google for the ways the sites have profited from objectionable content. The question is: in what shape? And would that accountability break the web?

Online businesses benefit from the current framework of law in Europe and the US, which establishes that platforms are not legally liable for the content posted on them. These laws have long been seen as essential for a functioning internet. The logic goes: we shouldn’t make individual websites liable, because to give themselves legal cover they would enforce the rules proactively, producing even more extreme and heavy-handed censorship. That regulatory freedom is also the promise of a low barrier to entry for other web startups, so they too can avoid legal fees. It’s the promise of a “new Facebook” – something that seems less and less likely every day.

These frameworks (Section 230 of the Communications Decency Act in the US; the E-Commerce Directive in the EU) do establish that sites where users create the content, from Airbnb to Reddit, must play a “passive” role to have these protections. They are treated like telephone lines – providing a neutral service. They are not allowed to interfere in how content is shown, and must obey requests to take down illegal content.

But of course, this doesn’t ring as true as it once did. All of these sites play an active role in deciding what their users actually see.


Twitter often allows hate speech to remain online, but blocks users that offend ‘blue-ticked’ celebrities (Loic Venance/AFP/Getty)

Hate speech is illegal in every EU country, but platforms serve as their own judges of what it looks like. After a more “hands-off” approach in their earlier years of operation, YouTube and Facebook have recently attempted to take on a limited editorial role, using a combination of two systems to deal with “illegal” content. Both of these working solutions remain harmful to the web and, more importantly, to people.

The first system uses human beings: low-wage workers employed to monitor flagged content (which is likely to be disturbing) and decide whether it is permissible. It is a particularly horrific 21st-century job, one widely shown to be traumatic for those involved.

The second system uses robots. Content detection is automated using algorithms: a system so flawed that at its best it mistakes dunes for nudes, and at its worst removes posts from people discussing the racism they’ve received, because they quote their own abusers. Artificial intelligence has proved far more effective at generating terrifying new forms of content than at detecting problematic material uploaded to the same platforms.

Facebook and YouTube also hold on to their “passive” status by saying it is an algorithm that makes the calls about presentation and favoured content. However, they built those algorithms, continue to adapt them strategically and programme them with cultural bias – such as the choice to demonetise LGBT+ content, which hit even a wedding video of two wives drinking milkshakes. These opaque systems allow large corporations to be the judge and jury of speech.

These companies have spent years making strategic decisions that cater to the most privileged in their user base, ignoring the growing numbers of people seeking respite from abuse and hate speech, while simultaneously promoting themselves as the new vanguard of participatory democracy.

They’re some of the most powerful corporations on Earth, with more users than the populations of several countries, and their conduct increasingly does not fit the definition that most conveniences them. If these sites already intervene in the shape of speech online, then governments feel confident asking them simply to increase the amount of work they’re already doing.

In Europe, legislators are changing the regulation of the web entirely.

Germany introduced a new hate speech law last year, which requires Facebook to take down content within 24 hours of receiving a complaint – a system introduced in direct response to Facebook’s failure to act under existing laws, after a spate of defamatory content targeting refugees.

The UK Government is now considering redefining sites like Google and Facebook as publishers, so that they would fall under the jurisdiction of existing regulators. In OpenMedia’s campaign to “Save the Link”, we’ve been fighting against one such intervention in the EU. The European Commission is pushing for content filtering, which would mean posts are blocked before they are uploaded if any copyrighted content is detected.


David Kaye, the UN’s special rapporteur on freedom of expression, said censorship tools ‘often risk over-regulation’ (UN)

This would be a new condition for maintaining that freedom from liability. It covers only copyright infringement, but the impetus is there to expand default filtering as the answer to all the “bad things” on the web, from hate speech to “fake news”.

David Kaye, the UN’s special rapporteur on freedom of expression, has declared that the EU is going down a dangerous road with this increasing trust in automation as the solution to the ills of the net, noting that “the tools used often risk over-regulation, incentivising private censorship that could undermine public debate and creative pursuits”.

Similarly, Professor Daphne Keller says “the idea that automation can solve illegal content problems without sweeping in vast amounts of legal content is fantasy”.

What we are now seeing is the predictable outcome of years of platform negligence and indifference to hate speech and abusive content online: governments are stepping in, proposing heavy-handed regulatory responses or legislative changes that are not fit for purpose and are ripe for misuse. Even in the US, courts are starting to hand down judgments that water down the power of Section 230, the American version of this law.

However, rather than changing the rules to demand more censorship as the price of being classified as “neutral”, we need something entirely new: a kind of regulation that works specifically to restrain the power of the giants of the web, and prevents them from profiting from hate speech.

Some proposals that experts and politicians have put forward to solve the problem of what responsibility can look like, without breaking the web, include:

  • The UK Home Affairs Committee’s report on hate crime proposed that content platforms pay the police to respond to death threats made on their sites, just as football stadiums pay for policing on match days
  • EFF has proposed, among many suggestions, better-staffed and more diverse abuse-handling teams that understand the relevant languages and cultural contexts
  • Others, like the Open Markets Institute, suggest an anti-monopoly approach to increased responsibility: banning any further acquisitions by Google and Facebook until they have clear strategies for managing the volume of hate speech. They also propose banning bots, or rules requiring bots to be labelled as such, to slow down the supposed “viral” spread of orchestrated hate campaigns
  • The chair of the media regulator Ofcom recently proposed defining Google and Facebook as publishers, which would put them under the scrutiny of the regulator
  • Joe McNamee from European Digital Rights proposed in a recent paper that we need targeted laws focusing on narrow types of content and specific online industries. This approach might work for those campaigning for Airbnb to stop claiming it is just a “neutral platform”. Many think it should refuse listings unless basic safety checks have been carried out, after multiple deaths at listed properties

Now that content creation happens on such a huge global scale, the established rules are reaching breaking point: the sheer volume of hate speech and death threats brings attention, eyeballs and ultimately cash to these huge businesses, while having a chilling, silencing effect on people who are afraid to contribute online because of the vitriol they will receive.

It’s exciting to know that so many people are working on making the web a better place. Of course, the risk of disastrous laws is real. And it is difficult to envision an effective response that is not fundamentally at odds with the business model of companies that thrive on an unhealthy “attention economy” of monetised hate and outrage.

We need a broad consensus on best practice that doesn’t hand more power to those who already have a monopoly on online speech – and a solution that makes the web bearable for oppressed people.
