What happens when you 'report abuse'? The secretive Facebook censors who decide what is - and what isn't - abuse

Chris Green is the first reporter to be allowed into the Dublin office where staff must decide where freedom of speech begins and ends

Chris Green
Friday 13 February 2015 13:01 EST
The 'vast majority' of reports received by Facebook require no further action (Getty Images)


The first thing that catches the eye of a visitor entering the lobby of Facebook’s European headquarters is the array of motivational posters stapled to the far wall. “Proceed And Be Bold”, orders one. “What Would You Do If You Weren’t Afraid?” asks another. “Done is Better Than Perfect”.

Stepping inside the gleaming, jagged glass building in Dublin’s docklands is like passing through a portal to San Francisco. Staff wearing hoodies and jeans and clutching Apple laptops negotiate their way through an obstacle course of low-hanging Frank Gehry lights, orange beanbags and ping pong tables.

But appearances can be deceptive. The joyful pop soundtrack in the canteen and the urban art adorning the walls belie the fact that some of the hundreds of people employed here are carrying out very serious – and very sensitive – work.

To mark Safer Internet Day on Tuesday, The Independent became the first newspaper to be given access to Facebook’s Community Operations team: the men and women tasked with responding to reports of abuse by the site’s users. They are trained to cover everything from low-level spam all the way up to serious cyberbullying, hate speech, terrorist threats and suicidal cries for help.

Dublin is Facebook’s most important headquarters outside California. The Community Operations team based here does not just cover Europe, but also examines reports sent in by millions of users across the Middle East, Africa and large parts of Latin America. In the words of Sonia Flynn, the managing director of Facebook Ireland, they are “the front line between Facebook and the people who use Facebook”.

She adds that while the “vast majority” of reports received require no further action, when a serious concern is raised the team needs to act quickly and decisively. For this reason, it is not enough for a Community Operations person covering Spain simply to speak fluent Spanish – they must also have a good cultural knowledge of the country. Forty-four different nationalities are represented in the Dublin team alone.

“We put emphasis on hiring people from the different countries with the right language expertise and cultural understanding,” says Flynn. “When someone creates a piece of content – whether it’s a photo or a comment – there’s what’s said and what’s meant. That’s why it’s really important for us to have people who understand not just the language, but the culture of the country that they’re supporting.”

The offices of Facebook in Dublin, where the Community Operations team is based

In the past, Facebook has been criticised for lacking the human touch in its interactions with its ever-growing army of users (at last count, there were 1.39 billion of them across the world). A notable example occurred at the end of last year, when American web designer Eric Meyer highlighted what he described as the site’s “inadvertent algorithmic cruelty”.

Mr Meyer had been invited to try out the site’s Year in Review feature – an automatically generated list of his Facebook “highlights” from 2014 – only to be confronted by a picture of his daughter Rebecca, who died earlier in the year. The product manager responsible for the feature later emailed him to personally apologise.

Content policy manager Ciara Lyden, who used to work on the Community Operations team, says she often saw first-hand how the public perceive Facebook. “Every so often I’d help someone out with a query that they had, and then they’d be like: ‘Thanks – if you’re a person or a robot, I don’t know’. I’d have to write back to tell them that I really am a person,” she says.

While she admits that Facebook could “do more” to show its human side, she points out that the site has to be built around the fact that it has more than a billion users who are online 24 hours a day. The company has put a lot of effort into what it calls “compassion research”, taking advice from academics at Yale’s Centre for Emotional Intelligence on how to help users interact with each other so they can resolve their differences without Facebook taking any action at all.

Previously, if one user said something that another found offensive, Facebook would simply look at whether the content broke any rules and if it did, take it down. Now it can act as a sort of digital counsellor, giving the offended person the chance to explain to the other why they were hurt.

Facebook could “do more” to show its human side (Ed Jones/AFP/Getty Images)

Instead of just typing into a blank box, the offended user is given a set of possible phrases to describe how they feel. The choice depends on their age: a teenager will see words more likely to be in their vocabulary (“mean”), whereas an adult will see more sophisticated options (“inappropriate”, “harassing”).
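A minimal sketch in Python of how that kind of age-dependent wording might be chosen follows. The function name, the age cut-off and the exact phrase lists are assumptions for illustration only; Facebook has not published its reporting-flow code, and the article itself names only the example words shown.

```python
# Illustrative sketch only - Facebook has not published its reporting flow.
# The age cut-off and phrase lists are assumptions; the article names only
# "mean" (for teenagers) and "inappropriate"/"harassing" (for adults).

TEEN_PHRASES = ["mean"]
ADULT_PHRASES = ["inappropriate", "harassing"]

def phrase_options(reporter_age: int) -> list[str]:
    """Return the feeling words offered to a reporter, based on their age."""
    return TEEN_PHRASES if reporter_age < 18 else ADULT_PHRASES

print(phrase_options(15))   # ['mean']
print(phrase_options(40))   # ['inappropriate', 'harassing']
```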

The approach seems to be working: in the vast majority of cases, the person responsible for the post deletes it of their own accord. “In the real world, if you upset me I’d likely go and tell you that you’ve upset me, and we’re trying to make that mirror what happens through our reporting flows,” explains Flynn.

However, the company is keen to stress that every single report of abuse is read and acted upon by a human being, not a computer – a fact that might surprise most users. The system is constantly monitored by staff based across four time zones in California, Texas, Dublin and Hyderabad in India, so there is never a “night shift” with fewer staff on hand.

When a user clicks “report”, the complaint is graded by severity and routed to the right team. “If there’s a risk of real-world harm – someone who is clearly cutting themselves, or bullying, anything touching child safety in general, any credible threat would be prioritised above everything else,” says Julie de Bailliencourt, Facebook’s safety policy manager for Europe, the Middle East and Africa.
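As a rough, hypothetical illustration of that sort of triage (the category names and priority numbers below are invented, not Facebook's actual taxonomy), a simple priority queue would put reports touching real-world harm ahead of everything else:

```python
import heapq

# Hypothetical triage sketch: the categories and priority values are invented
# for illustration, not Facebook's real system. Lower numbers are handled first.
PRIORITY = {
    "self_harm": 0,        # risk of real-world harm
    "child_safety": 0,
    "credible_threat": 0,
    "bullying": 1,
    "hate_speech": 1,
    "spam": 2,
}

def enqueue(queue: list, report_id: str, category: str) -> None:
    """Grade a report by category and place it in the priority queue."""
    heapq.heappush(queue, (PRIORITY.get(category, 3), report_id, category))

queue: list = []
enqueue(queue, "r1", "spam")
enqueue(queue, "r2", "self_harm")
enqueue(queue, "r3", "bullying")

# Reports touching real-world harm come off the queue first.
while queue:
    priority, report_id, category = heapq.heappop(queue)
    print(priority, report_id, category)
```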

Although she says the company does have a set of response times by which it aims to help people, it will not make them public – or divulge how many abuse reports it receives overall. Security around the Community Operations team is also strict: to protect users’ privacy, a sign near where they sit reads “No Visitors Beyond This Point”.

Most reports are relatively benign. In Turkey, for example, every time the football team Galatasaray plays one of its big rivals, Facebook notices a spike in reports from supporters of both teams complaining about each other. The same is true of derby-day football matches in the UK.

Most Facebook reports are relatively benign (Getty)

“People tend to report things that they don’t like, not necessarily things that are abusive,” says de Bailliencourt, who has worked at Facebook for five years. “It’s not like we can pre-empt things, but we know that during big sporting events we’re going to have an increase.”

News events also cause spikes in abuse reports. In the wake of the Charlie Hebdo terrorist attacks in France, a sudden surge in the number of controversial posts and heated debate resulted in more complaints. Or as de Bailliencourt puts it: “Anything that happens in the real world happens on Facebook at the same time.”

Very serious reports – such as someone threatening to kill themselves – are fast-tracked to the police or security services, she adds. “If we feel someone has taken some pills and they’ve posted on Facebook: ‘Goodbye world, that’s it, it’s the end’, we’re obviously not going to send them our usual supporting documentation – we need to go much faster.”

She adds that Facebook has “absolutely” saved people’s lives through swift intervention and has “equally as many good stories” as it does controversies, such as the row over its removal of breastfeeding pictures, which she describes as a “human mistake”. She says it is a myth that the more users that report something, the more likely it is to be removed. “One report is enough.”

Facebook has a clear code of conduct which users must respect, but there are always grey areas. Staff are also aware that a decision to remove something from the site – or leave it up – can be far-reaching, like a powerful court setting a global precedent. Under pressure, the multi-cultural team often has heated arguments.

“We don’t hire people to just press the same button X amount of times per hour,” says de Bailliencourt. “We hire people with very different backgrounds, and they sometimes disagree. It feels almost like the UN sometimes.”

Post deleted: Abuse reports on Facebook

Breastfeeding pictures

In 2011, Facebook deleted the page of a breastfeeding support group called The Leaky B@@b, informing its founder Jessica Martin-Weber that she had “violated our terms of use”. Facebook said later the deletion had been a “mistake” and the page was reinstated. The site says it generally tries to “respect people’s right to share content of personal importance”.

Inciting violence

A Facebook page calling for Palestinians to take to the streets in a violent uprising against Israel was removed in 2011. The page, entitled “Third Palestinian Intifada”, had already attracted the condemnation of the Israeli government, which said it was inciting violence against Jews. Facebook said while the page began as a call for “peaceful protest”, it had descended into “direct calls for violence or expressions of hate”.

Free speech

Facebook refused to remove a “fan” page dedicated to James Holmes, the man accused of shooting 12 people dead at a cinema in Colorado in 2012. Facebook said the page “while incredibly distasteful, doesn’t violate our terms” as it was within the boundaries of free speech.

Suicide prevention

In 2013, New York police intercepted a teenager who was on his way to jump off a bridge after he posted a message on Facebook. When officers were alerted to the post, they sent him a message on the site and handed out his photo to nearby patrols. He called the police station and was taken to hospital.
