Hate speech in Myanmar continues to thrive on Facebook
Years after coming under scrutiny for contributing to ethnic and religious violence in Myanmar, Facebook still has problems detecting and moderating hate speech and misinformation on its platform in the Southeast Asian nation, internal documents viewed by The Associated Press show.
Three years ago, the company commissioned a report that found Facebook was used to “foment division and incite offline violence” in the country. It pledged to do better and developed several tools and policies to deal with hate speech. But scrolling through Facebook today, it’s not hard to find posts threatening murder and rape in Myanmar.
The breaches have continued -- and even been exploited by hostile actors -- since the Feb. 1 military takeover this year that resulted in gruesome human rights abuses across the country.
One 2 1/2-minute video, posted Oct. 24, of a supporter of the military calling for violence against opposition groups has garnered over 56,000 views.
“So starting from now, we are the god of death for all (of them),” the man says in Burmese while looking into the camera. “Come tomorrow and let’s see if you are real men or gays.”
One account posts the home address of a military defector and a photo of his wife. Another post from Oct. 29 includes a photo of soldiers leading bound and blindfolded men down a dirt path. The Burmese caption reads, “Don’t catch them alive.”
Despite the ongoing issues, Facebook saw its operations in Myanmar as both a model to export around the world and an evolving and caustic case. Documents reviewed by AP show Myanmar became a testing ground for new content moderation technology, with the social media giant trialing ways to automate the detection of hate speech and misinformation with varying levels of success.
Facebook’s internal discussions on Myanmar were revealed in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.
Facebook has had a shorter but more volatile history in Myanmar than in most countries. After decades of censorship under military rule, Myanmar was connected to the internet in 2000. Shortly afterward, Facebook paired with telecom providers in the country, allowing customers to use the platform without needing to pay for the data, which was still expensive at the time. Use of the platform exploded. For many in Myanmar, Facebook became the internet itself.
Htaike Htaike Aung, a Myanmar internet policy advocate, said it also became “a hotbed for extremism” around 2013, coinciding with religious riots across Myanmar between Buddhists and Muslims. It’s unclear how much content moderation, if any, was happening at the time, whether by people or automation.
Htaike Htaike Aung said she met with Facebook that year and laid out issues, including how local organizations were seeing exponential amounts of hate speech on the platform and how its preventive mechanisms, such as reporting posts, didn’t work in the Myanmar context.
One example she cited was a photo of a pile of bamboo sticks that was posted with a caption reading, “Let us be prepared because there’s going to be a riot that is going to happen within the Muslim community.”
Htaike Htaike Aung said the photo was reported to Facebook, but the company didn’t take it down because it didn’t violate any of the company’s community standards.
“Which is ridiculous because it was actually calling for violence. But Facebook didn’t see it that way,” she said.
Years later, the lack of moderation caught the attention of the international community. In March 2018, United Nations human rights experts investigating attacks against Myanmar's Muslim Rohingya minority said Facebook had played a role in spreading hate speech.
When asked about Myanmar a month later during a U.S. Senate hearing, CEO Mark Zuckerberg replied that Facebook planned to hire “dozens” of Burmese speakers to moderate content, work with civil society groups to identify hate figures and develop new technologies to combat hate speech.
“Hate speech is very language specific. It’s hard to do it without people who speak the local language and we need to ramp up our effort there dramatically,” Zuckerberg said.
Internal Facebook documents show that while the company did step up efforts to combat hate speech in the country, the tools and strategies to do so never came to full fruition, and individuals within the company repeatedly sounded the alarm. In one document from May 2020, an employee said a hate speech text classifier that was available wasn’t being used or maintained. Another document from a month later said there were “significant gaps” in misinformation detection in Myanmar.
“Facebook took symbolic actions I think were designed to mollify policymakers that something was being done and didn’t need to look much deeper,” said Ronan Lee, a visiting scholar at Queen Mary University of London’s International State Crime Initiative.
In an emailed statement to the AP, Rafael Frankel, Facebook's director of policy for APAC Emerging Countries, said the platform “has built a dedicated team of over 100 Burmese speakers.” He declined to state exactly how many were employed. Online marketing company NapoleonCat estimates there are about 28.7 million Facebook users in Myanmar.
During her testimony to the European Union Parliament on Nov. 8, Haugen, the whistleblower, criticized Facebook for a lack of investment in third-party fact-checking, and relying instead on automatic systems to detect harmful content.
“If you focus on these automatic systems, they will not work for the most ethnically diverse places in the world, with linguistically diverse places in the world, which are often the most fragile,” she said while referring to Myanmar.
After Zuckerberg’s 2018 congressional testimony, Facebook developed digital tools to combat hate speech and misinformation and also created a new internal framework to manage crises like Myanmar around the world.
Facebook crafted a list of “at-risk countries” with ranked tiers for a “critical countries team” to focus its energy on, and also rated languages needing more content moderation. Myanmar was listed as a “Tier 1” at-risk country, with Burmese deemed a “priority language” alongside Ethiopian languages, Bengali, Arabic and Urdu.
Facebook engineers taught Burmese slang words for “Muslims” and “Rohingya” to its automated systems. They also trained systems to detect “coordinated inauthentic behavior” such as a single person posting from multiple accounts, or coordination between different accounts to post the same content.
The company also tried “repeat offender demotion,” which lessened the impact of posts from users who frequently violated guidelines. In a test in two of the world’s most volatile countries, demotion worked well in Ethiopia but poorly in Myanmar -- a difference that flummoxed engineers, according to a 2020 report included in the documents.
“We aren’t sure why … but this information provides a starting point for further analysis and user research,” the report said. Facebook declined to comment on the record about whether the problem had been fixed in the year since its detection, or about the success of the two tools in Myanmar.
The company also deployed a new tool to reduce the virality of content, called “reshare depth promotion,” that boosts content shared by direct contacts, according to an internal 2020 report. This method is “content-agnostic” and cut viral inflammatory prevalence by 25% and photo misinformation by 48.5%, it said.
Slur detection and demotion were judged effective enough that staffers shared the experience in Myanmar as part of a “playbook” for acting in other at-risk countries such as Ethiopia, Syria, Yemen, Pakistan, India, Russia, the Philippines and Egypt.
While these new methods forged in Myanmar’s civil crises were deployed around the world, documents show that by June 2020 Facebook knew that flaws persisted in its Myanmar safety work.
“We found significant gaps in our coverage (especially in Myanmar and Ethiopia), showcasing that our current signals may be inadequate,” said an internal audit of the company’s “integrity coverage.” Myanmar was color-coded red with less than 55% coverage: worse than Syria but better than Ethiopia.
Haugen criticized the company’s internal policy of acting “only once a crisis has begun.”
Facebook “slows the platform down instead of watching as the temperature gets hotter, and making the platform safer as that happens,” she said during testimony to Britain’s Parliament on Oct. 25.
Frankel, the Facebook spokesperson, said the company has been proactive.
“Facebook’s approach in Myanmar today is fundamentally different from what it was in 2017, and allegations that we have not invested in safety and security in the country are wrong,” Frankel said.
Yet a September 2021 report by the Myanmar Social Media Insights Project found that posts on Facebook include coordinated targeting of activists, ethnic minorities and journalists -- a tactic that has roots in the military’s history. The report also said the military is laundering its propaganda through public pages that claim to be media outlets.
Opposition and pro-military groups have used the encrypted messaging app Telegram to organize two types of propaganda campaigns on Facebook and Twitter, according to an October report shared with the AP by Myanmar Witness, a U.K.-based organization that archives social media posts related to the conflict.
Myanmar is a “highly contested information environment,” where users working in concert overload Facebook’s reporting system to take down others’ posts, and also spread coordinated misinformation and hate speech, the report said.
In one example, the coordinated networks took video shot in Mexico in 2018 by the Sinaloa cartel of butchered bodies and falsely labeled it as evidence of the opposition killing Myanmar soldiers on June 28, 2021, said Benjamin Strick, director of investigations for Myanmar Witness.
“There’s a difficulty in catching it for some of these platforms that are so big and perhaps the teams to look for it are so small that it’s very hard to catch water when it’s coming out of a fire hydrant,” he said.
The organization also traced the digital footprint of one soldier at the incineration of 160 homes in the village of Thantlang in late October. He posed in body armor on a ledge overlooking burning homes, with a post blaming opposition forces for the destruction in a litany of violent speech.
Facebook "conducted human rights due diligence to understand and address the risks in Myanmar,” and banned the military and used technology to reduce the amount of violating content, spokesperson Frankel said.
Yet Myanmar digital rights activists and scholars say Facebook could still take steps to improve, including greater openness about its policies for content moderation, demotion and removal, and acknowledging its responsibilities toward the Myanmar people.
“We need to start examining damage that has been done to our communities by platforms like Facebook. They portray that they are a virtual platform, and thus can have lower regulation,” said Lee, the visiting scholar. “The fact is that there are real-world consequences.”