
Outsourced content moderation on Facebook and other platforms may have led to offline violence in Sri Lanka – NYU study

Image credit: Benjamin KRAFT/Flickr

ECONOMYNEXT – The largely outsourced content moderation operations of social media giants such as Facebook, coupled with a distinct lack of local expertise, have allowed incendiary content to spread online, possibly leading to offline violence in countries like Sri Lanka, a New York University (NYU) study has found.

The study, conducted by the NYU Stern Center for Business and Human Rights, also found that the “peripheral status” of moderators employed by Facebook and other companies has contributed to inadequate attention being paid to incendiary content spread in developing countries.

In a report issued earlier this week, the centre highlighted both anti-Muslim and anti-non-Muslim violence fed by Facebook content, including the riots in Digana in March 2018, which led to the deaths of at least three people, and last year’s Easter Sunday attacks, which killed 259 and injured at least 500.

“Local Sri Lankan activists informed Facebook of the building antagonism,” the report said, referring to developments that had led up to the violence in Digana, “but received the baffling response that the vitriolic content didn’t violate the company’s standards.”

Similar warnings about calls for violence against non-Muslims by Zahran Hashim, the alleged mastermind of the Easter bombings, had been made to Facebook by “moderate Muslims”, the report noted.

“Moderate Muslims had made concerted efforts to warn Facebook about Hashim’s incendiary Tamil-language posts, according to The Wall Street Journal, and Facebook removed much of the content before the carnage. But about 10 posts, including the exhortation to kill non-Muslim women and children, remained visible on the platform through the time of the attacks and for days afterward,” the report said.

In May this year, Facebook apologised for its role in the Digana violence. Following an internal probe, the company acknowledged that content on its social media platform may have led to the violence that targeted Muslims in Sri Lanka.

“We deplore the misuse of our platform,” Facebook told Bloomberg News. “We recognise, and apologise for, the very real human rights impacts that resulted.”

Article One, an ethics and human rights consultancy firm hired by Facebook to conduct its investigation, concluded in a Human Rights Impact Assessment (HRIA) issued in May that “the Facebook platform contributed to spreading rumors and hate speech, which may have led to ‘offline’ violence.”

According to Article One, 4.4 million of Sri Lanka’s 21 million people are active Facebook users and, despite warnings from civil society activists dating as far back as 2009, the platform had only two resource persons reviewing content in Sinhala. The company has since recruited more content moderators, including Sinhala speakers, according to statements attributed to Facebook.

The Article One HRIA further noted that Facebook’s lack of formal human rights due diligence in Sri Lanka prior to the investigation and the limited cultural and language expertise among Facebook staff may have contributed to offline harm stemming from online engagement.

“This was potentially exacerbated by now phased out algorithms designed to drive engagement on the platform, regardless of the veracity or intention of the content,” it added.

Meanwhile, the NYU report said that a Facebook spokesperson had declined to comment on Article One’s findings. However, in a written response to the HRIA, Facebook had noted that since 2018, it has hired policy and program managers to work full-time with local stakeholders in Sri Lanka.

The NYU report further said, quoting Facebook, that the company also “deployed proactive hate speech detection technology in Sinhala” and that “dozens” of additional Sinhala- and Tamil-speaking content moderators are now reviewing Sri Lankan content.

Despite Facebook’s assurances, CEO Mark Zuckerberg’s controversial calls for “more free speech, not less” have led to questions about the company’s sincerity in addressing the concerns raised.

Speaking to EconomyNext, Deputy Director of the NYU Stern Center for Business and Human Rights Paul Barrett, who authored the report, said that Facebook realises it hasn’t done enough.

“Based on conversations I’ve had with Facebook executives, I believe they realise the company hasn’t done enough to moderate content in Sri Lanka. They’ve taken some reasonable steps to correct the situation, but it isn’t clear whether those steps are sufficient. Time will tell,” he said.

The study makes the following recommendations to social media companies to address the concerns highlighted:

  • End the process of outsourcing content moderation and bring more moderators in-house.
  • Double the number of content moderators to increase capacity.
  • Hire a content moderation czar to oversee all content moderation operations.
  • Expand content moderation in at-risk countries where online-fueled violence is likely.
  • Provide moderators with better medical care and sponsor research into the health effects of content moderation.
  • Explore limited government regulation of content moderation, including a Facebook proposal for focusing on the prevalence of harmful content.
  • Do more to remove mis- and disinformation from social media platforms.

The NYU report also emphasised the psychological well-being of content moderators, noting that the third-party moderators working for social media companies tend to be marginalised as “second class citizens,” to say nothing of the mental toll of monitoring a steady stream of harmful content daily.

“Outsourced content reviewers receive inadequate mental health care. They also work in unruly, distracting environments. And as a result of the marginalisation of these reviewers, inadequate attention has been paid to content moderation in countries such as Sri Lanka,” said Barrett.

The study calls for increased in-house content moderation, which may prove a challenge given the enormity of Facebook’s user base and its vast global presence, with billions of users posting content in over a hundred languages every day. Asked to comment on this, Barrett told EconomyNext: “You have to start somewhere and seek incremental improvement, as opposed to surrendering to the enormity of the task.”

Of particular interest is a recommendation to explore limited government regulation of content moderation. Would this not pose a threat to free expression online, particularly in countries with authoritarian governments?

“That would be a danger in countries with authoritarian governments. In such countries, I would be very hesitant to encourage any government involvement with content regulation. But that doesn’t mean that democracies should not even consider potential regulation on a very limited basis,” said Barrett in response.

Asked what more the giant tech companies of the world, such as Facebook, Google and Twitter, can do to minimise harm, he said: “They can vigorously enforce the community standards that they have in place. They can hire more content moderators and give them the status and stature they deserve within the companies. They can also engage with far more fact-checkers to seek out misinformation.” (Colombo/Jun25/2020)

Click here for the NYU Stern report.
Click here for the Article One HRIA.
