OPINION: Meta and hate speech in India

Deborah Brown and Jayshree Bajoria are senior researchers at Human Rights Watch

Last week, Meta, formerly Facebook, released its first human rights report, which included some snippets from the pending Human Rights Impact Assessment on India. But this was not a preview of the full assessment. Rather, Meta told Human Rights Watch that it does not “have plans to publish anything further on the India HRIA,” an abdication of its commitment to transparency and due diligence.

The India assessment, which the company commissioned in 2019, was meant to independently evaluate Meta’s role in spreading hate speech and incitement to violence on its services in India, following criticism of the company by civil society groups. What was published last week gets us no closer to understanding Meta’s responsibility for the spread of harmful content in India, or its commitment to addressing it. Instead, it deflects blame, focusing on the role of third parties and end users.

This demonstrates a continued disregard for the serious human rights concerns that civil society groups have been raising for years, and that prompted the assessment in the first place. Last week’s report does not include the recommendations made to Meta, or a meaningful analysis of Meta’s content moderation practices and amplification systems. Nor does it include any commitment by Meta to change its policies or practices in response. It indicates only that the assessment “noted the potential for Meta’s platforms to be connected to salient human rights risks caused by third parties” and states that Meta “faced criticism and potential reputational risks related to risks of hateful or discriminatory speech by end users.”

While some users of Facebook in India and elsewhere undoubtedly use the platform to spread harmful content, it is still Meta’s platform. It is crucial for the company to candidly examine and disclose how its own actions and inactions contribute to human rights abuses, including the role of its algorithms in amplifying incitement to violence and hate speech, and its lack of adequate investment in content moderation in India. It is well documented that Meta knows its core mechanisms, such as virality, recommendations, and optimising for engagement, are significant reasons why hate speech, misinformation and divisive political speech flourish on the platform.

Disclosures by two whistleblowers, Frances Haugen and Sophie Zhang, including internal Facebook documents, indicate that the company was aware of fake accounts used to manipulate public discourse, as well as hate speech and content inciting violence against minorities, on its services in India, but failed to take adequate action. Facebook also failed to apply its own hate-speech rules to individuals and groups belonging to or affiliated with the ruling Bharatiya Janata Party, even when they were flagged internally.

Meta said that the India assessment did not assess or reach conclusions about whether bias in content moderation existed, a key allegation repeatedly made by civil society groups and widely reported. Meta has not explained why such a critical issue was not examined.

Following reports that Meta may have narrowed the scope of the assessment or watered down its findings, more than 20 rights groups, including Human Rights Watch, wrote to Meta’s head of human rights urging the company to release the complete India assessment without delay.

Meta’s refusal to release a complete and unredacted version of the India assessment is not in line with the United Nations Guiding Principles on Business and Human Rights, which make clear that companies have a responsibility to provide the public with enough information to assess whether they are addressing their human rights impacts. It further erodes trust with Indian civil society, and feeds the perception that Meta’s human rights impact assessments are designed to deflect criticism and blunt civil society’s efforts to hold platforms like Meta accountable.