Dossier: Discrimination by moderation

How to address gender and racial bias in content moderation

The way in which online platforms such as YouTube, Instagram, Twitter, Facebook or TikTok filter and moderate content can cause harm to marginalized groups. The algorithms used by social media networks to rate, manage and moderate posts reflect biases in our societies. Hateful content attacking gender or racial minorities remains online, whereas legitimate posts by those minorities are removed. Once content has been taken down, it is difficult to contest a platform’s decision. This has serious implications for freedom of expression.

Legislators are struggling to address discrimination and hate speech on online platforms. Amid the EU discussion on the draft Digital Services Act (DSA) and the US debate on reforming Section 230 of the US Communications Decency Act, the authors of these two reports examine the effects of current platform moderation practices on LGBTIQA+ communities and on Black women in the United States, and propose an intersectional perspective for future legislation.

Online event: Discrimination by moderation

June 21, 2021

Panel discussion with:

  • Christina Dinar, Deputy Director at the Centre for Internet and Human Rights, expert on anti-discrimination strategies online
  • Brandeis Marshall, Data scientist and Full Professor of Computer Science at Spelman College, education activist for Black women thriving in data and tech careers
  • Tiemo Wölken, Member of the European Parliament (Progressive Alliance of Socialists and Democrats), shadow rapporteur for the opinion of the Legal Affairs Committee on the Digital Services Act
  • Shakira Smith, Research Director at Salty Algorithmic Bias Collective, scholar-practitioner-advocate and dual-PhD candidate at Indiana University

Moderator: Jennifer Baker, independent journalist reporting on tech policy & digital rights