Platforms can empower groups that have previously been silenced. However, platforms also host hateful and illegal content, often targeted at minorities, and algorithmically biased moderation systems are prone to unfairly censoring legitimate speech. This report analyzes the current content moderation landscape, bringing to light its negative effects on the LGBTIQA+ community in particular, and provides policy recommendations for the forthcoming negotiations on the EU Digital Services Act.
Existing content moderation practices, both algorithmically driven and human-determined, are rooted in white colonialist culture. Black women’s opinions, experiences, and expertise are suppressed, and their online communication streams are swiftly and silently removed. This paper explores algorithmic misogynoir in content moderation and makes the case for regularly examining the impact of content moderation tactics on Black women and other minoritized communities.
On the EU’s periphery, Serbia has deployed biometric surveillance technology from China’s Huawei, for law enforcement and “Safe City” purposes, extensively enough to cover practically all of Belgrade’s public spaces. Public pressure has so far raised the bar for switching the technology on, but this alarming project illustrates the need for transparent regulation of such systems everywhere, to protect fundamental human rights.
Trade agreements have become an important battleground on which tech companies fight the regulatory pressure they are finally facing in the Global North. But allowing these companies to capture digital trade talks in order to defang domestic regulation creates serious risks for privacy, fundamental rights, competition, social and economic justice, and sustainable development.