https://en.gariwo.net/magazine/memory/countermeasures-moderation-and-its-limits-28060.html
Gariwo Magazine

Countermeasures: moderation and its limits

third part of the dossier "How to heal hate wounds"

HOW TO HEAL HATE WOUNDS, Dossier edited by Bianca Senatore

In this and three other articles, we present part of the dossier ‘How to heal hate wounds’, published by the Gariwo Foundation on the occasion of Holocaust Remembrance Day 2025.

---

INTRODUCTION

In recent times, it feels as though the world has become harsher and more hostile. Confrontation, nationalism, and polarisation have replaced dialogue, cooperation, and reason. As this shift takes hold, the very concept of democracy begins to falter, eroding the foundations of society as we have known it.

One of the signs of this societal illness is hatred, which has evolved from being a symptom of widespread discontent into a dangerous affliction in its own right, infecting more and more people. When the world becomes a source of fear, we retreat into comfortable bubbles, where everything reflects our own views and nothing seems out of place. But when that bubble begins to crack, those who are different—those who think differently—become targets to be mocked, attacked, insulted, and even destroyed through the weapon of hatred.

Hatred wounds and perpetuates itself, much like a flame that draws strength from oxygen, spreading harm to anyone in its path: women, migrants, Jews, the LGBTQIA+ community. With this dossier, Gariwo aims to examine the wounds caused by hatred and offer a clearer understanding of the situation. But it also seeks to provide a kind of antidote, so that together, we can begin the healing process that has become ever more urgent.

Every 27th of January, on International Holocaust Remembrance Day, we say “Never again.” With this work, the Gariwo Foundation seeks to reaffirm that to truly give meaning to that pledge, we must remain steadfast in our commitment to combating the disease of hatred. Understanding the origins of hatred, recognising its mechanisms, and healing its wounds are essential steps in ensuring that the atrocities of the past are never repeated.

---
The fight against hate speech is a complex and pressing challenge, especially when it comes to determining responsibility and implementing effective regulations. The issue has become even more urgent in recent years, particularly following Donald Trump’s election and the apparent alignment of Big Tech with his influence, including their prominent presence at his inauguration. At such a time, the stakes are higher, and there is a real risk that important issues will be sidelined. While existing legal frameworks offer some guidance, additional measures and approaches are needed to help us interact online in a more respectful, civilised and inclusive way.

One of the primary tools for addressing this issue in recent years has been content moderation, carried out by humans, algorithms and, more recently, artificial intelligence (AI). However, the moderation of online hate speech and misinformation has produced mixed results. “It is clear that fact-checking has not fully addressed the problem; we must acknowledge this”, says Federico Faloppa, Coordinator of the National Network for Combating Hate Speech and Hate Phenomena. “Our analysis of millions of pieces of content has shown that fact-checking often fails to resolve the issue and, in some cases, even exacerbates it, reinforcing polarisation and the creation of echo chambers – closed-off environments where only similar viewpoints are reinforced”.

Despite these shortcomings, Faloppa notes that fact-checking remains valuable. “Even with its limitations, it has proven essential for understanding certain events, curbing disinformation and reducing harmful content. In fact, when the most despicable comments are deleted and their author stops posting them, harmful chains of interaction can be disrupted, preventing further spread”.
While this is certainly a small drop in the vast ocean of online hate speech, the overall evidence suggests that online moderation has significant limitations, driven by numerous variables. Linguist Vera Gheno agrees with this assessment, pointing out that algorithms often censor subjects rather than actual hate content. “What is the point of censoring discussions about Palestine?” Gheno asks. “That is not hate speech; it is a hot-button topic that the platform simply wants to avoid. Moreover, every time I have reported hate content aimed at me, the platform has ruled it perfectly acceptable”. This highlights one of the major flaws in current moderation systems, especially when artificial intelligence plays a role.

“AI does not allow for fully free expression”, Gheno adds. “For example, if I say a vulgar word using voice-to-text software, the transcription might censor it with asterisks, and I no longer have control over how or where those changes are made”. Gheno also points to a troubling incident involving Gemini, Google’s AI, which was temporarily suspended after a user asked it to generate an image of German soldiers from 1943 and it produced a picture of a white soldier, a black soldier, and two Asian women. “While we may recognise these misrepresentations now”, she warns, “there is a real risk that future generations will lose sight of the truth, leading to a world full of false content whose origins we have forgotten”. Gheno is concerned that too little attention is being paid to the potential dangers of AI’s influence, although she acknowledges that, like any tool, AI can also be used positively.

AI moderation should always be paired with human fact-checking, but this is precisely where Mark Zuckerberg, CEO of Meta, has intervened. The founder of Facebook, along with other Silicon Valley billionaires, quickly aligned himself with the powers that be, and when those powers include Donald Trump, there is reason for concern. Zuckerberg’s first decision was to drastically reduce moderation in the name of free speech, at least in the United States. “With this move”, Faloppa points out, “Zuckerberg has chosen to remove the restrictions on sensitive topics that were previously in place, accusing fact-checkers of making ‘censorship errors’ due to ideological bias”. Zuckerberg, who has also sought the approval of Trump’s new ally Elon Musk, announced that the platform would work with the president to fight foreign governments that pressure American companies to censor more. “It is no coincidence that Meta is moving about 200 offices from California, a more liberal environment, to Texas, a much more conservative state. The goal is to adjust moderation to the political climate, promoting a form of free speech that pays less attention to discrimination and hate speech. This situation is certainly alarming”, says Faloppa.

The effects of this ongoing deregulation in the United States may not have disastrous consequences in Europe, at least for now. This is because the European Parliament has equipped itself with a specific tool: the Digital Services Act (DSA), which, together with the Digital Markets Act, is the European Union’s most important legislation on e-commerce, illicit and illegal content, online advertising transparency, and disinformation. As the National Network for Combating Hate Speech and Hate Phenomena explains:

With this law, which updates the e-Commerce Directive of 2000, the European Parliament has decided to regulate intermediaries and online platforms comprehensively. The approach aims to prevent, or at least limit, the spread of illegal, hate-filled or disinformation content, ensuring better protection and enforcement of users’ fundamental rights. The Digital Services Act bans, for example, the profiling of underage users for advertising purposes based on personal data such as ethnicity, political views, or sexual orientation. It also prohibits the use of so-called ‘dark patterns’ designed to manipulate users into making unwanted choices.

As Faloppa pointed out, “if platforms fail to comply with the regulation, they can face significant fines. And if, after multiple warnings, they still do not meet the requirements, their service can be suspended for a year – a huge blow for platforms that generate daily profits”. Some platforms argue that these regulations are too strict, while others claim they introduce unnecessary intermediaries. What does this mean? Essentially, the law requires that access to data be facilitated by parties that can provide the requested information, creating an independent intermediary system that ensures data circulation and accessibility. Many critics, however, believe this system is cumbersome and could interfere with monitoring efforts, making it harder to track hate speech.

In any case, the message to platforms is clear: collaborate. It is in your best interest to create environments where everyone feels safe and secure; that is the condition for operating in Europe. Unfortunately, some platforms are already under investigation for non-compliance. These rules are nevertheless crucial, because they address not only privacy and discrimination but also affect the communication systems of entire countries.

---


Bianca Senatore, Gariwo Editorial Staff

21 January 2025