By Maxime Bérubé, SSHRC Postdoctoral Fellow, SOMEONE Project and UNESCO-PREV Chair
Since the beginning of the 21st century, societies and extremist movements worldwide have invested heavily in digital spaces. Some extremists take advantage of the democratization of the Internet and the advent of social networks to promote terrorist organizations and violence. To undermine their efforts, a growing number of state leaders and technology companies have joined a recent initiative known as the “Christchurch Call.” This initiative, launched by the French and New Zealand governments, proposes global means to combat the promotion of, and incitement to, hatred and violence in the digital context. As laudable as this approach may be, it nonetheless poses many legal, ethical and technical challenges.
Although many States have significantly strengthened their legislative frameworks to combat violent extremism in recent years, these adjustments vary from country to country and are not applied uniformly. In Canada, unlike France for example, a judge must approve the deletion of digital content deemed illegal. Because the issues related to the dissemination of this online material transcend borders, it becomes necessary to maintain consistency across legislative frameworks, allowing for a uniform application of the established procedures. In the same vein, standardization must also be applied to the conditions of use of the different platforms where the problematic content is distributed. Otherwise, distribution practices may simply adapt, leaving only a partial and fragmented solution. Additionally, the United States, an essential player in this fight, has not joined the call.
Ethical and Technical Limits
Even if global measures can be applied to quickly remove undesirable content from the digital realm, certain technological limits are important to consider. Firstly, collaboration with countless players in the web industry is essential, as only they can control the activities on their platforms. Although the largest and wealthiest web “giants” seem to have followed suit, technology companies with insufficient resources for this purpose, which are also popular with violent extremist movements, will still require incentives. Moreover, current technologies prevent immediate deletion. Content is usually removed within 24 hours or less, but often after numerous users have already viewed it. Logistical and technical delays inevitably affect such practices, because deletion by computerized systems still requires human supervision to avoid removing content that should not be censored, thereby respecting freedom of expression. In recent years, several thousand web pages and social media accounts have been shut down or closed. While this may reduce the quantity and reach of extremist content or terrorist propaganda in the digital realm, the very nature of the Internet makes it highly unlikely that all controversial content can ever be removed.
Finally, without wanting to sound pessimistic, this call appears to face significant obstacles that require careful thought. Given the limitations outlined above, this repressive approach also lacks unanimous support. To remedy this situation—without undermining the principles of freedom of expression—a more educational approach would be advantageous. Rather than trying to eliminate content from the digital realm or quickly censoring statements, redirecting this initiative towards a globalized strategy of social education—focused primarily on the development of critical thinking and digital literacy—would provide concrete long-term results and widespread societal benefits.