In a recent Communications column,9 Moshe Vardi asked how to regulate speech on social media platforms. Vardi reminds us that Facebook's 50,000 employees cannot moderate content at the scale of the platform's users and contributions without relying on algorithms. While the challenges are evident, algorithms alone may not be the solution. We are facing a large-scale, collective problem that requires not just technical solutionism, but in-depth consideration of some of society's most basic values. One such value, freedom of expression, requires thoughtful discussion about how to guarantee it in the digital world—including where its limits should lie, who defines those limits, and how to enforce them. This raises questions about technical approaches to content moderation, definitions of harmful content, and, more fundamentally, what we should expect from online public discourse.
Online social media platforms have become essential components of our societies as shapers of opinions. Both small regional forums and large social networks provide key infrastructure for public discourse in the digital public sphere.8 Some, such as Facebook, wield sizeable power, reaching more than two billion people and up to 90% of some countries' entire populations.a These networks exercise broad discretion in managing the discourse, for example, by selecting or prioritizing comments. Although some countries have enacted regulation, it is mostly limited to incentivizing platforms to remove illegal content and, more recently, to remove content in response to complaints.b Often, these laws do not directly prescribe content removal or how it should be done, but instead steer platforms through potential penalties for hosting illegal content.c Most regulation is negative; that is, there is little regulation that grants a right to be published on social media. Beyond such regulation, Facebook, for example, applies a single set of community guidelines to the entire planet.