Many forms of online harm are perceived, and remedies are sought from both platform providers and the state. These harms range from online bullying and intellectual property violations to incitement to, or facilitation of, violence.
Because platforms enable extremely rapid and wide dissemination of user-generated content (UGC), remedies obtained through court orders or administrative actions are often too slow to fully satisfy aggrieved parties and state authorities. The responsibility for remedial action to remove, or reduce the reach of, UGC perceived as causing harm therefore tends to fall on the platform providers.
Most platform providers have accordingly put in place various modalities of content moderation, ranging from algorithmic takedowns and de-prioritization, through moderation by human agents, to bans and suspensions of those deemed to be repeat offenders. Such moderation may take the form of purely "private regulation", or of soft or hard co-regulation whereby the state requires the platform providers to act in specified ways.
The state also engages in efforts to directly control UGC, usually through ex-post prosecution of content originators and disseminators deemed to have committed an offense set out in law. While the Virtual Dialogue sought to focus attention on content moderation and its regulation, it emerged in the course of the Dialogue that stakeholder positions on the regulation of content moderation practices were influenced by the actions or plans of state authorities with regard to direct control of UGC. Those state actions are therefore mentioned in this report where relevant.
This report was developed through an Expert Round Table discussion on "Online harms: Content moderation and models of regulation", held on the 27th of October 2022 as the third in a series of discussions under the theme of "Frontiers of Digital Economy".
The “Frontiers of Digital Economy” series is supported and sponsored by Meta.