No amount of “AI” in content moderation will solve filtering’s prior-restraint problem

@article{Llanso2020NoAO,
  title={No amount of “AI” in content moderation will solve filtering’s prior-restraint problem},
  author={Emma Llansó},
  journal={Big Data \& Society},
  year={2020},
  volume={7}
}
  • Emma Llansó
  • Published 1 January 2020
  • Political Science
  • Big Data & Society
Contemporary policy debates about managing the enormous volume of online content have taken a renewed focus on upload filtering, automated detection of potentially illegal content, and other “proactive measures”. Often, policymakers and tech industry players invoke artificial intelligence as the solution to complex challenges around online content, promising that AI is a scant few years away from resolving everything from hate speech to harassment to the spread of terrorist propaganda. Missing… 
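To make concrete what "upload filtering" and "proactive" automated detection mean in practice, the sketch below shows a minimal, hypothetical hash-matching filter that rejects an upload before it is ever published, which is the point at which the prior-restraint concern arises. The blocklist, function names, and exact-match logic are illustrative assumptions only, not a description of any platform's actual system.

import hashlib

# Hypothetical blocklist of SHA-256 digests of previously flagged files.
BLOCKED_HASHES = {
    # SHA-256 digest of the example payload b"foo"
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def is_blocked(upload_bytes: bytes) -> bool:
    # Exact hash matching; simpler than the perceptual hashing or classifiers
    # real systems use, but enough to show where the decision is made.
    return hashlib.sha256(upload_bytes).hexdigest() in BLOCKED_HASHES

def handle_upload(upload_bytes: bytes) -> str:
    # The filter acts before publication, with no prior human or judicial
    # review of context, intent, or legality; that structural feature is why
    # such filtering is framed as prior restraint.
    if is_blocked(upload_bytes):
        return "rejected before publication"
    return "published"

print(handle_upload(b"foo"))            # rejected before publication
print(handle_upload(b"some new post"))  # published

Note that nothing in this sketch depends on "AI": whatever classifier or model supplies the block decision, suppression still happens ahead of publication and review.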
The Limits of International Law in Content Moderation
In remarkably short order, there has been growing convergence around the idea that major social media platforms should use international human rights law (IHRL) as the basis for their content
"How advertiser-friendly is my video?": YouTuber's Socioeconomic Interactions with Algorithmic Content Moderation
TLDR
By analyzing video content creation as algorithmic labor, this work unpacks the socioeconomic implications of algorithmic moderation and points to necessary post-punishment support as a form of restorative justice.
Automated Platform Governance Through Visibility and Scale: On the Transformational Power of AutoModerator
When platforms use algorithms to moderate content, how should researchers understand the impact on moderators and users? Much of the existing literature on this question views moderation as a series
On Simulating the Propagation and Countermeasures of Hate Speech in Social Networks
Hate speech expresses prejudice and discrimination based on actual or perceived innate characteristics such as gender, race, religion, ethnicity, colour, national origin, disability or sexual
Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis
TLDR
The capabilities and limitations of tools for analyzing online multimedia content are explained and the potential risks of using these tools at scale without accounting for their limitations are highlighted.
“A Tale on Abuse and Its Detection over Online Platforms, Especially over Emails”: From the Context of Bangladesh
TLDR
There is significant demand for abuse detection systems on email platforms even though abuse occurs less frequently over email. Several limiting factors of human-moderator-based abuse detection are identified, including reduced comfort, lower trust in different types of moderators, inhumane demands placed on moderators, and delays in detecting abuse.
Content-Oblivious Trust and Safety Techniques: Results from a Survey of Online Service Providers
TLDR
It is demonstrated that the impact of end-to-end encryption (which, controversially, impedes outside access to user content) on abuse detection may vary by abuse type, and that content-dependent techniques do not constitute a silver bullet to protect users against abuse.
Artificially Intelligent and Inclusive by Design: A Human-Centered Approach to Online Safety
TLDR
Practical and ethical questions related to the inclusive design of AI-supported safety mechanisms in social media are discussed by proposing systems that are artificially intelligent and inclusive by design.
A corpus analysis of online news comments using the Appraisal framework
We present detailed analyses of the distribution of Appraisal categories (Martin and White, 2005) in a corpus of online news comments. The corpus consists of just over one thousand comments posted in

References

The Beginning of the End of Internet Freedom
Although the Internet was initially viewed as a medium for expression in which censorship would be impossible to implement, recent developments suggest exactly the opposite. Countries around the
The New Governors: The People, Rules, and Processes Governing Online Speech
Private online platforms have an increasingly essential role in free speech and participation in democratic culture. But while it might appear that any internet user can publish freely and instantly
Old School/New School Speech Regulation
In the early twenty-first century the digital infrastructure of communication has also become a central instrument for speech regulation and surveillance. The same forces that have democratized and