• Corpus ID: 246285405

Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis

@article{Shenkman2022DoYS,
  title={Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis},
  author={Carey Shenkman and Dhanaraj Thakur and Emma Llans{\'o}},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.11105}
}
The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve… 
Outside Looking In: Approaches to Content Moderation in End-to-End Encrypted Systems
TLDR
It is found that technical approaches for user-reporting and metadata analysis are the most likely to preserve privacy and security guarantees for end-users in end-to-end encrypted services.

References

SHOWING 1-10 OF 103 REFERENCES
Mixed Messages? The Limits of Automated Social Media Content Analysis
TLDR
Recommendations are made for NLP researchers to bridge the knowledge gap between technical experts and policymakers, including clearly describing the domain limitations of NLP tools and increasing the development of non-English training resources.
Algorithmic content moderation: Technical and political challenges in the automation of platform governance
TLDR
It is shown that even ‘well optimized’ moderation systems could exacerbate, rather than relieve, many existing problems with content policy as enacted by platforms, since these systems remain opaque, unaccountable, and poorly understood.
No amount of “AI” in content moderation will solve filtering’s prior-restraint problem
TLDR
It is crucial to recall why legal protections for speech have included presumptions against prior censorship, and consider carefully how proactive content moderation will fundamentally re-shape the relationship between rules, people, and their speech.
Metadata-Based Detection of Child Sexual Abuse Material
TLDR
The aim is to provide a tool that is material-type agnostic (image, video, PDF) and can potentially scan thousands of file storage systems in a short time; the approach achieves an accuracy of 97% and a recall of 94%.
A Highly Robust Audio Fingerprinting System With an Efficient Search Strategy
TLDR
An audio fingerprinting system uses the fingerprint of an unknown audio clip as a query against a fingerprint database containing the fingerprints of a large library of songs, allowing the audio clip to be identified.
Adult Content in Social Live Streaming Services: Characterizing Deviant Users and Relationships
TLDR
This work uses a pre-trained deep learning model to identify broadcasters of adult content in two widely used SLSS, namely Live.me and Loops Live, which have millions of users producing massive amounts of video content on a daily basis.
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
TLDR
It is found that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work.
The Death of Fair Use in Cyberspace: YouTube and the Problem With Content ID
YouTube has grown exponentially over the past several years. With that growth came unprecedented levels of copyright infringement by uploaders on the site, forcing YouTube’s parent company, Google… 
Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security
TLDR
The aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it.
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
TLDR
This work introduces AI Explainability 360, an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics, and provides a taxonomy to help entities requiring explanations to navigate the space of explanation methods.