• Corpus ID: 237420397

Increasing Adversarial Uncertainty to Scale Private Similarity Testing

Yiqing Hua, Armin Namavari, Kai-Wen Cheng, Mor Naaman, Thomas Ristenpart
Social media and other platforms rely on automated detection of abusive content to help combat disinformation, harassment, and abuse. One common approach is to check user content for similarity against a server-side database of problematic items. However, this method fundamentally endangers user privacy. Instead, we target client-side detection, notifying only the user when such a match occurs in order to warn them about abusive content. Our solution is based on privacy-preserving similarity testing…
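The server-side matching approach the abstract describes can be sketched as a threshold test on perceptual-hash distance. This is a minimal illustration, not the paper's protocol: the hash values, the 64-bit width, and the threshold of 8 bits are all assumptions chosen for the example.

```python
# Minimal sketch (hypothetical, not the paper's protocol): similarity testing
# via Hamming distance between fixed-length perceptual hashes. Two items count
# as "similar" when their hashes differ in at most `threshold` bit positions.

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hash values."""
    return bin(a ^ b).count("1")

def matches_database(item_hash: int, database: list[int], threshold: int = 8) -> bool:
    """Server-side style check: compare an item's hash against every known
    problematic hash. The privacy problem: the server learns `item_hash`
    for every item a user posts, matching or not."""
    return any(hamming_distance(item_hash, h) <= threshold for h in database)

db = [0xDEADBEEFCAFEBABE, 0x0123456789ABCDEF]
print(matches_database(0xDEADBEEFCAFEBABF, db))  # one bit flipped -> True
print(matches_database(0x0000000000000000, db))  # far from both -> False
```

The paper's client-side direction keeps this comparison from revealing `item_hash` to the server; the sketch above only shows the underlying similarity test being protected.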


How Different Groups Prioritize Ethical Values for Responsible AI

Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible artificial intelligence technologies.



Identifying Harmful Media in End-to-End Encrypted Communication: Efficient Private Membership Computation

This work explores the technical feasibility of privacy-preserving perceptual hash matching for E2EE services, formalizing the problem space and identifying fundamental limitations for protocols, and designs and evaluates interactive protocols that optionally protect the hash set and do not disclose matches to users.
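The non-private baseline that private membership computation aims to replace is a plain set lookup on hash values. The sketch below is illustrative only, not one of the protocols from this work; a cryptographic hash stands in for a real perceptual hash purely to make the example runnable.

```python
# Illustrative only: exact membership against a set of media hashes, the
# non-private baseline. Here the querier must reveal its hash to whoever
# holds the set -- exactly what the private protocols avoid.
import hashlib

def media_hash(image_bytes: bytes) -> str:
    # Stand-in for a real perceptual hash. A cryptographic hash is NOT
    # similarity-preserving; it only makes the example self-contained.
    return hashlib.sha256(image_bytes).hexdigest()

harmful_set = {media_hash(b"known-bad-media")}

def is_member(image_bytes: bytes) -> bool:
    return media_hash(image_bytes) in harmful_set

print(is_member(b"known-bad-media"))  # True
print(is_member(b"benign-photo"))     # False
```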

CrypTen: Secure Multi-Party Computation Meets Machine Learning

Secure multi-party computation (MPC) allows parties to perform computations on data while keeping that data private. This capability has great potential for machine-learning applications.
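The core MPC idea can be sketched with additive secret sharing over a ring, independent of CrypTen's actual API (the ring size and two-party setting below are assumptions for illustration): each value is split into random shares, parties compute on shares locally, and nothing is revealed until reconstruction.

```python
# Minimal sketch of additive secret sharing, a common MPC building block.
import random

MOD = 2**64  # the ring Z_{2^64}, a typical choice in MPC frameworks

def share(x: int, n_parties: int = 2) -> list[int]:
    """Split x into n additive shares that sum to x mod MOD. Each share
    alone is uniformly random and reveals nothing about x."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MOD

# Private addition: each party adds its two shares locally; only the
# final reconstructed sum is ever revealed.
a_shares, b_shares = share(20), share(22)
sum_shares = [(sa + sb) % MOD for sa, sb in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 42
```

Multiplication of shared values needs extra machinery (e.g. Beaver triples), which is part of what frameworks like CrypTen provide.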

Increasing adversarial uncertainty to scale private similarity testing

  • arXiv preprint arXiv:2109.01727, 2021

EMP-toolkit: Efficient Multi-Party computation toolkit

  • https://github.com/emp-toolkit, 2016

The Bayes Security Measure

This paper studies the Bayes security measure, which quantifies the expected advantage over random guessing of an adversary that observes the output of a mechanism, and shows that the minimizer of this measure gives a lower bound on the mechanism's security.
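The "advantage over random guessing" idea can be made concrete with the standard Bayes-vulnerability formulation; this is an assumption on my part, and the paper's exact definition and normalization may differ. An adversary sees the output of a mechanism described by a channel matrix C[x][y] = Pr[output y | secret x], and its advantage is the gain in probability of guessing the secret correctly.

```python
# Illustrative Bayes-vulnerability computation (standard formulation; the
# paper's exact measure may be normalized differently).

def prior_vulnerability(prior):
    """Best guessing probability with no observation: pick the likeliest secret."""
    return max(prior)

def posterior_vulnerability(prior, channel):
    """Expected best guessing probability after observing the output y."""
    n_outputs = len(channel[0])
    return sum(
        max(prior[x] * channel[x][y] for x in range(len(prior)))
        for y in range(n_outputs)
    )

prior = [0.5, 0.5]
leaky = [[0.9, 0.1], [0.1, 0.9]]   # output correlates with the secret
opaque = [[0.5, 0.5], [0.5, 0.5]]  # output independent of the secret

for channel in (leaky, opaque):
    advantage = posterior_vulnerability(prior, channel) - prior_vulnerability(prior)
    print(round(advantage, 3))  # 0.4 for leaky, 0.0 for opaque
```

A perfectly secure mechanism leaves the adversary no better off than its prior guess, i.e. zero advantage, as the `opaque` channel shows.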

Adapting Security Warnings to Counter Online Disinformation

This work adapts methods and results from the information security warning literature in order to design and evaluate effective disinformation warnings, and provides evidence that disinformation warnings can -- when designed well -- help users identify and avoid disinformation.

Towards Measuring Adversarial Twitter Interactions against Candidates in the US Midterm Elections

This study measures the adversarial interactions against candidates for the US House of Representatives during the run-up to the 2018 US general election, and develops a new technique for detecting tweets with toxic content that are directed at any specific candidate.

Characterizing Twitter Users Who Engage in Adversarial Interactions against Political Candidates

This paper characterizes users who adversarially interact with political figures on Twitter using mixed-method techniques and shows that, among moderately active users, adversarial activity is associated with decreased centrality in the social graph and increased attention to candidates from the opposing party.

Sub-Linear Privacy-Preserving Near-Neighbor Search

This paper provides the first such algorithm, called Secure Locality Sensitive Indexing (SLSI), which has sub-linear query time, handles honest-but-curious parties, and provides an information-theoretic bound for the privacy guarantees.
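The non-private building block behind locality-sensitive indexing can be sketched with random-hyperplane hashing for cosine similarity; the dimensions, bit count, and seed below are illustrative choices, not parameters from the paper.

```python
# Minimal locality-sensitive hashing sketch: each random hyperplane
# contributes one bit (which side of the plane the vector falls on), so
# similar vectors tend to share hash bits. Not the SLSI scheme itself.
import random

random.seed(1)
DIM, BITS = 8, 16
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh(vec):
    """BITS-bit signature: sign of the dot product with each hyperplane."""
    return tuple(
        int(sum(p * v for p, v in zip(plane, vec)) > 0) for plane in planes
    )

def ham(a, b):
    return sum(x != y for x, y in zip(a, b))

base = [random.gauss(0, 1) for _ in range(DIM)]
near = [v + 0.01 * random.gauss(0, 1) for v in base]  # tiny perturbation
far = [random.gauss(0, 1) for _ in range(DIM)]        # unrelated vector

# Nearby vectors usually collide on most (often all) bits; unrelated
# vectors usually differ on many.
print(ham(lsh(base), lsh(near)), ham(lsh(base), lsh(far)))
```

Sub-linear search then comes from bucketing vectors by signature and probing only the query's bucket; SLSI's contribution is doing this while keeping the query and index private.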