Corpus ID: 231693319

Deepfakes and the 2020 US elections: what (did not) happen

  João Paulo Meneses
Alarmed by the volume of disinformation assumed to have circulated during the 2016 US elections, scholars, politicians, and journalists predicted the worst when the first deepfakes began to emerge in 2018. Yet the 2020 US elections were considered the most secure in American history. This paper seeks explanations for this apparent contradiction: we believe that it was precisely the multiplication and conjugation of different types of warnings and fears that created the conditions… 


An Exploratory Study on Disinformation and Fake News Associated with the U.S. 2020 Presidential Election
With the advent of social media, the spread of misinformation is undeniable. Disinformation, and more specifically fake news, are types of misinformation devised to mislead, deceive, or
Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security
The aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it.
Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes
Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied…
A Study on Combating Emerging Threat of Deepfake Weaponization
  • R. Katarya, Anushka Lal
  • Computer Science
    2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)
  • 2020
SSTNet is presented as the best model to date, using spatial, temporal, and steganalysis features to detect deepfakes; the threat posed by document and signature forgery is also highlighted.
Towards Generalizable Deepfake Detection with Locality-aware AutoEncoder
Locality-Aware AutoEncoder (LAE) is proposed to bridge the generalization gap in deepfake detection: a pixel-wise mask regularizes the local interpretation of LAE, forcing the model to learn intrinsic representations from the forgery region rather than capturing artifacts in the training set and learning superficial correlations.
Is it real? A study on detecting deepfake videos
We present an exploratory study of how people detect deepfake videos. Through watching a set of real and fake videos, and semi-structured interviews, participants identified a set of characteristics
Deepfakes and the 2020 United States election: missing in action?
  • 2020
'Deepfakes: A Grounded Threat Assessment', 2020/07, Center for Security and Emerging Technology [https://cset.georgetown.edu/research/deepfakes-a-grounded-threatassessment]
  • 2020
Deepfakes may not have upended the 2020 U.S. election, but their day is coming
  • 2020