Team DoNotDistribute at SemEval-2020 Task 11: Features, Finetuning, and Data Augmentation in Neural Models for Propaganda Detection in News Articles
@article{Kranzlein2020TeamDA,
  title={Team DoNotDistribute at SemEval-2020 Task 11: Features, Finetuning, and Data Augmentation in Neural Models for Propaganda Detection in News Articles},
  author={Michael Kranzlein and Shabnam Behzad and Nazli Goharian},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.09703}
}
This paper presents our systems for SemEval 2020 Shared Task 11: Detection of Propaganda Techniques in News Articles. We participate in both the span identification and technique classification subtasks and report on experiments using different BERT-based models along with handcrafted features. Our models perform well above the baselines for both tasks, and we contribute ablation studies and discussion of our results to dissect the effectiveness of different features and techniques with the…
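Concretely, the span identification subtask can be framed as token-level binary classification over BERT's contextual representations. The snippet below is a minimal sketch of that framing using the Hugging Face transformers library, not the authors' released code; the model name, two-label scheme, and example sentence are illustrative assumptions.

```python
# Minimal sketch: BERT token classification for propaganda span identification.
# Assumptions (not from the paper): bert-base-cased as the encoder and a
# two-label scheme (0 = outside a propaganda span, 1 = inside one).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=2
)

sentence = "They want to destroy everything we stand for."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, 2)

# After fine-tuning, tokens predicted as label 1 would be merged back into
# character-level spans for the official evaluation format.
labels = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label in zip(tokens, labels):
    print(token, label)
```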
One Citation
SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles
- Computer Science, SEMEVAL
- 2020
This paper presents the results and main findings of SemEval-2020 Task 11 on Detection of Propaganda Techniques in News Articles and discusses the system submissions and the methods they used.
References
Fine-Tuned Neural Models for Propaganda Detection at the Sentence and Fragment levels
- Computer Science, EMNLP
- 2019
This paper presents the CUNLP submission for the NLP4IF 2019 shared task on Fine-Grained Propaganda Detection, describing the authors' models, ablation studies, and experiments, along with an analysis of performance on all eighteen propaganda techniques present in the shared task corpus.
NSIT@NLP4IF-2019: Propaganda Detection from News Articles using Transfer Learning
- Computer Science, EMNLP
- 2019
The main contribution of the work is to evaluate the effectiveness of various transfer learning approaches like ELMo, BERT, and RoBERTa for propaganda detection.
Pretrained Ensemble Learning for Fine-Grained Propaganda Detection
- Computer Science, EMNLP
- 2019
This paper describes the team's effort on the sentence-level classification (SLC) subtask of fine-grained propaganda detection at the NLP4IF 2019 workshop, co-located with the EMNLP-IJCNLP 2019 conference.
CAUnLP at NLP4IF 2019 Shared Task: Context-Dependent BERT for Sentence-Level Propaganda Detection
- Computer Science, EMNLP
- 2019
This paper describes the team's participation in the sentence-level subtask of the propaganda detection shared task, building context-dependent input pairs to fine-tune pretrained BERT and using undersampling to address the imbalanced data.
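As a rough illustration of those two ideas, the sketch below pairs each sentence with its preceding sentence (BERT's two-segment input) and undersamples the majority class; the function names, data layout, and sampling ratio are assumptions, not the paper's code.

```python
import random

def make_context_pairs(sentences, labels):
    """Pair each sentence with the preceding one (empty context for the first)."""
    pairs = []
    for i, (sentence, label) in enumerate(zip(sentences, labels)):
        context = sentences[i - 1] if i > 0 else ""
        pairs.append(((context, sentence), label))
    return pairs

def undersample(pairs, majority_label=0, ratio=1.0, seed=13):
    """Keep all minority examples and at most ratio * |minority| majority ones."""
    rng = random.Random(seed)
    minority = [p for p in pairs if p[1] != majority_label]
    majority = [p for p in pairs if p[1] == majority_label]
    keep = rng.sample(majority, min(len(majority), int(ratio * len(minority))))
    balanced = minority + keep
    rng.shuffle(balanced)
    return balanced

sentences = ["The senator spoke.", "Only a fool would believe him.", "The vote is Tuesday."]
labels = [0, 1, 0]
train = undersample(make_context_pairs(sentences, labels))
# Each (context, sentence) pair would then go to BERT's tokenizer as a
# sentence pair, e.g. tokenizer(context, sentence, truncation=True).
```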
SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles
- Computer Science, SEMEVAL
- 2020
This paper presents the results and main findings of SemEval-2020 Task 11 on Detection of Propaganda Techniques in News Articles and discusses the system submissions and the methods they used.
Neural Architectures for Fine-Grained Propaganda Detection in News
- Computer Science, EMNLP
- 2019
This system uses multi-granularity and multi-tasking neural architectures to jointly perform both sentence- and fragment-level propaganda detection, and investigates different ensemble schemes, such as majority voting and relaxed voting, to boost overall system performance.
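For reference, the voting schemes named above reduce to simple aggregation over per-sentence model predictions. The sketch below shows hard majority voting and a relaxed (at-least-k-votes) variant under assumed binary labels; it illustrates the general idea, not the paper's implementation.

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Hard majority vote over equal-length lists of 0/1 predictions."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions_per_model)]

def relaxed_vote(predictions_per_model, k=1):
    """Predict 1 if at least k models do; looser, more recall-oriented."""
    return [int(sum(votes) >= k) for votes in zip(*predictions_per_model)]

model_a = [0, 1, 1, 0]
model_b = [0, 1, 0, 0]
model_c = [1, 1, 0, 0]
print(majority_vote([model_a, model_b, model_c]))  # [0, 1, 0, 0]
print(relaxed_vote([model_a, model_b, model_c]))   # [1, 1, 1, 0]
```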
JUSTDeep at NLP4IF 2019 Task 1: Propaganda Detection using Ensemble Deep Learning Models
- Computer Science, EMNLP
- 2019
This research paper presents an ensemble deep learning model using BiLSTM, XGBoost, and BERT to detect propaganda, showing a significant improvement over the baseline model.
Sentence-Level Propaganda Detection in News Articles with Transfer Learning and BERT-BiLSTM-Capsule Model
- Computer Science, EMNLP
- 2019
The proposed solution relies on a unified neural network consisting of several deep learning modules, namely BERT, BiLSTM, and Capsule, to solve the sentence-level propaganda classification problem, and takes a pre-training approach on a somewhat similar task (i.e., emotion classification), improving results over the cold-start model.
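A compressed sketch of how the BERT and BiLSTM pieces of such a pipeline could connect is shown below; the Capsule module is omitted for brevity, and the encoder name, hidden size, and pooling choice are assumptions rather than the authors' configuration. Pre-training on emotion classification would amount to fine-tuning this same model on the related task first.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMClassifier(nn.Module):
    """BERT encoder feeding a BiLSTM, with a linear sentence-level head."""
    def __init__(self, encoder="bert-base-cased", hidden=128, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        states = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(states)
        # Classify from the BiLSTM output at the [CLS] position.
        return self.head(lstm_out[:, 0, :])

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = BertBiLSTMClassifier()
batch = tokenizer(["Wake up before it is too late!"], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (1, 2)
```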
Understanding BERT performance in propaganda analysis
- Computer Science, EMNLP
- 2019
It is shown that despite high performance on the given test set, the system used in the shared task for sentence-level fine-grained propaganda analysis may tend to classify opinion pieces as propaganda and cannot distinguish quotations of propaganda speech from actual use of propaganda techniques.
Divisive Language and Propaganda Detection using Multi-head Attention Transformers with Deep Learning BERT-based Language Models for Binary Classification
- Computer Science, EMNLP
- 2019
On the NLP4IF 2019 sentence-level propaganda classification task, we used a BERT language model pre-trained on Wikipedia and BookCorpus as team ltuorp, ranking #1 of 26. It uses deep learning…