Corpus ID: 232478328

Mitigating Media Bias through Neutral Article Generation

@article{Lee2021MitigatingMB,
  title={Mitigating Media Bias through Neutral Article Generation},
  author={Nayeon Lee and Yejin Bang and Andrea Madotto and Pascale Fung},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.00336}
}
Media bias can lead to increased political polarization, and thus the need for automatic mitigation methods is growing. Existing mitigation work displays articles from multiple news outlets to provide diverse news coverage, but does not neutralize the bias inherent in each of the displayed articles. Therefore, we propose a new task, generating a single neutralized article from multiple biased articles, to facilitate more efficient access to balanced and unbiased information. In this paper… 
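As a rough illustration of the task setup (a minimal sketch, not the authors' actual method), the snippet below concatenates several biased articles on the same event and asks a generic pretrained seq2seq model to generate a single article; the model choice (facebook/bart-large-cnn), the delimiter scheme, and the toy inputs are assumptions for illustration only.

```python
# Sketch: multi-document input -> single generated article.
# The concatenation scheme and model are illustrative assumptions.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

# Hypothetical inputs: coverage of the same event from three outlets.
articles = [
    "Left-leaning outlet: The administration's reckless policy sparked outrage.",
    "Centrist outlet: The administration announced a new policy on Tuesday.",
    "Right-leaning outlet: The administration's bold policy drew praise.",
]
source = " </s> ".join(articles)  # separate documents with a delimiter token

inputs = tokenizer(source, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=256)
neutral_article = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(neutral_article)
```

In practice, a model built for this task would also need to suppress framing bias and avoid hallucinating content absent from the source articles, concerns reflected in the references below.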

Citations

The Battlefront of Combating Misinformation and Coping with Media Bias

This tutorial dives into the important research questions of how to develop a robust fake news detection system that not only fact-checks information pieces provable by background knowledge but also reasons about the consistency and reliability of subtle details for emerging events.

References


Exposure to opposing views on social media can increase political polarization

It is found that Republicans who followed a liberal Twitter bot became substantially more conservative post-treatment, whereas Democrats exhibited slight increases in liberal attitudes after following a conservative Twitter bot, although the latter effects were not statistically significant.

Shedding (a Thousand Points of) Light on Biased Language

This work considers the linguistic indicators of bias in political text, explores how different groups perceive bias in different blogs, and identifies lexical indicators strongly associated with perceived bias.

PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction

Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction: for example, a female character in a story is often portrayed as passive and powerless while a male character is portrayed as more proactive and powerful.

Linguistic Models for Analyzing and Detecting Biased Language

The analysis of real instances of human edits designed to remove bias from Wikipedia articles uncovers two classes of bias: framing bias, such as praising or perspective-specific words, which is linked to the literature on subjectivity; and epistemological bias, related to whether propositions that are presupposed or entailed in the text are uncontroversially accepted as true.

Tanbih: Get To Know What You Are Reading

Tanbih, a news aggregator with intelligent analysis tools that help readers understand what’s behind a news story, is introduced.

Team yeon-zi at SemEval-2019 Task 4: Hyperpartisan News Detection by De-noising Weakly-labeled Data

This paper focuses on removing the noise inherent in the hyperpartisanship dataset at both the data level and the model level by leveraging semi-supervised pseudo-labels and the state-of-the-art BERT model.
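A minimal sketch of the pseudo-label denoising recipe described above: train on a small clean set, then keep only weakly-labeled examples whose noisy label agrees with a confident model prediction. A logistic-regression stand-in replaces BERT here so the filtering loop stays visible; the toy data and the 0.9 threshold are illustrative assumptions.

```python
# Sketch: filter noisy publisher-level labels with a model trained on clean data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

clean_texts = ["balanced report on the vote", "screed attacking the senator"]
clean_labels = [0, 1]  # 0 = mainstream, 1 = hyperpartisan (article-level, clean)

weak_texts = ["another furious rant about the bill", "plain summary of the hearing"]
weak_labels = [1, 1]  # publisher-level labels: noisy, not always correct per article

vec = TfidfVectorizer().fit(clean_texts + weak_texts)
clf = LogisticRegression().fit(vec.transform(clean_texts), clean_labels)

probs = clf.predict_proba(vec.transform(weak_texts))
denoised = [
    (text, label)
    for text, label, p in zip(weak_texts, weak_labels, probs)
    if p[label] >= 0.9  # keep only confident agreement between model and noisy label
]
```

The surviving examples can then be added to the training set, and the loop repeated with a stronger model such as BERT.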

Being The New York Times: the Political Behaviour of a Newspaper

I analyse a dataset of news from The New York Times, from 1946 to 1997. Controlling for the activity of the incumbent president and the U.S. Congress across issues, I find that during presidential campaigns the newspaper gives more emphasis to topics owned by the Democratic party when the incumbent president is a Republican.

Truth or Error? Towards systematic analysis of factual errors in abstractive summaries

Comparative analysis reveals that two neural summarization systems leveraging pre-trained models have an advantage in reducing grammatical errors, but not necessarily factual errors.

Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization

A novel abstractive model is proposed which is conditioned on the article’s topics and based entirely on convolutional neural networks, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.

Detecting Hallucinated Content in Conditional Neural Sequence Generation

This work introduces a new task of predicting whether each token in the output sequence is hallucinated, conditioned on the source input, and a novel method for learning hallucination detection, based on pretrained language models fine-tuned on synthetic data that includes automatically inserted hallucinations.
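A minimal sketch of the token-level formulation described above: a pretrained encoder with a binary token-classification head labels each token as hallucinated (1) or supported (0). The model name and the source/output packing are assumptions for illustration; the head shown here is randomly initialized and would only produce meaningful tags after fine-tuning on synthetic data with inserted hallucinations.

```python
# Sketch: token-level hallucination tagging with a token-classification head.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForTokenClassification.from_pretrained("roberta-base", num_labels=2)

source = "The senate passed the bill on Tuesday."
output = "The senate unanimously passed the bill on Friday."

# Encode source and generated output as a sentence pair.
enc = tokenizer(source, output, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, 2)

tags = logits.argmax(-1)[0].tolist()  # 1 would mark a predicted hallucinated token
```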