Corpus ID: 207767047

Unfairness towards subjective opinions in Machine Learning

@article{Balayn2019UnfairnessTS,
  title={Unfairness towards subjective opinions in Machine Learning},
  author={Agathe Balayn and Alessandro Bozzon and Zolt{\'a}n Szl{\'a}vik},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.02455}
}
Despite the high interest in Machine Learning (ML) in academia and industry, many issues related to the application of ML to real-life problems are yet to be addressed. Here we put forward one limitation which arises from a lack of adaptation of ML models and datasets to specific applications. We formalise a new notion of unfairness as exclusion of opinions. We propose ways to quantify this unfairness, and to aid understanding of its causes through visualisation. These insights into the functioning…
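The abstract is truncated above, so the paper's exact formalisation is not reproduced here. As a purely illustrative sketch (the names exclusion_rate and majority_label are hypothetical, not from the paper), one minimal way to quantify exclusion of opinions is the fraction of annotator judgements that an aggregated label discards:

```python
# Illustrative sketch only (not the paper's formalisation): measure how many
# annotator opinions a majority-vote aggregated label "excludes" per item.
from collections import Counter

def majority_label(labels):
    """Return the most frequent label among an item's annotations."""
    return Counter(labels).most_common(1)[0][0]

def exclusion_rate(annotations):
    """annotations: list of per-item label lists, e.g. [[1, 0, 1], [0, 0, 1]].
    Returns the mean fraction of annotations overruled by the majority label."""
    rates = []
    for labels in annotations:
        agg = majority_label(labels)
        rates.append(sum(l != agg for l in labels) / len(labels))
    return sum(rates) / len(rates)

if __name__ == "__main__":
    # Two items, three annotators each; the dissenting opinions are excluded.
    print(exclusion_rate([[1, 0, 1], [0, 0, 1]]))  # (1/3 + 1/3) / 2 = 0.333...
```

Under majority voting this rate is exactly the share of minority opinions dropped by aggregation, which is the kind of exclusion the abstract draws attention to.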

References

Showing 1–10 of 23 references
Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation
This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence.
Learning From Crowds
A probabilistic approach for supervised learning when multiple annotators provide (possibly noisy) labels but no absolute gold standard is available; experimental results indicate that the proposed method is superior to the commonly used majority-voting baseline.
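The contrast with majority voting that this summary draws can be made concrete. The sketch below is not Raykar et al.'s actual model; it only illustrates, on assumed toy data, the underlying idea of keeping soft labels instead of collapsing votes, here by training a scikit-learn classifier on probability-weighted examples:

```python
# Illustrative sketch only, not Raykar et al.'s model: keep each item's crowd
# votes as a soft label and train on probability-weighted duplicated examples,
# so minority opinions still influence the classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1], [0.4], [0.9]])                  # one toy feature per item
votes = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]])  # 3 annotators per item
p = votes.mean(axis=1)                               # soft label: P(y = 1)

# Duplicate each item once per label value, weighted by its probability.
X2 = np.vstack([X, X])
y2 = np.concatenate([np.ones(len(X)), np.zeros(len(X))])
w2 = np.concatenate([p, 1 - p])

clf = LogisticRegression().fit(X2, y2, sample_weight=w2)
print(clf.predict_proba(X)[:, 1].round(2))           # graded, not all-or-nothing
```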
On formalizing fairness in prediction with machine learning
This article surveys how fairness is formalized in the machine learning literature for the task of prediction, and presents these formalizations with their corresponding notions of distributive justice from the social sciences literature.
MicroTalk: Using Argumentation to Improve Crowdsourcing Accuracy
This paper presents a new quality-control workflow that requires some workers to justify their reasoning and asks others to reconsider their decisions after reading counter-arguments from workers with opposing views; it produces much higher accuracy than simpler voting approaches for a range of budgets.
Ex Machina: Personal Attacks Seen at Scale
A method that combines crowdsourcing and machine learning to analyze personal attacks at scale is developed and illustrated, and an evaluation method is introduced that measures a classifier by the aggregated number of crowd-workers it can approximate.
Crowd Truth: Harnessing disagreement in crowdsourcing a relation extraction gold standard
WebSci '13, May 2–4, 2013, Paris, France. One of the first steps in most web data analytics is creating a human-annotated…
Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets
Revolt eliminates the burden of creating detailed label guidelines by harnessing crowd disagreements to identify ambiguous concepts and create rich structures (groups of semantically related items) for post-hoc label decisions.
The Authority of "Fair" in Machine Learning
This paper argues for the adoption of a normative definition of fairness within the machine learning community, and suggests ways to incorporate a broader community and to generate further debate around how to decide what is fair in ML.
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
AI Fairness 360 (AIF360) is a new open-source Python toolkit for algorithmic fairness, released under an Apache v2.0 license, that aims to ease the transition of fairness research algorithms into industrial settings and to provide a common framework for fairness researchers to share and evaluate algorithms.
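As a usage illustration (based on AIF360's documented interface; exact signatures may differ across versions), a group-fairness metric can be computed on a toy dataset as follows:

```python
# Hedged sketch of typical AIF360 usage: compute group-fairness metrics on a
# small toy dataset; check the current AIF360 docs before relying on this.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 1, 0],   # protected attribute (0 = unprivileged)
    "score": [1, 0, 1, 1, 0, 0],   # binary outcome label
})
dataset = BinaryLabelDataset(
    df=df, label_names=["score"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
print(metric.disparate_impact())              # P(fav | unpriv) / P(fav | priv)
print(metric.statistical_parity_difference()) # difference of the same rates
```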
Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise
A probabilistic model is presented and shown to outperform the commonly used majority-vote heuristic for inferring image labels, while remaining robust to both noisy and adversarial labelers.
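The idea of weighting votes by inferred expertise can be sketched compactly. The code below is not Whitehill et al.'s GLAD model (which also models item difficulty); it is a simplified EM loop, with hypothetical names, that alternates between estimating annotator accuracy and re-weighting votes by the log-odds of that accuracy:

```python
# Simplified EM sketch (not the paper's exact model): jointly estimate
# per-annotator accuracy and per-item label posteriors from binary votes.
import numpy as np

def em_aggregate(L, n_iter=50):
    """L: (items x annotators) array of binary labels {0, 1}.
    Returns posterior P(true label = 1) per item and per-annotator accuracy."""
    p = L.mean(axis=1)                       # init posteriors from vote shares
    for _ in range(n_iter):
        # M-step: annotator accuracy = expected agreement with the true label
        acc = (p[:, None] * L + (1 - p)[:, None] * (1 - L)).mean(axis=0)
        acc = np.clip(acc, 1e-3, 1 - 1e-3)
        # E-step: re-weight each vote by the log-odds of its annotator's accuracy
        w = np.log(acc / (1 - acc))
        score = ((2 * L - 1) * w).sum(axis=1)
        p = 1.0 / (1.0 + np.exp(-score))
    return p, acc

votes = np.array([[1, 1, 0],
                  [0, 1, 0],
                  [1, 1, 1]])
posteriors, accuracy = em_aggregate(votes)
print(posteriors.round(2), accuracy.round(2))
```

An adversarial labeler ends up with accuracy below 0.5, so its log-odds weight turns negative and its votes count against the label it reports, which is what makes this family of models robust where plain majority voting is not.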