FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization

@article{Mobasher2020FairUMAP2T,
  title={FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization},
  author={Bamshad Mobasher and Styliani Kleanthous and Michael D. Ekstrand and Bettina Berendt and Jahna Otterbacher and Avital Shulner Tal},
  journal={Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization},
  year={2020}
}
The 3rd FairUMAP workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness and transparency in algorithmic systems on the other hand. 

Open Player Modeling: Empowering Players through Data Transparency

This paper defines the design space of Open Player Models, presents open problems that the games research community can explore, and discusses the potential value of this approach.

Ethics of AI in Education: Towards a Community-Wide Framework

While Artificial Intelligence in Education (AIED) research has at its core the desire to support student learning, experience from other AI domains suggests that such ethical intentions are not by themselves sufficient, motivating a community-wide framework for the ethics of AIED.

Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy

This paper argues that fair privacy and need-to-know are desirable properties of a decision system, shows that for an optimal classifier these properties are, together with accuracy, in general incompatible, and identifies the common properties of data that make them incompatible.
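The claimed tension can be illustrated with a toy experiment. The sketch below is not the paper's formal result: it merely shows that when the label genuinely depends on a sensitive attribute, even the best classifier honoring need-to-know (one that never reads that attribute) pays an accuracy cost. The data-generating process and all parameters are assumptions for illustration only.

```python
# Toy illustration (not the paper's proof): need-to-know vs. accuracy.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical synthetic data: one legitimate feature x, one sensitive
# attribute s, and a label that truly depends on both.
x = rng.normal(size=n)
s = rng.integers(0, 2, size=n)  # sensitive attribute (e.g., group membership)
y = ((x + 1.5 * s + rng.normal(scale=0.5, size=n)) > 0.75).astype(int)

# Bayes-optimal classifier that may read s (violates need-to-know):
# since the noise is mean-zero and symmetric, threshold the clean signal.
pred_full = ((x + 1.5 * s) > 0.75).astype(int)
print("accuracy using s:     ", (pred_full == y).mean())

# Best classifier restricted to x alone (honors need-to-know): sweep
# thresholds and keep the most accurate one -- it still falls short,
# because the decision boundary shifts by 1.5 between the two groups.
thresholds = np.linspace(-2, 2, 401)
best_fair = max(((x > t) == y).mean() for t in thresholds)
print("best accuracy without s:", best_fair)
```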

Designing Recommender Systems for the Common Good

To address the challenges of personalized public services, it is claimed that two best practices from the design of digital public services should be adopted: participatory design and open data.

How YouTube Leads Privacy-Seeking Users Away from Reliable Information

It is confirmed that YouTube's recommendations generally "lead away" from reliable information sources, with a tendency to direct users over time toward video channels that expose them to extreme and unscientific viewpoints, and it is shown that there is a fundamental tension between user privacy and extreme recommendations.

Mitigating Demographic Bias in AI-based Resume Filtering

A simple technique, called fair-tf-idf, is developed to match resumes to job descriptions fairly by mitigating socio-linguistic bias in resume-to-job-description matching algorithms.
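The summary names the technique but not its formula, so the sketch below shows only one plausible reading, an assumption rather than the paper's exact method: compute ordinary tf-idf vectors, then down-weight terms whose usage rates differ across demographic groups before cosine-matching. The toy resumes, group labels, and weighting scheme are all hypothetical.

```python
# A minimal sketch of one plausible reading of "fair-tf-idf"; the exact
# weighting in the paper may differ. Corpora and groups are placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resumes = ["managed team softball coach", "led team chess club captain"]
groups = np.array([0, 1])          # hypothetical demographic group per resume
job = ["seeking candidate who led a team"]

vec = TfidfVectorizer()
R = vec.fit_transform(resumes).toarray()
J = vec.transform(job).toarray()

# Per-term fairness weight: ratio of the term's usage rate in the
# group that uses it least to the group that uses it most
# (1.0 = group-neutral term, near 0 = strongly group-linked term).
rates = np.vstack([(R[groups == g] > 0).mean(axis=0) for g in (0, 1)])
eps = 1e-9
fair_w = (rates.min(axis=0) + eps) / (rates.max(axis=0) + eps)

# Down-weight group-linked terms, then match with cosine similarity.
scores = cosine_similarity(R * fair_w, J * fair_w).ravel()
print(dict(zip(["resume_0", "resume_1"], scores.round(3))))
```

With this weighting, group-linked terms like "softball" or "chess" contribute little to the match score, while shared vocabulary like "team" and "led" dominates.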

Emotion-based Stereotypes in Image Analysis Services

Evidence is documented that cognitive services (CogS) may actually be more likely than crowdworkers to perpetuate the stereotype of the "angry black man," often attributing "emotions of hostility" to Black individuals.
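A disparity of this kind is typically quantified by comparing label rates across groups. The sketch below shows one such check; it is not the paper's protocol, and the counts are explicitly hypothetical placeholders, not the paper's results.

```python
# A minimal audit sketch (not the paper's protocol): test whether an
# image-analysis service assigns "anger" at different rates to two groups.
from scipy.stats import chi2_contingency

# Rows: group A, group B; columns: labeled "anger", labeled otherwise.
# These counts are hypothetical placeholders for illustration only.
counts = [[34, 166],
          [12, 188]]

chi2, p, dof, _ = chi2_contingency(counts)
rate_a = counts[0][0] / sum(counts[0])
rate_b = counts[1][0] / sum(counts[1])
print(f"anger rate A={rate_a:.2%}, B={rate_b:.2%}, chi2={chi2:.2f}, p={p:.4f}")
```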