Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency

@article{Yee2021ImageCO,
  title={Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency},
  author={Kyra Yee and Uthaipon Tao Tantipongpipat and Shubhanshu Mishra},
  journal={Proceedings of the ACM on Human-Computer Interaction},
  year={2021},
  volume={5},
  pages={1--24}
}
Twitter uses machine learning to crop images, centering each crop on the part of the image predicted to be most salient. In fall 2020, Twitter users raised concerns that the automated image cropping system favored light-skinned over dark-skinned individuals, and that it favored cropping women's bodies over their heads. To address these concerns, we conduct an extensive analysis using formalized group fairness metrics. We find systematic… 
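
Two mechanisms in this abstract are concrete enough to sketch in code: a crop centered on the single most salient point, and a group fairness metric that compares favorable-outcome rates across groups. The sketch below is illustrative only; crop_around_most_salient_point and demographic_parity_gap are hypothetical names, not Twitter's implementation, and the argmax step is where small, systematic score differences can flip entire crops.

```python
import numpy as np

def crop_around_most_salient_point(image, saliency_map, crop_h, crop_w):
    """Center a fixed-size crop on the argmax of the saliency map.

    The argmax is the fragile step: two regions with nearly equal
    predicted saliency yield completely different crops depending on
    which one wins, so small systematic score differences between
    groups can be amplified into systematic cropping disparities.
    """
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    h, w = image.shape[:2]
    top = int(np.clip(y - crop_h // 2, 0, max(h - crop_h, 0)))
    left = int(np.clip(x - crop_w // 2, 0, max(w - crop_w, 0)))
    return image[top:top + crop_h, left:left + crop_w]

def demographic_parity_gap(favorable, groups):
    """Difference in favorable-outcome rates between two groups,
    e.g. the rate at which each group's subject survives the crop."""
    favorable, groups = np.asarray(favorable), np.asarray(groups)
    a, b = np.unique(groups)  # sketch assumes exactly two groups
    return favorable[groups == a].mean() - favorable[groups == b].mean()
```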

Citations

Causal Inference Struggles with Agency on Online Platforms
TLDR
Four large-scale within-study comparisons on Twitter are conducted to assess the effectiveness of observational studies derived from user self-selection on online platforms, suggesting that such observational studies are a poor alternative to randomized experimentation.
Ethical and social risks of harm from Language Models
TLDR
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs) by analyzing a wide range of established and anticipated risks, drawing on multidisciplinary literature from computer science, linguistics, and social sciences.
Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies
“Algorithmic audits” have been embraced as tools to investigate the functioning and consequences of sociotechnical systems. Though the term is used somewhat loosely in the algorithmic context and…

References

Showing 1-10 of 115 references
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
TLDR
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
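
To make the review's contrast concrete, here is a minimal sketch, with hypothetical function names, of the two policies it compares: a single risk threshold applied to everyone versus group-specific thresholds of the kind some formal parity constraints induce.

```python
import numpy as np

def uniform_threshold_decisions(risk_scores, threshold=0.5):
    """One threshold for everyone: people with the same estimated
    risk receive the same decision, regardless of group."""
    return np.asarray(risk_scores) >= threshold

def group_threshold_decisions(risk_scores, groups, thresholds):
    """Group-specific thresholds, as some parity constraints require.
    Two people with identical risk scores can now receive different
    decisions, which is the tension the review highlights."""
    risk_scores = np.asarray(risk_scores)
    cutoffs = np.array([thresholds[g] for g in groups])
    return risk_scores >= cutoffs
```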
Racial categories in machine learning
TLDR
By preceding group fairness interventions with unsupervised learning to dynamically detect patterns of segregation, machine learning systems can mitigate social segregation and stratification, the root causes of social disparities, without further anchoring status categories of disadvantage.
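
One plausible reading of "unsupervised learning to detect patterns of segregation" is sketched below, assuming scikit-learn's KMeans; detect_segregation is a hypothetical helper, not the paper's procedure. It clusters individuals by observed features, then flags clusters whose group composition is heavily skewed.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_segregation(features, group_labels, n_clusters=5, seed=0):
    """Cluster individuals by observed features, then measure how
    skewed each cluster's group composition is. Strongly skewed
    clusters suggest patterns of segregation that a downstream
    fairness intervention could target directly."""
    clusters = KMeans(n_clusters=n_clusters, random_state=seed,
                      n_init=10).fit_predict(features)
    groups = np.asarray(group_labels)
    skews = []
    for c in range(n_clusters):
        members = groups[clusters == c]
        if members.size == 0:
            continue
        _, counts = np.unique(members, return_counts=True)
        skews.append(counts.max() / members.size)  # majority-group share
    return clusters, float(np.mean(skews))
```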
CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research
TLDR
This work records the eye movements of 120 observers as they freely viewed a large number of naturalistic and artificial images, producing a dataset that opens new challenges for the next generation of saliency models and supports behavioral studies of bottom-up visual attention.
Saliency Prediction in the Deep Learning Era: Successes, Limitations, and Future Challenges
TLDR
A large number of image and video saliency models are reviewed and compared over two image benchmarks and two large-scale video datasets, and factors that contribute to the gap between models and humans are identified.
Quantitative Analysis of Automatic Image Cropping Algorithms: A Dataset and Comparative Study
TLDR
This work conducts an extensive study of traditional approaches as well as ranking-based croppers trained on various image features, and presents a new dataset of high-quality cropping and pairwise ranking annotations for evaluating the performance of various baselines.
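
A ranking-based cropper of the kind this study evaluates can be trained with a pairwise objective over the dataset's ranking annotations. The sketch below uses a generic hinge loss; the function names are hypothetical and the paper's exact objective may differ.

```python
import numpy as np

def pairwise_hinge_loss(score_preferred, score_rejected, margin=1.0):
    """Hinge loss over an annotated pair: the crop the annotators
    preferred should outscore the rejected crop by at least `margin`.
    Summing this over all annotated pairs trains the crop scorer."""
    return np.maximum(0.0, margin - (score_preferred - score_rejected))

def best_crop(candidate_windows, score_fn):
    """At inference time, rank candidate crop windows with the
    learned scorer and keep the highest-scoring one."""
    return max(candidate_windows, key=score_fn)
```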
How We've Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis
TLDR
It is found that the majority of image databases rarely contain underlying source material for how race and gender identities were defined and annotated, and that this lack of critical engagement renders the databases opaque and less trustworthy.
Saliency Based Image Cropping
TLDR
This paper presents an extended version of a previously proposed method for extracting the saliency map of an image, based on analyzing the distribution of the image's interest points.
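
A minimal sketch of an interest-point-based saliency map, assuming OpenCV's ORB detector as a stand-in for whatever detector the method actually uses: keypoints are accumulated into a density image and smoothed, so regions dense in interest points become salient.

```python
import cv2
import numpy as np

def interest_point_saliency(image_gray, blur_sigma=15):
    """Build a saliency map from the spatial density of interest
    points: detect keypoints, accumulate them into a count image,
    and smooth it so clusters of keypoints become salient blobs."""
    orb = cv2.ORB_create()                 # any keypoint detector works
    keypoints = orb.detect(image_gray, None)
    density = np.zeros(image_gray.shape[:2], dtype=np.float32)
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        density[y, x] += 1.0
    saliency = cv2.GaussianBlur(density, (0, 0), blur_sigma)
    if saliency.max() > 0:
        saliency /= saliency.max()         # normalize to [0, 1]
    return saliency
```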
Fairness and Abstraction in Sociotechnical Systems
TLDR
This paper outlines the mismatch between fair-ML abstractions and their sociotechnical context with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware than traditional data science, and suggests ways technical designers can mitigate the traps by refocusing design on process rather than solutions.
Fairness, Equality, and Power in Algorithmic Decision-Making
TLDR
This work argues that leading notions of fairness suffer from three key limitations: they legitimize inequalities justified by "merit;" they are narrowly bracketed, considering only differences of treatment within the algorithm; and they consider between-group and not within-group differences.
SALICON: Reducing the Semantic Gap in Saliency Prediction by Adapting Deep Neural Networks
TLDR
This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Networks (DNNs), which leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition.
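
The transfer-learning idea can be sketched as a pretrained object-recognition backbone with a learned saliency readout. SALICON's actual architecture is multi-scale and differs in detail; the PyTorch sketch below, with a hypothetical class name, only illustrates reusing pretrained semantic features.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SaliencyReadout(nn.Module):
    """Reuse a backbone pretrained for object recognition: its
    convolutional features already encode high-level semantics, and a
    learned 1x1 convolution maps them to a single-channel saliency
    map, upsampled back to the input resolution."""

    def __init__(self):
        super().__init__()
        backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.features = backbone.features        # pretrained, frozen
        for p in self.features.parameters():
            p.requires_grad = False
        self.readout = nn.Conv2d(512, 1, kernel_size=1)  # trained on fixations

    def forward(self, x):
        feats = self.features(x)                  # (B, 512, H/32, W/32)
        sal = self.readout(feats)
        return F.interpolate(sal, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)
```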