Corpus ID: 233168618

VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations

  title={VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations},
  author={Archit Rathore and Sunipa Dev and J. M. Phillips and Vivek Srikumar and Yan Zheng and Chin-Chia Michael Yeh and Junpeng Wang and Wei Zhang and Bei Wang},
Word vector embeddings have been shown to contain and amplify biases in data they are extracted from. Consequently, many techniques have been proposed to identify, mitigate, and attenuate these biases in word representations. In this paper, we utilize interactive visualization to increase the interpretability and accessibility of a collection of state-of-the-art debiasing techniques. To aid this, we present Visualization of Embedding Representations for deBiasing system (“VERB”), an open-source… 
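Most of the debiasing techniques that VERB visualizes start from the same primitive: estimating a bias subspace from definitional word pairs. A minimal sketch of the one-dimensional case follows; the toy 4-d vectors are illustrative assumptions, not VERB's actual data or API, which operates on full pretrained embeddings such as GloVe or word2vec.

```python
import numpy as np

# Toy 4-d "embeddings" standing in for real pretrained word vectors.
words = {
    "he":    np.array([0.8, 0.1, 0.3, 0.2]),
    "she":   np.array([-0.7, 0.1, 0.3, 0.2]),
    "man":   np.array([0.6, 0.4, 0.1, 0.0]),
    "woman": np.array([-0.6, 0.4, 0.1, 0.0]),
}

def bias_direction(pairs, vectors):
    """Estimate a 1-d bias subspace as the normalized mean of the
    difference vectors over definitional pairs -- a primitive shared
    by several of the debiasing pipelines VERB decomposes."""
    diffs = [vectors[a] - vectors[b] for a, b in pairs]
    v = np.mean(diffs, axis=0)
    return v / np.linalg.norm(v)

g = bias_direction([("he", "she"), ("man", "woman")], words)
print(np.round(g, 3))
```

In this toy setup the gender signal sits entirely in the first coordinate, so the estimated direction is the first basis vector; real embeddings spread the bias signal over many coordinates, which is exactly why interactive visualization helps.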

Figures and Tables from this paper

A Visual Tour of Bias Mitigation Techniques for Word Representations

To help understand how various debiasing techniques change the underlying geometry, this tutorial decomposes each technique into an interpretable sequence of primitive operations and studies their effect on the word vectors using dimensionality reduction and interactive visual exploration.
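The dimensionality-reduction step can be sketched as a PCA projection of the word vectors onto their top two principal components, which is what such tools plot before and after each primitive operation. Random toy data stands in for real embeddings here; this is an illustrative sketch, not the tutorial's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))   # 50 "words", 10-d vectors

def pca_2d(X):
    """Project rows of X onto their top-2 principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered matrix; rows of Vt are the principal axes,
    # ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T            # 2-d coordinates per word

coords = pca_2d(X)
print(coords.shape)                 # (50, 2)
```

Plotting `coords` for the original and debiased vectors side by side makes the geometric effect of each operation visible.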

OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings

OSCaR (Orthogonal Subspace Correction and Rectification), a bias-mitigating method that focuses on disentangling biased associations between concepts instead of removing concepts wholesale, is proposed.

Compressing and interpreting word embeddings with latent space regularization and interactive semantics probing

This work designs a visual analytics system to monitor the regularization process, explore the high-dimensional latent space, and interpret the semantics of latent dimensions; it shows that each dimension of the regularized latent space is more semantically salient, and validates the effectiveness of the embedding regularization and interpretation approach.

Visual Text Analytics

The use of visual analytics on text data as a means to bridge the complementary strengths of people and computers is explored in Dagstuhl Seminar 22191.

AI Fairness: from Principles to Practice

This paper summarizes and evaluates various approaches, methods, and techniques for pursuing fairness in artificial intelligence (AI) systems, offers techniques for evaluating the costs and benefits of fairness targets, and defines the role of human judgment in setting these targets.

Visual Exploration of Semantic Relationships in Neural Word Embeddings

New embedding techniques for visualizing semantic and syntactic analogies are introduced, along with tests to determine whether the resulting views capture salient structures, to address a number of domain-specific tasks that are difficult to solve with existing tools.

Interactive Analysis of Word Vector Embeddings

This paper provides a literature survey cataloguing the tasks where word vector embeddings are employed across a broad range of applications, and presents visual interactive designs that address many of these tasks.

VisExPreS: A Visual Interactive Toolkit for User-Driven Evaluations of Embeddings

VisExPreS, a visual interactive toolkit that enables a user-driven assessment of low-dimensional embeddings from different dimensionality reduction algorithms, is presented, based on three novel techniques namely PG-LAPS, PG-GAPS, and RepSubset that generate interpretable explanations of the preserved local and global structures in embeddings.

Attenuating Bias in Word Vectors

New, simple ways to detect the most stereotypically gendered words in an embedding and remove their bias are explored; names are verified to be masked carriers of gender bias and are then used as a tool to attenuate bias in embeddings.
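The detection step can be sketched as ranking words by the relative size of their component along a gender direction. The toy vectors and the `most_gendered` helper below are illustrative assumptions for exposition, not the paper's implementation, which works over full pretrained embeddings and name lists.

```python
import numpy as np

# Assume a unit-norm gender direction has already been estimated.
gender_dir = np.array([1.0, 0.0, 0.0])

# Hypothetical toy vocabulary; first coordinate carries the "gender" signal.
vocab = {
    "doctor":     np.array([0.05, 0.9, 0.1]),
    "nurse":      np.array([-0.6, 0.7, 0.2]),
    "programmer": np.array([0.5, 0.3, 0.8]),
    "table":      np.array([0.01, 0.2, 0.9]),
}

def most_gendered(vocab, direction, k=2):
    """Rank words by |cos(word, direction)| -- a simple proxy for the
    stereotype-detection step described in the paper."""
    def score(v):
        return abs(v @ direction) / np.linalg.norm(v)
    return sorted(vocab, key=lambda w: score(vocab[w]), reverse=True)[:k]

print(most_gendered(vocab, gender_dir))   # ['nurse', 'programmer']
```

The highest-scoring words are the candidates for the bias-removal step.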

embComp: Visual Interactive Comparison of Vector Embeddings

This paper introduces embComp, a novel approach for comparing two embeddings that capture the similarity between objects, such as word and document embeddings; it surveys scenarios where comparing embedding spaces is useful, and assesses embComp by applying it in several use cases.

On Measuring and Mitigating Biased Inferences of Word Embeddings

A mechanism for measuring stereotypes using the task of natural language inference is designed, and a reduction in invalid inferences via bias mitigation strategies on static word embeddings (GloVe) is demonstrated; it is shown that for gender bias, these techniques extend to contextualized embeddings when applied selectively only to the static components of contextualized embeddings.

DebIE: A Platform for Implicit and Explicit Debiasing of Word Embedding Spaces

DebIE, the first integrated platform for measuring and mitigating bias in word embeddings, is presented; it can compute several measures of implicit and explicit bias and modify the embedding space by executing two (mutually composable) debiasing models.

Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them

Word embeddings are widely used in NLP for a vast range of tasks. It was shown that word embeddings derived from text corpora reflect gender biases in society, causing serious concern. Several recent… 

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

This work empirically demonstrates that its algorithms significantly reduce gender bias in embeddings while preserving their useful properties, such as the ability to cluster related concepts and to solve analogy tasks.
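The core "neutralize" step of this hard-debiasing method is a linear projection that removes each word vector's component along the bias direction. A minimal sketch with toy vectors follows; the full algorithm also includes an "equalize" step for definitional pairs such as (he, she), omitted here.

```python
import numpy as np

g = np.array([1.0, 0.0, 0.0])    # unit-norm bias direction (assumed given)
w = np.array([0.4, 0.5, 0.2])    # toy vector for a word like "programmer"

def neutralize(w, g):
    """Subtract w's projection onto g (g assumed unit norm), so the
    debiased vector is orthogonal to the bias direction."""
    return w - (w @ g) * g

w_debiased = neutralize(w, g)
print(w_debiased @ g)            # component along g is now 0
```

After neutralizing, the word carries no component along the learned gender direction, while its remaining coordinates, which encode other semantics, are untouched.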