Corpus ID: 221655566

Justicia: A Stochastic SAT Approach to Formally Verify Fairness

Bishwamittra Ghosh, D. Basu, and Kuldeep S. Meel
As a technology, ML is oblivious to societal good or bad, and thus the field of fair machine learning has stepped up to propose multiple mathematical definitions, algorithms, and systems to ensure different notions of fairness in ML applications. Given the multitude of propositions, it has become imperative to formally verify the fairness metrics satisfied by different algorithms on different datasets. In this paper, we propose a stochastic satisfiability (SSAT) framework, Justicia…
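To make the kind of group-fairness metric such verifiers assess concrete, here is a minimal toy sketch of statistical parity difference (the gap in favourable-decision rates between two groups). The function name and data are illustrative assumptions, not from the paper:

```python
def statistical_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between group 1 and group 0."""
    rate = {}
    for g in (0, 1):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    return rate[1] - rate[0]

# Toy classifier outputs (1 = favourable decision) and protected-group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(preds, grps))  # 0.25 - 0.75 = -0.5
```

A value of 0 would indicate statistical parity; a verifier like Justicia reasons about such quantities formally over distributions rather than on a single toy sample.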


Algorithmic Fairness Verification with Graphical Models
An efficient fairness verifier, called FVGM, is proposed that encodes the correlations among features as a Bayesian network that leads to an accurate and scalable assessment for more diverse families of fairness-enhancing algorithms, fairness attacks, and group/causal fairness metrics than the state-of-the-art.
BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized Neural Networks
A quantitative verification framework for Binarized Neural Networks (BNNs), the 1-bit quantization of general real-numbered neural networks, is developed by encoding BNNs into Binary Decision Diagrams (BDDs), which is done by exploiting the internal structure of the BNNs.


Probabilistic verification of fairness properties via concentration
A scalable algorithm for verifying fairness specifications is designed that obtains strong correctness guarantees based on adaptive concentration inequalities; such inequalities enable the algorithm to adaptively take samples until it has enough data to make a decision.
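The adaptive-sampling idea summarized above can be sketched with a Hoeffding-style confidence interval: keep sampling until the interval around the empirical mean separates from the decision threshold. This is an illustrative sketch under assumed names, not the paper's actual algorithm:

```python
import math
import random

def hoeffding_radius(n, delta):
    """Half-width of a (1 - delta) confidence interval for a mean in [0, 1]."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def adaptive_estimate(sample, threshold, delta=0.05, max_samples=100_000):
    """Draw samples until the confidence interval separates the empirical
    mean from `threshold`, then return the verdict, mean, and sample count."""
    total, n = 0.0, 0
    while n < max_samples:
        total += sample()
        n += 1
        mean = total / n
        eps = hoeffding_radius(n, delta)
        if mean - eps > threshold:
            return "satisfied", mean, n
        if mean + eps < threshold:
            return "violated", mean, n
    return "undecided", total / n, n

random.seed(0)
# Toy Bernoulli "favourable outcome" with true rate 0.9, checked against 0.5.
verdict, mean, used = adaptive_estimate(lambda: random.random() < 0.9, 0.5)
print(verdict, used)
```

Because the interval shrinks as more samples arrive, easy instances (true rate far from the threshold) terminate after only a handful of samples, which is what makes the adaptive approach scalable.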
FairSquare: probabilistic verification of program fairness
This work presents FairSquare, the first verification tool for automatically certifying that a program meets a given fairness property, and designs a novel technique for verifying probabilistic properties that admits a wide class of decision-making programs.
Fairness Constraints: Mechanisms for Fair Classification
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
FAHT: An Adaptive Fairness-aware Decision Tree Classifier
This paper introduces a learning mechanism to design a fair classifier for online stream-based decision-making: an extension of the well-known Hoeffding Tree algorithm for decision tree induction over streams that also accounts for fairness.
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
A new open source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license to help facilitate the transition of fairness research algorithms to use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms.
Fair Forests: Regularized Tree Induction to Minimize Model Bias
This paper develops, to their knowledge, the first technique for the induction of fair decision trees and introduces new measures for fairness which are able to handle multinomial and continuous attributes as well as regression problems, as opposed to binary attributes and labels only.
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the…
Achieving Differential Privacy and Fairness in Logistic Regression
This work develops differentially private and fair logistic regression models by combining the functional mechanism and decision-boundary fairness in a joint form, and demonstrates that these approaches effectively achieve both differential privacy and fairness while preserving good utility.
Fairness through awareness
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
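The individual-fairness condition above is a Lipschitz-style constraint: outcomes may differ between two individuals by at most a constant times their task-specific distance. A minimal illustrative sketch (the classifier and metric below are made-up placeholders, not the paper's construction):

```python
def is_individually_fair(f, d, individuals, lipschitz=1.0):
    """Check |f(x) - f(y)| <= L * d(x, y) for every pair of individuals."""
    return all(
        abs(f(x) - f(y)) <= lipschitz * d(x, y)
        for i, x in enumerate(individuals)
        for y in individuals[i + 1:]
    )

# One-dimensional toy setting: distance is |x - y|.
people = [0.1, 0.4, 0.5, 0.9]
d = lambda x, y: abs(x - y)

print(is_individually_fair(lambda x: x, d, people))               # True
print(is_individually_fair(lambda x: float(x > 0.45), d, people))  # False
```

The second classifier fails because the near-identical individuals 0.4 and 0.5 fall on opposite sides of its hard threshold, which is exactly the kind of disparity the Lipschitz constraint rules out.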