Corpus ID: 1562573

Influence in Classification via Cooperative Game Theory

@inproceedings{Datta2015InfluenceIC,
  title={Influence in Classification via Cooperative Game Theory},
  author={Amit Datta and Anupam Datta and Ariel D. Procaccia and Yair Zick},
  booktitle={IJCAI},
  year={2015}
}
A dataset has been classified by some unknown classifier into two types of points. What were the most important factors in determining the classification outcome? In this work, we employ an axiomatic approach in order to uniquely characterize an influence measure: a function that, given a set of classified points, outputs a value for each feature corresponding to its influence in determining the classification outcome. We show that our influence measure takes on an intuitive form when the…
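As a rough sketch of the setup in the abstract (the notation here is illustrative, not taken verbatim from the paper): given a set of classified points $D = \{(x^{(1)}, c_1), \dots, (x^{(m)}, c_m)\}$ over features $N = \{1, \dots, n\}$, an influence measure is a map

    \[ \phi : D \mapsto (\phi_1(D), \dots, \phi_n(D)) \in \mathbb{R}^n , \]

where $\phi_i(D)$ quantifies how much feature $i$ contributed to the classification outcome.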
Citations

On Feature Interactions Identified by Shapley Values of Binary Classification Games
TLDR: This work introduces the notion of a classification game, a cooperative game with features as players and a hinge-loss-based characteristic function, and relates each feature's contribution to a Shapley-value-based error apportioning (SVEA) of the total training error.
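One hedged way to read this construction (the exact characteristic function is defined in the paper itself): features play the role of players in a transferable-utility game whose value is derived from the training hinge loss, for instance

    \[ v(S) \;=\; \frac{1}{m} \sum_{j=1}^{m} \max\bigl(0,\ 1 - c_j f_S(x^{(j)})\bigr), \qquad S \subseteq N , \]

where $f_S$ is some predictor restricted to the features in $S$. By efficiency, the Shapley values of such a game sum to $v(N) - v(\emptyset)$, which is what makes an apportioning of the total training error across features possible.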
A Characterization of Monotone Influence Measures for Data Classification
TLDR: A family of influence measures is identified: functions that, given a datapoint x, assign a value phi_i(x) to every feature i, roughly corresponding to feature i's importance in determining the outcome for x.
If You Like Shapley Then You'll Love the Core
TLDR: It is proved that arbitrarily good approximations to the least core, a core relaxation that is always feasible, can be computed efficiently, and an impossibility result is shown for a more refined solution concept, the nucleolus.
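For reference, the least core mentioned here is the standard always-feasible relaxation of the core of a game $(N, v)$:

    \[ \min_{x, \varepsilon} \ \varepsilon \quad \text{s.t.} \quad \sum_{i \in N} x_i = v(N), \qquad \sum_{i \in S} x_i \ \ge\ v(S) - \varepsilon \quad \forall\, \emptyset \ne S \subsetneq N . \]

Setting $\varepsilon = 0$ recovers the core constraints; since $\varepsilon$ may be arbitrarily large, the program always has a feasible point.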
Axiomatic Characterization of Data-Driven Influence Measures for Classification
TLDR: A family of numerical influence measures is characterized: functions that, given a datapoint x, assign a numeric value phi_i(x) to every feature i, corresponding to how altering i's value would influence the outcome for x.
An Axiomatic Approach to Linear Explanations in Data Classification
TLDR: A family of measures called MIM (monotone influence measures) is presented, uniquely derived from a set of axioms: desirable properties that any reasonable influence measure should satisfy.
Improved Feature Importance Computations for Tree Models: Shapley vs. Banzhaf
TLDR: Surprisingly, it is shown that Banzhaf values offer several advantages over Shapley values while providing essentially the same explanations: they allow for more efficient algorithms and are much more numerically robust.
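As a minimal, self-contained illustration of the two indices being compared (a brute-force sketch on a toy game, not the tree-model algorithms studied in the paper):

    from itertools import combinations
    from math import factorial

    def shapley(n, v):
        """Exact Shapley values by enumerating coalitions of {0, ..., n-1}."""
        phi = [0.0] * n
        for i in range(n):
            others = [p for p in range(n) if p != i]
            for k in range(len(others) + 1):
                for S in combinations(others, k):
                    weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                    phi[i] += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        return phi

    def banzhaf(n, v):
        """Exact (non-normalized) Banzhaf values: average marginal contribution
        over all 2^(n-1) coalitions not containing the player."""
        phi = [0.0] * n
        for i in range(n):
            others = [p for p in range(n) if p != i]
            total = 0.0
            for k in range(len(others) + 1):
                for S in combinations(others, k):
                    total += v(frozenset(S) | {i}) - v(frozenset(S))
            phi[i] = total / 2 ** (n - 1)
        return phi

    # Toy weighted voting game: weights (4, 3, 2, 1), quota 6.
    weights, quota = [4, 3, 2, 1], 6
    v = lambda S: 1.0 if sum(weights[i] for i in S) >= quota else 0.0
    print("Shapley:", shapley(4, v))
    print("Banzhaf:", banzhaf(4, v))

Both indices average marginal contributions v(S ∪ {i}) − v(S); they differ only in the weights placed on coalitions S.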
The Shapley Taylor Interaction Index
TLDR: A generalization of the Shapley value called the Shapley-Taylor index is proposed; it attributes the model's prediction to interactions of subsets of features up to some size k, analogously to how a truncated Taylor series decomposes a function's value at one point using its derivatives at another.
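The interaction terms in such indices are built from discrete derivatives of the set function; for a pair of features, the second-order building block (stated informally here for orientation, not as the paper's full definition) is

    \[ \delta_{ij} F(T) \;=\; F(T \cup \{i, j\}) - F(T \cup \{i\}) - F(T \cup \{j\}) + F(T), \qquad T \subseteq N \setminus \{i, j\} , \]

the set-function analogue of a mixed second derivative; the index aggregates such derivatives for interactions up to size k.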
Algorithmic Transparency via Quantitative Input Influence
TLDR: A family of Quantitative Input Influence measures that capture the degree of input influence on system outputs provides a foundation for the design of transparency reports that accompany system decisions and for testing tools useful for internal and external oversight.
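A hedged sketch of the interventional idea behind input-influence measures of this kind (the function below and its quantity of interest, the positive-outcome rate, are illustrative assumptions, not the paper's definitions):

    import numpy as np

    def unary_influence(model_predict, X, feature, rng=None):
        """Illustrative input-influence estimate: replace one feature with
        draws from its empirical marginal and measure how much the rate of
        positive outcomes changes."""
        rng = rng or np.random.default_rng(0)
        base_rate = np.mean(model_predict(X))
        X_int = X.copy()
        # Intervene on the chosen column only, holding all others fixed.
        X_int[:, feature] = rng.choice(X[:, feature], size=len(X), replace=True)
        return abs(base_rate - np.mean(model_predict(X_int)))

    # Usage (model_predict is any function returning 0/1 decisions):
    # influence_of_feature_2 = unary_influence(model_predict, X, feature=2)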
Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems
TLDR: The transparency-privacy tradeoff is explored, and it is proved that a number of useful transparency reports can be made differentially private with very little addition of noise.
Evaluating and Rewarding Teamwork Using Cooperative Game Abstractions
TLDR: This work introduces a parametric model called cooperative game abstractions (CGAs) for estimating characteristic functions from data, and provides identification results and sample complexity bounds for CGA models as well as error bounds on the estimation of the Shapley value using CGAs.

References

Showing 1-10 of 36 references
Feature Selection Based on the Shapley Value
TLDR: Empirical comparison with several other existing feature selection methods shows that the backward-elimination variant of CSA leads to the most accurate classification results on an array of datasets.
Monotonic solutions of cooperative games
The principle of monotonicity for cooperative games states that if a game changes so that some player's contribution to all coalitions increases or stays the same, then the player's allocation should not decrease.
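Formally, the strong monotonicity axiom at the heart of this characterization is usually stated as: for two games $v$ and $w$ on the same player set,

    \[ v(S \cup \{i\}) - v(S) \ \ge\ w(S \cup \{i\}) - w(S) \quad \forall\, S \subseteq N \setminus \{i\} \quad \Longrightarrow \quad \phi_i(v) \ \ge\ \phi_i(w) . \]

Together with efficiency and symmetry, this axiom singles out the Shapley value.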
Computational Aspects of Cooperative Game Theory (Synthesis Lectures on Artificial Intelligence and Machine Learning)
TLDR: The aim of this book is to present a survey of work on the computational aspects of cooperative game theory, formally defining transferable utility games in characteristic function form and introducing key solution concepts such as the core and the Shapley value.
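For completeness, the core mentioned here has the standard definition for a transferable-utility game $(N, v)$:

    \[ \mathrm{Core}(v) \;=\; \Bigl\{ x \in \mathbb{R}^{N} \ :\ \sum_{i \in N} x_i = v(N), \ \ \sum_{i \in S} x_i \ \ge\ v(S) \ \ \forall\, S \subseteq N \Bigr\} . \]

(The Shapley value is written out below, under the reference to Shapley's original paper.)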
Computational Aspects of Cooperative Game Theory
TLDR: This talk introduces basic concepts from cooperative game theory, in particular the key solution concepts of the core and the Shapley value, and the key issues that arise when cooperative games are considered in a computational setting.
Approximating power indices: theoretical and empirical analysis
TLDR: This work suggests and analyzes randomized methods to approximate power indices such as the Banzhaf power index and the Shapley-Shubik power index, and shows that for general coalitional games no approximation algorithm, deterministic or randomized, can do much better.
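A minimal sketch of the permutation-sampling estimator commonly used for such approximations (simplified relative to the analysis in the cited work):

    import random

    def shapley_monte_carlo(n, v, num_permutations=10_000, seed=0):
        """Estimate Shapley values by averaging marginal contributions over
        uniformly random orderings of the n players."""
        rng = random.Random(seed)
        players = list(range(n))
        estimates = [0.0] * n
        for _ in range(num_permutations):
            rng.shuffle(players)
            coalition, prev_value = set(), v(set())
            for p in players:
                coalition.add(p)
                value = v(coalition)
                estimates[p] += value - prev_value
                prev_value = value
        return [e / num_permutations for e in estimates]

    # Toy weighted voting game: weights (4, 3, 2, 1), quota 6.
    weights, quota = [4, 3, 2, 1], 6
    v = lambda S: 1.0 if sum(weights[i] for i in S) >= quota else 0.0
    print(shapley_monte_carlo(4, v))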
Three naive Bayes approaches for discrimination-free classification
TLDR
Three approaches for making the naive Bayes classifier discrimination-free are presented: modifying the probability of the decision being positive, training one model for every sensitive attribute value and balancing them, and adding a latent variable to the Bayesian model that represents the unbiased label and optimizing the model parameters for likelihood using expectation maximization. Expand
A Value for n-person Games
Introduction At the foundation of the theory of games is the assumption that the players of a game can evaluate, in their utility scales, every “prospect” that might arise as a result of a play. In…
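In modern notation, the value defined in this classic paper is

    \[ \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}\ \bigl( v(S \cup \{i\}) - v(S) \bigr) , \]

i.e. player $i$'s marginal contribution averaged over all orderings of the players.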
Fairness through awareness
TLDR: A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
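The fairness constraint referred to above is commonly written as a Lipschitz condition on a randomized classifier $M$:

    \[ D\bigl(M(x), M(y)\bigr) \ \le\ d(x, y) \qquad \text{for all individuals } x, y , \]

where $d$ is the task-specific similarity metric and $D$ is a distance between the output distributions of $M$.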
Fairness-aware Learning through Regularization Approach
TLDR: This paper discusses three causes of unfairness in machine learning and proposes a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models and applies it to logistic regression to empirically show its effectiveness and efficiency.
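Schematically (the exact regularizer used in the paper may differ), the approach adds a fairness penalty to a regularized logistic-regression objective:

    \[ \min_{\theta} \ -\sum_{j} \log p_{\theta}(y_j \mid x_j) \ +\ \frac{\lambda}{2}\,\lVert \theta \rVert^2 \ +\ \eta\, R_{\text{fair}}(\theta; \text{data}) , \]

where $R_{\text{fair}}$ penalizes dependence between the sensitive attribute and the model's predictions, and $\eta$ trades prediction accuracy against fairness.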
Challenges in measuring online advertising systems
TLDR: A first principled look at measurement methodologies for ad networks is taken: new metrics are proposed that are robust to the high levels of noise inherent in ad distribution, measurement pitfalls and artifacts are identified, and mitigation strategies are provided.