Corpus ID: 49555068

Posthoc Interpretability of Learning to Rank Models using Secondary Training Data

@article{Singh2018PosthocIO,
  title={Posthoc Interpretability of Learning to Rank Models using Secondary Training Data},
  author={Jaspreet Singh and Avishek Anand},
  journal={ArXiv},
  year={2018},
  volume={abs/1806.11330}
}
Predictive models are omnipresent in automated and assisted decision making scenarios. We operate on a notion of interpretability based on the explainability of rankings over an interpretable feature space. Furthermore, we train a tree-based model (inherently interpretable) using labels from the ranker, called secondary training data, to provide explanations. Consequently, we attempt to study how well a subset of features, potentially interpretable, explains the full model under different…
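The key method lends itself to a compact sketch: the blackbox ranker's own scores become labels (the secondary training data) for an inherently interpretable tree fit over an interpretable feature space. A minimal illustration follows, assuming a scikit-learn-style setup; the ranker stand-in, feature matrix, and all names are hypothetical, not the authors' implementation:

```python
# Sketch of the secondary-training-data idea: the blackbox ranker's
# scores become labels for an inherently interpretable tree surrogate.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_interp = rng.random((1000, 10))                   # interpretable feature space
ranker_scores = np.tanh(X_interp @ rng.random(10))  # stand-in blackbox ranker

# Fit the interpretable surrogate on the secondary training data.
surrogate = DecisionTreeRegressor(max_depth=4).fit(X_interp, ranker_scores)

# Fidelity: how well does the interpretable tree mimic the full model?
print("fidelity R^2:", round(surrogate.score(X_interp, ranker_scores), 3))
```

The fidelity score is one way to quantify "how well a subset of features explains the full model": the better the surrogate mimics the ranker, the more trustworthy its explanations.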

Citations

Interpretable Learning-to-Rank with Generalized Additive Models

This paper lays the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks and proposes a novel formulation of ranking GAMs which can achieve significantly better performance than other traditional GAM baselines while maintaining similar interpretability.
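A rough illustration of why GAMs count as intrinsically interpretable for ranking: each feature contributes through its own one-dimensional shape function, so per-feature effects can be inspected in isolation. The shape functions below are hypothetical placeholders, not a ranking-GAM training procedure:

```python
import numpy as np

# Hypothetical learned shape functions, one per feature (e.g. splines).
shapes = [np.tanh, np.sqrt, lambda x: -(x - 0.5) ** 2]

def gam_score(x):
    # Additive score: per-feature contributions with no interactions,
    # which is what makes each feature's effect directly inspectable.
    return sum(f(xi) for f, xi in zip(shapes, x))

docs = np.array([[0.2, 0.9, 0.1], [0.8, 0.4, 0.3], [0.5, 0.5, 0.5]])
order = sorted(range(len(docs)), key=lambda i: gam_score(docs[i]), reverse=True)
print("ranking:", order)
```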

Extractive Explanations for Interpretable Text Ranking

This paper introduces the Select-And-Rank paradigm for document ranking, where an explanation is output as a selected subset of sentences in a document and the model uses only that selection to make its prediction, making explanations first-class citizens in the ranking process.
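A toy rendering of the paradigm, using TF-IDF similarity as a stand-in for the learned selector and scorer (the actual components are neural): because the prediction is computed only from the selected sentences, the selection is the explanation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_and_rank(query, sentences, k=2):
    vec = TfidfVectorizer().fit(sentences + [query])
    sims = cosine_similarity(vec.transform(sentences),
                             vec.transform([query])).ravel()
    selected = sims.argsort()[-k:][::-1]   # the explanation
    score = sims[selected].mean()          # prediction uses only the selection
    return score, [sentences[i] for i in selected]

doc = ["trees rank documents well", "cats sleep all day", "ranking with trees"]
print(select_and_rank("tree ranking", doc))
```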

Valid Explanations for Learning to Rank Models

This paper proposes a model-agnostic local explanation method that seeks to identify a small subset of input features as an explanation for a ranking decision, and introduces new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of the selected features, as a way of measuring their goodness.
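A toy reading of these two notions, assuming a linear stand-in scorer: a valid feature subset should by itself induce roughly the original ranking, while removing a complete subset should destroy it. Kendall's tau between induced rankings serves as the agreement measure here; the paper's exact formulation may differ, and all names are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
X = rng.random((20, 5))                                      # 20 docs, 5 features
score = lambda Z: Z @ np.array([2.0, 1.5, 0.1, 0.05, 0.0])   # stand-in ranker

selected = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # explanation: features 0 and 1
# Kendall's tau is rank-based, so comparing scores compares induced rankings.
tau_with, _ = kendalltau(score(X), score(X * selected))
tau_without, _ = kendalltau(score(X), score(X * (1 - selected)))
print(f"selected features alone:   tau={tau_with:.2f} (high if valid)")
print(f"selected features removed: tau={tau_without:.2f} (low if complete)")
```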

Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models

This paper proposes a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to the ranked output for a given query, and introduces new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features.

Interpretable Ranking with Generalized Additive Models

This paper lays the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks and proposes a novel formulation of ranking GAMs, which can outperform other traditional GAM baselines while maintaining similar interpretability.

Learnt Sparsity for Effective and Interpretable Document Ranking

This paper introduces the select and rank paradigm for document ranking, where interpretability is explicitly ensured when scoring longer documents, and treats sentence selection as a latent variable trained jointly with the ranker from the final output.

EXS: Explainable Search Using Local Model Agnostic Interpretability

EXS is a search system designed specifically to provide its users with insight into the following questions: "What is the intent of the query according to the ranker?", "Why is this document ranked higher than another?", and "Why was this document relevant to the query?".

Model agnostic interpretability of rankers via intent modelling

A model-agnostic approach is proposed that locally approximates a complex ranker with a simple ranking model in the term space, yielding a simple term-based ranker that can faithfully and accurately mimic the complex blackbox ranker in that locality.
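A bare-bones version of the local-approximation idea: fit a simple linear model over term frequencies so that it reproduces the blackbox ranker's scores for documents in the query's locality; the learned term weights then serve as the explanation. The blackbox scores below are hypothetical stand-ins, and the real method models query intent more carefully.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression

docs = ["deep neural ranking model", "ranking with decision trees",
        "neural retrieval for questions"]
blackbox_scores = [0.9, 0.4, 0.7]             # stand-in blackbox outputs

vec = CountVectorizer()
X = vec.fit_transform(docs)                   # the simple term space
surrogate = LinearRegression().fit(X, blackbox_scores)

# Term weights of the local surrogate explain the blackbox's ordering.
for term, w in zip(vec.get_feature_names_out(), surrogate.coef_):
    print(f"{term}: {w:+.2f}")
```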

Interpreting search result rankings through intent modeling

This paper takes first steps towards a framework for the interpretability of retrieval models, with the aim of answering three main questions: "What is the intent of the query according to the ranker?", "Why is a document ranked higher than another for the query?", and "Why is a document relevant to the query?".

Explaining Black Box Models for Document Retrieval

This study proposes an alternative method of analyzing the global behavior of ranking models through the aggregation of model-agnostic local linear explanations, using a LambdaMART model trained on an eighteen-feature dataset.

References


“Why Should I Trust You?”: Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
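The core loop of LIME fits in a few lines: perturb the instance, query the blackbox, weight the perturbations by proximity, and fit a weighted linear model whose coefficients form the local explanation. This is a stripped-down sketch, not the `lime` package's API; the blackbox and all names are hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(blackbox, x, n_samples=500, kernel_width=0.5, seed=0):
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.3, size=(n_samples, x.size))   # perturbations
    y = blackbox(Z)                                           # blackbox labels
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)                 # proximity weights
    return Ridge(alpha=1.0).fit(Z, y, sample_weight=w).coef_  # local weights

blackbox = lambda Z: np.sin(Z[:, 0]) + Z[:, 1] ** 2
print(lime_explain(blackbox, np.array([0.1, 1.0])))  # ~ [cos(0.1), 2.0]
```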

Auditing Black-box Models by Obscuring Features

A class of techniques originally developed for the detection and repair of disparate impact in classification models can be used to study the sensitivity of any model with respect to any feature subset, without requiring the black-box model to be retrained.
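The sensitivity test can be approximated in a few lines without retraining: obscure one feature (here by permuting its column, which destroys its signal while preserving its marginal distribution) and measure how much the fixed model's outputs move. The permutation proxy is an assumption for illustration; the paper's obscuring procedure is more principled.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sensitivity(model, X, feature, seed=0):
    X_obscured = X.copy()
    # Permute the column: the feature's values stay, its signal is gone.
    X_obscured[:, feature] = np.random.default_rng(seed).permutation(
        X_obscured[:, feature])
    return np.mean(np.abs(model.predict(X) - model.predict(X_obscured)))

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = 3 * X[:, 0] + X[:, 2]                     # feature 1 is irrelevant
model = DecisionTreeRegressor().fit(X, y)
print([round(sensitivity(model, X, j), 3) for j in range(3)])
```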

Theory of Disagreement-Based Active Learning

Recent advances in the understanding of the theoretical benefits of active learning are described, along with their implications for the design of effective active learning algorithms.

Introducing LETOR 4.0 Datasets

LETOR is a package of benchmark data sets for research on LEarning TO Rank, which contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines.

European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"

It is argued that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.

Stochastic gradient boosted distributed decision trees

Two different distributed methods that generate exact stochastic GBDT models are presented: the first is a MapReduce implementation, and the second utilizes MPI on the Hadoop grid environment.

Adversarial learning

This paper introduces the adversarial classifier reverse engineering (ACRE) learning problem, the task of learning sufficient information about a classifier to construct adversarial attacks, and presents efficient algorithms for reverse engineering linear classifiers with either continuous or Boolean features.
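To give a flavour of query-based reverse engineering with membership queries only, the sketch below bisects between a known positive and a known negative point to locate a hidden linear boundary along that segment. This is a toy, not the ACRE algorithms themselves; the hidden classifier and all names are hypothetical.

```python
import numpy as np

# Hidden linear classifier we may only query, never inspect.
classify = lambda x: float(x @ np.array([1.0, -2.0]) + 0.5 > 0)

def boundary_point(x_pos, x_neg, queries=40):
    lo, hi = 0.0, 1.0                    # interpolation parameter pos -> neg
    for _ in range(queries):
        mid = (lo + hi) / 2
        x = (1 - mid) * x_pos + mid * x_neg
        lo, hi = (mid, hi) if classify(x) == 1.0 else (lo, mid)
    return (1 - lo) * x_pos + lo * x_neg

# Finds a point on the hidden boundary using only membership queries.
print(boundary_point(np.array([2.0, 0.0]), np.array([-2.0, 2.0])))
```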

The mythos of model interpretability

In machine learning, the concept of interpretability is both important and slippery, and the diverse motivations for interpretability and the properties ascribed to interpretable models deserve careful scrutiny.