Counterfactual Explanations for Neural Recommenders

@inproceedings{Tran2021CounterfactualEF,
  title={Counterfactual Explanations for Neural Recommenders},
  author={Khanh Tran and Azin Ghazimatin and Rishiraj Saha Roy},
  booktitle={Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year={2021}
}
While neural recommenders have become the state-of-the-art in recent years, the complexity of deep models still makes the generation of tangible explanations for end users a challenging problem. Existing methods are usually based on attention distributions over a variety of features, which are still questionable regarding their suitability as explanations, and rather unwieldy to grasp for an end user. Counterfactual explanations based on a small set of the user's own actions have been shown to… 
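
The abstract is cut off here, but the core notion it introduces, an explanation as a small set of the user's own actions whose removal flips the recommendation, can be illustrated with a minimal brute-force sketch. The `recommend` callback and the toy genre model below are hypothetical stand-ins, not the paper's actual search procedure:

```python
from itertools import combinations

def counterfactual_explanation(actions, recommend, max_size=3):
    """Return a smallest set of actions whose removal changes the
    top recommendation, or None if none is found up to max_size."""
    original = recommend(actions)
    for size in range(1, max_size + 1):          # smallest subsets first
        for subset in combinations(actions, size):
            remaining = [a for a in actions if a not in subset]
            if recommend(remaining) != original:
                return set(subset)               # removing these flips the result
    return None

# Hypothetical stand-in recommender: recommend the user's majority genre.
GENRES = {"inception": "scifi", "matrix": "scifi",
          "interstellar": "scifi", "titanic": "drama"}

def toy_recommend(actions):
    votes = [GENRES[a] for a in actions]
    return max(sorted(set(votes)), key=votes.count) if votes else None

history = ["inception", "matrix", "interstellar", "titanic"]
print(counterfactual_explanation(history, toy_recommend))
# -> a two-movie set whose removal makes "drama" the recommendation
```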

Citations

On the Relationship between Counterfactual Explainer and Recommender: A Framework and Preliminary Observations

This work presents a general framework for both DNN and non-DNN models, such that existing counterfactual explainers are instances of it under specific choices of components, and analyzes the relationship between the performance of the recommender and the quality of the explainer.

From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems

The dilemma between recommendation accuracy and explainability is investigated, and it is shown that by utilizing contextual features (e.g., item reviews from users), a series of explainable recommender systems can be built without sacrificing performance.

Learning to Rank Rationales for Explainable Recommendation

A model named Semantic-Enhanced Bayesian Personalized Explanation Ranking (SE-BPER) is proposed, which first initializes the latent factor representations with contextualized embeddings generated by a transformer model and then optimizes them with the interaction data; the work concludes that the optimal way to combine semantic and interaction information remains an open question in rationale ranking.
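
As background, here is a minimal sketch of the Bayesian Personalized Ranking (BPR) objective that SE-BPER builds on; the random vectors below stand in for the contextualized transformer embeddings the summary mentions, so the initialization is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Random stand-ins; SE-BPER would initialize these from contextualized
# (transformer) embeddings before optimizing on interaction data.
user, pos_item, neg_item = (rng.normal(size=d) for _ in range(3))

def bpr_loss(u, i_pos, i_neg):
    """BPR: maximize the log-likelihood that the observed (positive)
    item is ranked above the unobserved (negative) one."""
    margin = u @ i_pos - u @ i_neg
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

print(bpr_loss(user, pos_item, neg_item))
```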

Learning to Counterfactually Explain Recommendations

This work proposes a learning-based framework to generate counterfactual explanations and shows that it can produce explanations that are more counterfactually valid and judged more satisfactory by users.

Counterfactually Evaluating Explanations in Recommender Systems

It is shown that, compared to conventional methods, the proposed evaluation method produces evaluation scores that correlate better with real human judgments, and can therefore serve as a better proxy for human evaluation.

Counterfactual Review-based Recommendation

This paper proposes to improve review-based recommendation by counterfactually augmenting the training samples, and equips the model with two strategies: constrained feature perturbation and frequency-based sampling.

CLEAR: Causal Explanations from Attention in Neural Recommenders

Using empirical evaluations, it is shown that, compared to naively using attention weights to explain input-output relations, counterfactual explanations found by CLEAR are shorter and an alternative recommendation is ranked higher in the original top-k recommendations.

Reinforced Path Reasoning for Counterfactual Explainable Recommendation

A novel Counterfactual Explainable Recommendation (CERec) framework is proposed to generate item attribute-based counterfactual explanations while boosting recommendation performance, reducing the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph.

Counterfactual Explainable Recommendation

This paper proposes Counterfactual Explainable Recommendation (CountER), which takes insights from counterfactual reasoning in causal inference for explainable recommendation, applies them to a black-box recommender system, and evaluates the generated explanations on five real-world datasets.
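
The counterfactual framing can be illustrated with a toy search for the smallest change to an item's aspect scores that would drop it below the runner-up item; this brute-force scan and all numbers are illustrative stand-ins for CountER's relaxed optimization:

```python
import numpy as np

user_pref = np.array([0.9, 0.1, 0.5])     # user's weight on each aspect
item_aspects = np.array([0.8, 0.6, 0.7])  # recommended item's aspect scores
runner_up = 0.9                           # score of the next-best item

def minimal_aspect_change(prefs, aspects, threshold, step=0.05):
    """Find the smallest single-aspect reduction that drops the item's
    score below the runner-up: 'had aspect j been delta worse, the
    item would not have been recommended'."""
    best = None
    for j in range(len(aspects)):
        for delta in np.arange(step, aspects[j] + step / 2, step):
            changed = aspects.copy()
            changed[j] -= delta
            if prefs @ changed < threshold:
                if best is None or delta < best[1]:
                    best = (j, float(delta))
                break                     # smallest delta for aspect j found
    return best

j, delta = minimal_aspect_change(user_pref, item_aspects, runner_up)
print(f"lowering aspect {j} by {delta:.2f} flips the recommendation")
```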

Causal Inference for Recommendation: Foundations, Methods and Applications

In this survey, the fundamental concepts of both recommender systems and causal inference, as well as their relationship, are discussed, and existing work on causal methods for different problems in recommender systems is reviewed.

References

Showing 1-10 of 35 references.

PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems

PRINCE is presented: a provider-side mechanism to produce tangible explanations for end-users, where an explanation is defined to be a set of minimal actions performed by the user that, if removed, changes the recommendation to a different item.

Deep Critiquing for VAE-based Recommender Systems

This paper proposes a significantly improved method for multi-step deep critiquing in VAE-based recommender systems, which allows users to critique the generated explanations to refine their personalized recommendations.

ELIXIR: Learning from User Feedback on Explanations to Improve Recommender Models

A human-in-the-loop framework called ELIXIR is presented, in which user feedback on explanations is leveraged for pairwise learning of user preferences, overcoming sparseness by label propagation with item-similarity-based neighborhoods.
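
A minimal sketch of the similarity-based label propagation idea, with random item embeddings and one piece of hypothetical feedback; ELIXIR's exact propagation rule is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
items = rng.normal(size=(5, 8))  # item embeddings (random stand-ins)
feedback = {2: 1.0}              # hypothetical: user approved item 2's explanation

def propagate(items, labels, top_n=2):
    """Spread sparse feedback to each labeled item's nearest
    neighbors, weighted by cosine similarity."""
    unit = items / np.linalg.norm(items, axis=1, keepdims=True)
    out = dict(labels)
    for i, y in labels.items():
        sims = unit @ unit[i]
        sims[i] = -np.inf                      # skip the item itself
        for j in np.argsort(sims)[-top_n:]:    # top_n most similar items
            out.setdefault(int(j), y * float(sims[j]))
    return out

print(propagate(items, feedback))  # feedback now covers a small neighborhood
```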

Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation

This work proposes a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items.
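
The soft matching idea can be sketched in a translation-style embedding space, where a good explanation entity sits close to user + relation; the relation and entity vectors here are illustrative assumptions, not the paper's trained embeddings:

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 8
user = rng.normal(size=dim)
mention = rng.normal(size=dim)  # embedding of a "mention" relation (assumed)
entities = {name: rng.normal(size=dim)
            for name in ("camera", "lens", "tripod")}

def soft_match(user_vec, relation_vec, entity_vecs):
    """Translation-style soft matching: pick the entity closest to
    user + relation in the shared embedding space."""
    query = user_vec + relation_vec
    return min(entity_vecs, key=lambda e: np.linalg.norm(query - entity_vecs[e]))

print(soft_match(user, mention, entities))  # entity to build the explanation from
```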

Coevolutionary Recommendation Model: Mutual Learning between Ratings and Reviews

A novel deep learning recommendation model is presented that co-learns user and item information from ratings and customer reviews by jointly optimizing matrix factorization and an attention-based GRU network, showing a significant improvement in recommendation performance.

Neural Collaborative Filtering

This work strives to develop techniques based on neural networks to tackle the key problem in recommendation, collaborative filtering, on the basis of implicit feedback, and presents a general framework named NCF, short for Neural network-based Collaborative Filtering.
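
A minimal PyTorch sketch of the NCF recipe, fusing a generalized matrix factorization (GMF) branch with an MLP branch over user/item embeddings; the layer sizes and single hidden layer are illustrative choices, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class NCF(nn.Module):
    """GMF branch (element-wise product of embeddings) fused with an
    MLP branch, scored jointly for implicit feedback."""
    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        self.user_gmf = nn.Embedding(n_users, dim)
        self.item_gmf = nn.Embedding(n_items, dim)
        self.user_mlp = nn.Embedding(n_users, dim)
        self.item_mlp = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.out = nn.Linear(2 * dim, 1)  # concat(GMF, MLP) -> score

    def forward(self, users, items):
        gmf = self.user_gmf(users) * self.item_gmf(items)
        mlp = self.mlp(torch.cat([self.user_mlp(users),
                                  self.item_mlp(items)], dim=-1))
        return torch.sigmoid(self.out(torch.cat([gmf, mlp], dim=-1))).squeeze(-1)

model = NCF(n_users=100, n_items=50)
print(model(torch.tensor([0, 1]), torch.tensor([3, 7])))  # interaction scores
```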

Explaining machine learning classifiers through diverse counterfactual explanations

This work proposes a framework for generating and evaluating a diverse set of counterfactual explanations based on determinantal point processes, and provides metrics that enable comparison of counterfactual-based methods to other local explanation methods.
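
The determinantal-point-process diversity idea can be sketched directly: build a kernel matrix from pairwise distances between candidate counterfactuals and take its determinant, which grows as the set spreads out. The 1/(1 + dist) kernel is one standard choice, and the candidate vectors are toy data:

```python
import numpy as np

def dpp_diversity(counterfactuals):
    """Diversity of a candidate set as det(K) with
    K_ij = 1 / (1 + ||c_i - c_j||)."""
    C = np.asarray(counterfactuals, dtype=float)
    dists = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=-1)
    K = 1.0 / (1.0 + dists)
    return float(np.linalg.det(K))

similar = [[1.0, 0.0], [1.1, 0.0], [0.9, 0.1]]
spread = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
print(dpp_diversity(similar), dpp_diversity(spread))  # spread set scores higher
```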

Neural Attentional Rating Regression with Review-level Explanations

A novel attention mechanism is introduced to explore the usefulness of reviews, along with a Neural Attentional Regression model with Review-level Explanations (NARRE) for recommendation, which consistently outperforms state-of-the-art recommendation approaches in terms of rating prediction.
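
A simplified version of review-level attention: score each review's usefulness with a small attention network, softmax over the reviews, and use the weights both to aggregate review features and to surface the most useful reviews as explanations. The parameter shapes are illustrative, not NARRE's exact architecture:

```python
import numpy as np

def attend_reviews(review_vecs, w, b, h):
    """One-layer attention over reviews: usefulness scores, softmax
    weights, and the weighted review summary."""
    scores = np.tanh(review_vecs @ w + b) @ h  # usefulness score per review
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over reviews
    return weights, weights @ review_vecs      # weights double as explanations

rng = np.random.default_rng(2)
reviews = rng.normal(size=(4, 8))              # 4 review embeddings (toy data)
w, b, h = rng.normal(size=(8, 8)), rng.normal(size=8), rng.normal(size=8)
weights, summary = attend_reviews(reviews, w, b, h)
print(weights)  # the highest-weight reviews serve as review-level explanations
```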

Deep Learning based Recommender System: A Survey and New Perspectives

A taxonomy of deep learning based recommendation models and a comprehensive summary of the state-of-the-art are provided, along with new perspectives on this exciting development of deep learning in recommender systems.

Incorporating Interpretability into Latent Factor Models via Fast Influence Analysis

This work proposes a novel explanation method named FIA (Fast Influence Analysis) to understand the predictions of trained latent factor models (LFMs) by tracing back to the training data with influence functions, and shows how to employ influence functions to measure the impact of historical user-item interactions on the prediction results of LFMs.
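
The influence-function idea behind FIA can be sketched on a tiny ridge-regression stand-in, where the Hessian is available in closed form; a real latent factor model would need Hessian-vector products, so this setup is purely illustrative:

```python
import numpy as np

# Influence functions estimate how up-weighting one training example
# would change the loss on a test prediction: -g_test . H^-1 . g_train.
rng = np.random.default_rng(3)
X, y = rng.normal(size=(50, 5)), rng.normal(size=50)
lam = 0.1

H = X.T @ X / len(X) + lam * np.eye(5)        # exact Hessian of the ridge loss
theta = np.linalg.solve(H, X.T @ y / len(X))  # fitted model parameters

def influence(i, x_test, y_test):
    """Influence of training point i on the test point's loss."""
    g_train = (X[i] @ theta - y[i]) * X[i]       # gradient at training point
    g_test = (x_test @ theta - y_test) * x_test  # gradient at test point
    return -g_test @ np.linalg.solve(H, g_train)

print(influence(0, X[1], y[1]))  # large |influence| -> explanatory interaction
```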