Factual and Informative Review Generation for Explainable Recommendation

Zhouhang Xie, Sameer Singh, Julian McAuley, and Bodhisattwa Prasad Majumder
Recent models can generate fluent and grammatical synthetic reviews while accurately predicting user ratings. The generated reviews, expressing users' estimated opinions towards related products, are often viewed as natural language 'rationales' for the jointly predicted rating. However, previous studies found that existing models often generate repetitive, universally applicable, and generic explanations, resulting in uninformative rationales. Further, our analysis shows that previous models…


Improving Personalized Explanation Generation through Visualization

A visually-enhanced approach named METER is proposed with the help of visualization generation and text–image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation.

Hidden factors and hidden topics: understanding rating dimensions with review text

This paper aims to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA), which more accurately predicts product ratings by harnessing the information present in review text.

Personalized Prompt Learning for Explainable Recommendation

Inspired by recent advances in prompt learning, this work proposes two solutions: finding alternative words to represent IDs (discrete prompt learning), and directly inputting ID vectors to a pre-trained model (continuous prompt learning).

Personalized Transformer for Explainable Recommendation

A PErsonalized Transformer for Explainable Recommendation (PETER), which designs a simple and effective learning objective that utilizes the IDs to predict the words in the target explanation, so as to endow the IDs with linguistic meanings and achieve a personalized Transformer.

Generate Neural Template Explanations for Recommendation

Experimental results on real-world datasets show that NETE consistently outperforms state-of-the-art explanation generation approaches in terms of sentence quality and expressiveness, and case-study analysis shows the advantages of NETE in generating diverse and controllable explanations.

Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space

This paper proposes the first large-scale language VAE model, Optimus, a universal latent embedding space for sentences that is first pre-trained on a large text corpus and then fine-tuned for various language generation and understanding tasks.

Neural Rating Regression with Abstractive Tips Generation for Recommendation

A deep learning based framework named NRT is proposed which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality, simulating user experience and feelings.

Learning to Generate Product Reviews from Attributes

An attention-enhanced attribute-to-sequence model is presented that generates product reviews from given attribute information (such as user, product, and rating), using an attention mechanism to jointly generate reviews and align words with the input attributes.

Rethinking and Refining the Distinct Metric

This work refines the calculation of distinct scores by scaling the number of distinct tokens based on their expectations, and shows that the proposed metric, Expectation-Adjusted Distinct (EAD), correlates better with human judgment in evaluating response diversity.
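The idea above can be sketched in a few lines. This is a minimal illustration, not the paper's reference implementation: it assumes the expectation is computed as the expected number of distinct tokens when drawing the same number of tokens uniformly from the vocabulary, and the function name and signature are our own.

```python
def expectation_adjusted_distinct(tokens, vocab_size):
    """Sketch of an Expectation-Adjusted Distinct (EAD) style score.

    Instead of dividing the distinct-token count by the total token
    count (as in the classic distinct-n metric), the count is divided
    by its expectation under uniform sampling, which removes the bias
    that penalizes longer texts.
    """
    c = len(tokens)                # total number of tokens
    n_distinct = len(set(tokens))  # number of distinct tokens
    # Expected number of distinct tokens when c tokens are drawn
    # uniformly at random from a vocabulary of size vocab_size.
    expected = vocab_size * (1 - ((vocab_size - 1) / vocab_size) ** c)
    return n_distinct / expected
```

For example, a fully repetitive response such as `["good"] * 10` scores far lower than a response of ten distinct tokens of the same length, while the normalization no longer shrinks automatically as responses get longer.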

Entity-Based Knowledge Conflicts in Question Answering

The findings demonstrate the importance for practitioners of evaluating a model's tendency to hallucinate rather than read, and show that the mitigation strategy encourages generalization to evolving information (i.e., time-dependent queries).