How to make latent factors interpretable by feeding Factorization machines with knowledge graphs

@article{Anelli2019HowTM,
  title={How to make latent factors interpretable by feeding Factorization machines with knowledge graphs},
  author={Vito Walter Anelli and T. D. Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.05038}
}
Model-based approaches to recommendation can recommend items with a very high level of accuracy. Unfortunately, even when the model embeds content-based information, once we move to a latent space we lose any reference to the actual semantics of the recommended items. Consequently, interpreting the recommendation process becomes non-trivial. In this paper, we show how to initialize latent factors in Factorization Machines by using semantic features coming from a knowledge graph in order to…
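The idea sketched in the abstract, binding each FM latent dimension to one knowledge-graph feature and initializing item factors from the graph rather than at random, can be pictured with a small, hypothetical Python sketch. The names and data layout below (init_item_factors, kg_triples) are illustrative assumptions, not the authors' implementation.

import numpy as np

def init_item_factors(items, kg_triples):
    """Build KG-aware initial item factors for a Factorization Machine.

    items:      list of item identifiers
    kg_triples: iterable of (item, predicate, obj) statements, e.g. from DBpedia
    """
    kg_triples = list(kg_triples)
    # One latent dimension per distinct semantic feature (predicate, object).
    features = sorted({(p, o) for _, p, o in kg_triples})
    feat_idx = {f: k for k, f in enumerate(features)}
    item_idx = {it: r for r, it in enumerate(items)}

    # Binary initialization: V[i, f] = 1 iff item i is described by feature f.
    V = np.zeros((len(items), len(features)))
    for it, p, o in kg_triples:
        if it in item_idx:
            V[item_idx[it], feat_idx[(p, o)]] = 1.0
    return V, features  # V serves as the FM's initial item-factor block

Because every latent dimension keeps its knowledge-graph meaning, the factor weights learned during training can be read back as statements about predicates and objects, which is what makes the resulting model interpretable.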

Semantic Interpretation of Top-N Recommendations

TLDR
This paper shows how to initialize latent factors in Factorization Machines by using semantic features coming from knowledge graphs to train an interpretable model, which is, in turn, able to provide recommendations with a high level of accuracy.

Semantic interpretability of latent factors for recommendation

TLDR
This extended abstract shows how to initialize latent factors in Factorization Machines by using semantic features coming from a knowledge graph in order to train an interpretable model.

Knowledge-Aware Interpretable Recommender Systems

TLDR
This chapter describes two approaches to recommendation that make use of the semantics encoded in a knowledge graph to train interpretable models; these models keep the original semantics of the item descriptions, thus providing a powerful tool to automatically compute explainable results.

Sparse Feature Factorization for Recommender Systems with Knowledge Graphs

TLDR
KGFlex is presented: a sparse factorization approach that grants an even greater degree of expressiveness and an extensive experimental evaluation shows the approach’s effectiveness, considering the recommendation results’ accuracy, diversity, and induced bias.

Knowledge-aware Recommendations Based on Neuro-Symbolic Graph Embeddings and First-Order Logical Rules

TLDR
The results show that the combination of KG embeddings and FOL rules led to an improvement in the accuracy and in the novelty of the recommendations.

Together is Better: Hybrid Recommendations Combining Graph Embeddings and Contextualized Word Representations

TLDR
A hybrid recommendation framework based on the combination of graph embeddings and contextualized word representations is proposed; it outperforms several competitive baselines and shows that the use of a hybrid representation improves predictive accuracy.

Recommender Systems Based on Graph Embedding Techniques: A Review

TLDR
This article systematically reviews graph embedding-based recommendation, covering embedding techniques for bipartite graphs, general graphs, and knowledge graphs, proposes a general design pipeline for such systems, and shows that conventional models can still, overall, outperform graph embedding-based ones in predicting implicit user-item interactions.

INK: knowledge graph embeddings for node classification

TLDR
This paper presents INK: Instance Neighbouring by using Knowledge, a novel technique to learn binary feature-based representations, which are comprehensible to humans, for nodes of interest in a knowledge graph.

Knowledge-enhanced Shilling Attacks for Recommendation

TLDR
This work introduces SAShA, a new attack strategy that leverages semantic features extracted from a knowledge graph to strengthen the efficacy of attacks against standard CF models; the results underline the vulnerability of well-known CF models to the proposed semantic attacks.

Fourth Knowledge-aware and Conversational Recommender Systems Workshop (KaRS)

TLDR
Although very effective in predicting relevant items, collaborative approaches miss some very interesting features that go beyond the accuracy of results and move in the direction of providing novel and diverse results as well as generating explanations for recommended items.

References

SHOWING 1-10 OF 49 REFERENCES

Recurrent knowledge graph embedding for effective recommendation

TLDR
RKGE is presented, a KG embedding approach that automatically learns semantic representations of both entities and paths between entities to characterize user preferences towards items; experiments show the superiority of RKGE over state-of-the-art methods.

TEM: Tree-enhanced Embedding Model for Explainable Recommendation

TLDR
A novel solution named Tree-enhanced Embedding Model (TEM) is proposed that combines the strengths of embedding-based and tree-based models; at the core of the embedding method is an easy-to-interpret attention network, making the recommendation process fully transparent and explainable.

Explanation Mining: Post Hoc Interpretability of Latent Factor Models for Recommendation Systems

TLDR
This work proposes a novel approach for extracting explanations from latent factor recommendation systems by training association rules on the output of a matrix factorisation black-box model, which mitigates the accuracy-interpretability trade-off whilst avoiding the need to sacrifice flexibility or use external data sources.

Learning to Rank Features for Recommendation over Multiple Categories

TLDR
This paper proposes a novel model called LRPPM-CF, which improves performance in capturing the features users are interested in and in item recommendation by about 17%-24% and 7%-13%, respectively, compared with several state-of-the-art methods.

Addressing Interpretability and Cold-Start in Matrix Factorization for Recommender Systems

TLDR
It is shown that the recommendation accuracy of the algorithm is competitive with that of state-of-the-art matrix factorization techniques, while having the advantage of offering recommendations that are textually and visually interpretable.

Explainable Matrix Factorization for Collaborative Filtering

TLDR
This paper proposes a new Explainable Matrix Factorization (EMF) technique that computes an accurate top-n recommendation list of items that are explainable, and introduces new explanation quality metrics, called Mean Explainability Precision (MEP) and Mean Explainability Recall (MER).
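For orientation, these two metrics are usually written along the following lines; the notation is ours and only a hedged reconstruction, not necessarily the paper's exact definitions. With R_n(u) the top-n list recommended to user u and E(u) the set of items deemed explainable for u:

\mathrm{MEP} = \frac{1}{|U|} \sum_{u \in U} \frac{|R_n(u) \cap E(u)|}{|R_n(u)|},
\qquad
\mathrm{MER} = \frac{1}{|U|} \sum_{u \in U} \frac{|R_n(u) \cap E(u)|}{|E(u)|}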

Pairwise interaction tensor factorization for personalized tag recommendation

TLDR
The factorization model PITF (Pairwise Interaction Tensor Factorization) is presented; it is a special case of the Tucker Decomposition (TD) model with linear runtime for both learning and prediction, and it is shown to largely outperform TD in runtime while even achieving better prediction quality.
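Concretely, the pairwise interaction model scores a (user, item, tag) triple with two pairwise terms only; in our notation (a sketch of the published model, not a verbatim copy):

\hat{y}_{u,i,t} = \sum_{f=1}^{k} \hat{u}_{u,f}\, \hat{t}^{U}_{t,f} + \sum_{f=1}^{k} \hat{i}_{i,f}\, \hat{t}^{I}_{t,f}

The user-item interaction is dropped because it is constant across the tags being ranked for a given (user, item) pair, which is what yields the linear runtime compared with the full Tucker Decomposition.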

Fast context-aware recommendations with factorization machines

TLDR
This work proposes to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions and shows empirically that this approach outperforms Multiverse Recommendation in prediction quality and runtime.
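The underlying predictor is the standard second-order Factorization Machine: context variables (e.g. time, mood, companion) are simply appended to the feature vector x, and every pair of variables interacts through a dot product of latent vectors,

\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j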

Factorization meets the neighborhood: a multifaceted collaborative filtering model

TLDR
The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model; a new evaluation metric is also suggested that highlights the differences among methods based on their performance on a top-K recommendation task.
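A simplified form of the merged predictor (notation ours, omitting the purely implicit-feedback neighborhood term) reads

\hat{r}_{ui} = \mu + b_u + b_i + \mathbf{q}_i^{\top}\Bigl(\mathbf{p}_u + |N(u)|^{-1/2} \sum_{j \in N(u)} \mathbf{y}_j\Bigr) + |R(u)|^{-1/2} \sum_{j \in R(u)} (r_{uj} - b_{uj})\, w_{ij}

where the third term is the implicit-feedback-aware latent factor model and the last term is the item-item neighborhood model with learned interpolation weights w_{ij}.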

Explainable Restricted Boltzmann Machines for Collaborative Filtering

TLDR
This paper proposes a new Explainable RBM technique that computes the top-n recommendation list from items that are explainable, and shows that the method is effective in generating accurate and explainable recommendations.