Personal Attribute Prediction from Conversations

@article{Liu2022PersonalAP,
  title={Personal Attribute Prediction from Conversations},
  author={Yinan Liu and Hu Chen and Wei Shen},
  journal={Companion Proceedings of the Web Conference 2022},
  year={2022}
}
Personal knowledge bases (PKBs) are critical to many applications, such as Web-based chatbots and personalized recommendation. Conversations containing rich personal knowledge can be regarded as a main source to populate the PKB. Given a user, a user attribute, and user utterances from a conversational system, we aim to predict the personal attribute value for the user, which is helpful for the enrichment of PKBs. However, there are three issues existing in previous studies: (1) manually… 


Low-resource Personal Attribute Prediction from Conversation

A novel framework PEARL is proposed to predict personal attributes from conversations by leveraging the abundant personal attribute knowledge from utterances under a low-resource setting in which no labeled utterances or external data are utilized.

References


Listening between the Lines: Learning Personal Attributes from Conversations

This work proposes methods for inferring personal attributes, such as profession, age or family status, from conversations, using several Hidden Attribute Models: deep neural networks that leverage attention mechanisms and embeddings.
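
As a rough illustration of that kind of architecture, the sketch below builds an attention-weighted bag of term embeddings followed by a linear classifier over attribute values; the class name, dimensions and toy input are illustrative, not the authors' code.

```python
# Illustrative sketch (not the authors' code): attention-weighted bag of term
# embeddings followed by a linear classifier over attribute values.
import torch
import torch.nn as nn

class AttentionAttributeModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_values):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.attn = nn.Linear(embed_dim, 1)                 # scores each term
        self.classifier = nn.Linear(embed_dim, num_values)  # attribute values

    def forward(self, token_ids):                           # (batch, seq_len)
        emb = self.embed(token_ids)                         # (batch, seq_len, dim)
        scores = self.attn(emb).squeeze(-1)
        scores = scores.masked_fill(token_ids == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)             # attention over terms
        pooled = (weights.unsqueeze(-1) * emb).sum(dim=1)   # weighted average
        return self.classifier(pooled)                      # logits per value

model = AttentionAttributeModel(vocab_size=30_000, embed_dim=100, num_values=10)
logits = model(torch.randint(1, 30_000, (2, 50)))           # two toy utterances
```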

Personal knowledge graph population from user utterances in conversational understanding

A statistical language understanding approach is introduced that automatically constructs personal (user-centric) knowledge graphs from conversational dialogs, in order to better understand and fulfill users' requests and to enable other technologies such as improved inference and proactive interaction.

CHARM: Inferring Personal Attributes from Conversations

CHARM is a zero-shot learning method that creatively leverages keyword extraction and document retrieval in order to predict attribute values that were never seen during training.
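
The sketch below illustrates that two-stage idea with plain TF-IDF retrieval: pick attribute-indicative keywords from the utterances, then rank candidate attribute values by the documents that describe them. The toy utterance, candidate values and retrieval method are assumptions for illustration, not CHARM's actual components.

```python
# Illustrative two-stage pipeline (not CHARM's implementation): keyword extraction
# from utterances, then ranking candidate attribute values by document retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

utterances = ["I spent the whole night debugging our backend services again."]
# Hypothetical documents describing each candidate attribute value (professions here).
value_documents = {
    "software engineer": "writes code, debugs services, builds backend systems",
    "nurse": "cares for patients, works night shifts in a hospital ward",
    "teacher": "prepares lessons, grades homework, teaches students in class",
}

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(value_documents.values())

# Stage 1: keyword extraction -- here simply the utterance terms with highest TF-IDF.
utt_vec = vectorizer.transform([" ".join(utterances)]).toarray()[0]
terms = vectorizer.get_feature_names_out()
keywords = [t for t, w in sorted(zip(terms, utt_vec), key=lambda x: -x[1]) if w > 0][:5]

# Stage 2: document retrieval -- score each candidate value's document by the keywords.
query_vec = vectorizer.transform([" ".join(keywords)])
scores = cosine_similarity(query_vec, doc_matrix)[0]
ranked = sorted(zip(value_documents, scores), key=lambda x: -x[1])
print(keywords, ranked[0])   # extracted keywords and the best-scoring attribute value
```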

Age Inference Using A Hierarchical Attention Neural Network

A hierarchical attention neural model is proposed that integrates linguistic knowledge from both text and emojis when making a prediction, capturing the intra-post relationships between these post components as well as the inter-post relationships across a user's posts.
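
A minimal sketch of such hierarchical attention follows: word-level attention summarizes each post, and post-level attention summarizes the user's posts into one vector for age-group classification. The paper additionally keeps separate text and emoji channels; this sketch uses a single token channel, and all names and sizes are illustrative.

```python
# Minimal sketch of hierarchical attention (not the paper's exact model):
# word-level attention per post, then post-level attention across a user's posts.
import torch
import torch.nn as nn

def attend(x, scorer, mask=None):
    """Softmax-weighted sum over the second-to-last dimension of x."""
    scores = scorer(x).squeeze(-1)
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
    return (weights * x).sum(dim=-2)

class HierarchicalAttentionAgeModel(nn.Module):
    def __init__(self, vocab_size, dim, num_age_groups):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.word_scorer = nn.Linear(dim, 1)
        self.post_scorer = nn.Linear(dim, 1)
        self.classifier = nn.Linear(dim, num_age_groups)

    def forward(self, token_ids):                  # (batch, posts, words)
        emb = self.embed(token_ids)                # (batch, posts, words, dim)
        post_vecs = attend(emb, self.word_scorer, token_ids != 0)   # intra-post
        user_vec = attend(post_vecs, self.post_scorer)              # inter-post
        return self.classifier(user_vec)           # logits over age groups

model = HierarchicalAttentionAgeModel(vocab_size=20_000, dim=64, num_age_groups=5)
logits = model(torch.randint(1, 20_000, (2, 8, 30)))   # 2 users, 8 posts, 30 tokens
```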

Text Classification Using Label Names Only: A Language Model Self-Training Approach

This paper uses pre-trained neural language models both as general linguistic knowledge sources for category understanding and as representation learning models for document classification, and achieves around 90% accuracy on four benchmark datasets.
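  
The snippet below illustrates only the first step of that label-names-only idea, assuming an off-the-shelf masked language model (here bert-base-uncased) is available: mask an occurrence of the label name in unlabeled text and read off the model's top predictions as category-indicative vocabulary, which can later seed pseudo-labeling and self-training.

```python
# Illustrative first step only (not the paper's full pipeline): mask an occurrence
# of the label name in unlabeled text and collect the masked LM's top predictions
# as category-indicative vocabulary.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

label_name = "sports"                                      # illustrative label name
sentence = "He watched sports on TV all weekend."          # unlabeled text containing it
masked = sentence.replace(label_name, fill.tokenizer.mask_token)
candidates = [p["token_str"] for p in fill(masked, top_k=10)]
print(candidates)   # words the model treats as interchangeable with the label name here
```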

Predicting Named Entity Location Using Twitter

NELPT is proposed, the first unsupervised framework for named-entity city-level location prediction, which leverages geographical location knowledge from Twitter and uses a linear neural network as the predictive model to combine two categories of information.
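
As a generic sketch of a linear predictor over two feature groups, the snippet below concatenates two placeholder feature vectors and scores candidate cities with a single linear layer; the feature contents and dimensions are placeholders, not the features NELPT actually uses.

```python
# Generic sketch only: a linear layer over the concatenation of two feature groups,
# producing scores over candidate cities. feature_a / feature_b are placeholders for
# the two categories of information, not NELPT's actual features.
import torch
import torch.nn as nn

class LinearLocationPredictor(nn.Module):
    def __init__(self, dim_a, dim_b, num_cities):
        super().__init__()
        self.linear = nn.Linear(dim_a + dim_b, num_cities)

    def forward(self, feature_a, feature_b):
        return self.linear(torch.cat([feature_a, feature_b], dim=-1))

model = LinearLocationPredictor(dim_a=50, dim_b=20, num_cities=100)
scores = model(torch.randn(4, 50), torch.randn(4, 20))     # city scores for 4 entities
```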

End-to-End Neural Ad-hoc Ranking with Kernel Pooling

K-NRM uses a translation matrix that models word-level similarities via word embeddings, a new kernel-pooling technique that uses kernels to extract multi-level soft match features, and a learning-to-rank layer that combines those features into the final ranking score.
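
A simplified sketch of that kernel-pooling idea follows: cosine similarities between query and document word embeddings form the translation matrix, RBF kernels soft-count matches at several similarity levels, and a linear layer turns the pooled features into a ranking score. The kernel settings and log1p pooling here are simplifications, not the paper's exact configuration.

```python
# Simplified K-NRM-style kernel pooling (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class KNRM(nn.Module):
    def __init__(self, vocab_size, embed_dim,
                 mus=(-0.9, -0.6, -0.3, 0.0, 0.3, 0.6, 0.9, 1.0), sigma=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.register_buffer("mus", torch.tensor(mus))
        self.sigma = sigma
        self.rank = nn.Linear(len(mus), 1)          # learning-to-rank layer

    def forward(self, query_ids, doc_ids):          # (batch, q_len), (batch, d_len)
        q = F.normalize(self.embed(query_ids), dim=-1)
        d = F.normalize(self.embed(doc_ids), dim=-1)
        trans = torch.bmm(q, d.transpose(1, 2))     # translation matrix: cosine sims
        # RBF kernels soft-count document matches for each query term at each mu.
        k = torch.exp(-((trans.unsqueeze(-1) - self.mus) ** 2) / (2 * self.sigma ** 2))
        soft_tf = k.sum(dim=2)                      # (batch, q_len, n_kernels)
        features = torch.log1p(soft_tf).sum(dim=1)  # pool over query terms
        return self.rank(features).squeeze(-1)      # ranking score per (query, doc)

model = KNRM(vocab_size=10_000, embed_dim=50)
score = model(torch.randint(1, 10_000, (2, 5)), torch.randint(1, 10_000, (2, 40)))
```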

The Probabilistic Relevance Framework: BM25 and Beyond

This work presents the PRF from a conceptual point of view, describing the probabilistic modelling assumptions behind the framework and the different ranking algorithms that result from its application: the binary independence model, relevance feedback models, BM25 and BM25F.
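
For concreteness, a standard BM25 scoring function is sketched below; the k1 and b defaults are the usual textbook choices, not values prescribed by this particular paper.

```python
# Standard BM25 scoring function (a common formulation, with the usual k1/b defaults).
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one document against a query, given the whole corpus for IDF and avgdl."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)   # smoothed IDF
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

corpus = [["probabilistic", "relevance", "framework"],
          ["bm25", "ranking", "function"],
          ["neural", "ranking", "models"]]
print(bm25_score(["bm25", "ranking"], corpus[1], corpus))
```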

An analysis of the user occupational class through Twitter content

This study focuses on predicting the occupational class of a public user profile, using a new annotated corpus of Twitter users with their job titles, posted textual content and platform-related attributes, and confirms the feasibility of inferring a new user attribute that can be embedded in a multitude of downstream applications.

N-GrAM: New Groningen Author-profiling Model

The aim was to create a single model for both gender and language variety, across all varieties: a linear support vector machine (SVM) with word unigrams and character 3- to 5-grams as features.
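
The sketch below reproduces that feature setup with scikit-learn: a linear SVM over a union of word-unigram and character 3- to 5-gram features. The TF-IDF weighting and the toy language-variety data are assumptions, not the authors' exact configuration.

```python
# Sketch of the described feature setup: linear SVM over word unigrams plus
# character 3- to 5-grams. Weighting and toy data are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.svm import LinearSVC

model = make_pipeline(
    FeatureUnion([
        ("word_unigrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 1))),
        ("char_3to5", TfidfVectorizer(analyzer="char", ngram_range=(3, 5))),
    ]),
    LinearSVC(),
)

texts = ["I reckon the weather will hold up for the barbie this arvo",
         "I guess the weather will be fine for the cookout this afternoon"]
labels = ["en-au", "en-us"]        # toy language-variety labels for illustration only
model.fit(texts, labels)
print(model.predict(["reckon it will be a fine arvo"]))
```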