Corpus ID: 92990599

Leveraging User Engagement Signals For Entity Labeling in a Virtual Assistant

@article{Muralidharan2019LeveragingUE,
  title={Leveraging User Engagement Signals For Entity Labeling in a Virtual Assistant},
  author={Deepak Muralidharan and Justine T. Kao and Xiao Yang and Lin Li and Lavanya Viswanathan and Mubarak Seyed Ibrahim and Kevin Luikens and Stephen G. Pulman and Ashish Garg and Atish Kothari and Jason Williams},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.09143}
}
Personal assistant AI systems such as Siri, Cortana, and Alexa have become widely used as a means to accomplish tasks through natural language commands. However, components in these systems generally rely on supervised machine learning algorithms that require large amounts of hand-annotated training data, which is expensive and time-consuming to collect. The ability to incorporate unsupervised, weakly supervised, or distantly supervised data holds significant promise in overcoming this…

Citations of this paper

A Scalable Framework for Learning From Implicit User Feedback to Improve Natural Language Understanding in Large-Scale Conversational AI Systems
TLDR
This work proposes a scalable and automatic approach for improving NLU in a large-scale conversational AI system by leveraging implicit user feedback, with the insight that user interaction data and dialog context embed rich information from which user satisfaction and intention can be inferred.
Data-Efficient Paraphrase Generation to Bootstrap Intent Classification and Slot Labeling for New Features in Task-Oriented Dialog Systems
TLDR
This paper proposes a new, data-efficient approach using an interpretation-to-text model for paraphrase generation, which, in combination with shuffling-based sampling techniques, obtains diverse and novel paraphrases from small amounts of seed data.
Leveraging User Paraphrasing Behavior In Dialog Systems To Automatically Collect Annotations For Long-Tail Utterances
TLDR
MARUPA creates new data fully automatically, without manual intervention or effort from annotators, specifically for currently failing utterances; by re-training the dialog system on this new data, accuracy and coverage for long-tail utterances can be improved.
Multilingual Paraphrase Generation For Bootstrapping New Features in Task-Oriented Dialog Systems
TLDR
A multilingual paraphrase generation model is proposed that can generate novel utterances for a target feature and target language, and it shows promise across languages, even in a zero-shot setting where no seed data is available.
Search based Self-Learning Query Rewrite System in Conversational AI
TLDR
This work proposes a search-based self-learning QR framework, UFS-QR, which focuses on automatic reduction of user friction for large-scale conversational AI agents, and demonstrates the effectiveness of the UFS-QR system, trained without any annotated data, through offline and online A/B experiments on Amazon Alexa user traffic.
Using Pause Information for More Accurate Entity Recognition
TLDR
It is demonstrated that this linguistic observation about pauses can be used to improve accuracy in machine-learned language understanding tasks; the proposed novel embeddings reduce the relative error rate by up to 8% consistently across three domains for French, without any added annotation or alignment costs to the parser.
Feedback Attribution for Counterfactual Bandit Learning in Multi-Domain Spoken Language Understanding
TLDR
This paper introduces an experimental setup to simulate the feedback attribution problem that arises when using counterfactual bandit learning for multi-domain spoken language understanding, and proposes attribution methods, inspired by multi-agent reinforcement learning, that allow training competitive models from user feedback.
When a Voice Assistant Asks for Feedback: An Empirical Study on Customer Experience with A/B Testing and Causal Inference Methods
TLDR
This paper attempts to quantify the customer experience (CX) of providing feedback, identify the driving factors of CX, and offer insights into improving CX using the identified drivers, performing causal inference with Double Machine Learning.
Beyond Turing: Intelligent Agents Centered on the User
TLDR
This work looks at the origins of agent-centric research for slot-filling, gaming and chatbot agents and argues that it is important to concentrate more on the user.
Report from the NSF Future Directions Workshop, Toward User-Oriented Agents: Research Directions and Challenges
TLDR
The participants defined the main research areas within the domain of intelligent agents and discussed the major future directions that research in each area should take.

References

SHOWING 1-10 OF 10 REFERENCES
Leveraging Knowledge Bases in LSTMs for Improving Machine Reading
TLDR
KBLSTM, a novel neural model that leverages continuous representations of KBs to enhance the learning of recurrent neural networks for machine reading, is proposed; it achieves accuracies that surpass the previous state-of-the-art results for both entity extraction and event extraction on the widely used ACE2005 dataset.
An Exploration of Three Lightly-supervised Representation Learning Approaches for Named Entity Classification
TLDR
This work is the first to adapt three semi-supervised representation learning methods to an information extraction task, specifically, named entity classification, and finds that one of the best performers relies on the mean teacher framework, a simple teacher/student approach that is independent of the underlying task-specific model.
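The mean teacher framework mentioned above maintains a teacher model whose weights are an exponential moving average (EMA) of the student's weights after each training step. The following is a minimal sketch of just that update rule, with toy weight vectors and an assumed decay value; it is not the paper's implementation.

```python
def ema_update(teacher_w, student_w, decay=0.99):
    """Mean-teacher EMA step: teacher <- decay * teacher + (1 - decay) * student."""
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher_w, student_w)]

# Toy example: the teacher's weights drift toward a fixed student.
teacher = [0.0, 0.0, 0.0, 0.0]
student = [1.0, 1.0, 1.0, 1.0]
for _ in range(100):
    teacher = ema_update(teacher, student, decay=0.9)
```

Because the teacher lags the student smoothly, its predictions can serve as more stable consistency targets than the student's own.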
Unsupervised Models for Named Entity Classification
TLDR
It is shown that the use of unlabeled data can reduce the requirements for supervision to just 7 simple "seed" rules, gaining leverage from natural redundancy in the data.
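The seed-rule bootstrapping idea can be illustrated with a toy sketch, loosely in the spirit of Collins and Singer's approach: seed entities project their labels onto the contexts they occur in, and those contexts then label previously unseen entities. The seed rules, corpus tuples, and majority-vote scoring below are illustrative assumptions, not the paper's actual data or decision-list learner.

```python
# Hypothetical seed rules: entity string -> class label.
SEEDS = {"ibm": "ORGANIZATION", "smith": "PERSON", "new york": "LOCATION"}

# Toy corpus of (entity, surrounding context) pairs; some entities are unlabeled.
CORPUS = [
    ("ibm", "shares of"),
    ("microsoft", "shares of"),
    ("smith", "said that"),
    ("jones", "said that"),
    ("new york", "flew to"),
    ("boston", "flew to"),
]

def bootstrap(seeds, corpus):
    labels = dict(seeds)
    # Step 1: project seed labels onto the contexts they occur with.
    context_votes = {}
    for entity, context in corpus:
        if entity in labels:
            votes = context_votes.setdefault(context, {})
            votes[labels[entity]] = votes.get(labels[entity], 0) + 1
    # Step 2: label unseen entities by their context's majority vote.
    for entity, context in corpus:
        if entity not in labels and context in context_votes:
            votes = context_votes[context]
            labels[entity] = max(votes, key=votes.get)
    return labels

labels = bootstrap(SEEDS, CORPUS)
```

Iterating the two steps lets a handful of seeds expand coverage across the corpus, exploiting the natural redundancy between entity spellings and their contexts.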
Dialogue manager domain adaptation using Gaussian process reinforcement learning
Distributed Representations of Words to Guide Bootstrapped Entity Classifiers
TLDR
This work uses the word vectors to expand entity sets used for training classifiers in a bootstrapped pattern-based entity extraction system, and shows that the classifiers trained with the expanded sets perform better on entity extraction from four online forums.
Dropout: a simple way to prevent neural networks from overfitting
TLDR
It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
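As a concrete illustration (not the paper's code), the dropout idea can be written in a few lines using the common "inverted dropout" formulation; the drop probability and activation values are toy choices.

```python
import random

def dropout(xs, p=0.5, rng=None, train=True):
    """Inverted dropout: zero each unit with probability p at train time,
    scaling survivors by 1/(1-p) so the expected activation is unchanged.
    At inference (train=False) the activations pass through untouched."""
    if not train or p == 0.0:
        return list(xs)
    rng = rng or random.Random(0)
    return [x / (1.0 - p) if rng.random() >= p else 0.0 for x in xs]

# Toy layer of activations; with p=0.5 survivors are scaled to 2.0.
activations = [1.0] * 8
dropped = dropout(activations, p=0.5)
```

Randomly thinning units this way prevents co-adaptation, which is the mechanism behind the regularization effect the summary describes.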
POMDP-Based Statistical Spoken Dialog Systems: A Review
TLDR
This review article provides an overview of the current state of the art in the development of POMDP-based spoken dialog systems.
Improved Pattern Learning for Bootstrapped Entity Extraction
TLDR
This paper uses various unsupervised features based on contrasting domain-specific and general text, and exploiting distributional similarity and edit distances to learned entities to improve pattern scoring.
Distributed Representations of Words and Phrases and their Compositionality
TLDR
This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
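The negative-sampling objective can be sketched for a single (center, context) pair: maximize the log-sigmoid score of the true context vector while pushing down k randomly sampled negative vectors. The vector dimension, the number of negatives, and the random initialization below are arbitrary toy choices, not the paper's training setup.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def neg_sampling_loss(v_center, u_context, u_negatives):
    """Negative log-likelihood of the skip-gram negative-sampling objective:
    log sigma(u_o . v_c) + sum_k log sigma(-u_k . v_c), negated for minimization."""
    pos = math.log(sigmoid(dot(u_context, v_center)))
    neg = sum(math.log(sigmoid(-dot(u_k, v_center))) for u_k in u_negatives)
    return -(pos + neg)

# Toy vectors: one center word, one true context, five sampled negatives.
rng = random.Random(0)
dim = 16
v_c = [rng.gauss(0.0, 0.1) for _ in range(dim)]
u_o = [rng.gauss(0.0, 0.1) for _ in range(dim)]
u_neg = [[rng.gauss(0.0, 0.1) for _ in range(dim)] for _ in range(5)]
loss = neg_sampling_loss(v_c, u_o, u_neg)
```

Each training pair thus touches only k+1 output vectors instead of the whole vocabulary, which is what makes this a cheap alternative to the hierarchical softmax.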
Bidirectional LSTM Networks for Improved Phoneme Classification and Recognition
TLDR
In this paper, two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory networks are carried out and it is found that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system.