Improving Sequential Query Recommendation with Immediate User Feedback

@article{Parambath2022ImprovingSQ,
  title={Improving Sequential Query Recommendation with Immediate User Feedback},
  author={Shameem Puthiya Parambath and Christos Anagnostopoulos and Roderick Murray-Smith},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.06297}
}
We propose an algorithm for next-query recommendation in interactive data exploration settings, such as knowledge discovery for information gathering. State-of-the-art query recommendation algorithms are based on sequence-to-sequence learning approaches that exploit historical interaction data. We propose to augment transformer-based causal language models for query recommendation so that they adapt to immediate user feedback using a multi-armed bandit (MAB) framework. We conduct a large-scale…
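
A minimal, illustrative sketch of the idea described above (not the authors' implementation): a causal language model scores a pool of candidate next queries, and a UCB-style bandit layer blends those scores with the immediate feedback observed online. All names (QueryBandit, lm_scores, explore) are hypothetical.

    import math

    class QueryBandit:
        """UCB-style arm selection over LM-scored candidate queries (sketch).

        lm_scores maps each candidate query to a language-model score (e.g. a
        log-likelihood from a causal LM); it acts as a prior that is blended
        with the empirical feedback collected online.
        """

        def __init__(self, lm_scores, explore=1.0):
            self.lm_scores = dict(lm_scores)  # prior utility from the causal LM
            self.explore = explore            # exploration strength
            self.pulls = {q: 0 for q in lm_scores}
            self.rewards = {q: 0.0 for q in lm_scores}
            self.t = 0

        def recommend(self):
            """Pick the next query to show to the user."""
            self.t += 1

            def score(q):
                if self.pulls[q] == 0:        # untried arms first, ranked by the LM prior
                    return 1e6 + self.lm_scores[q]
                mean = self.rewards[q] / self.pulls[q]
                bonus = self.explore * math.sqrt(math.log(self.t) / self.pulls[q])
                return self.lm_scores[q] + mean + bonus

            return max(self.lm_scores, key=score)

        def update(self, query, accepted):
            """Record immediate feedback (True if the user took the suggestion)."""
            self.pulls[query] += 1
            self.rewards[query] += float(accepted)

    # Hypothetical usage: the scores would come from a fine-tuned causal LM.
    bandit = QueryBandit({"rome hotels": -1.2, "rome weather": -1.8, "flights to rome": -2.4})
    q = bandit.recommend()
    bandit.update(q, accepted=True)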

References (showing 1-10 of 29)
Max-Utility Based Arm Selection Strategy For Sequential Query Recommendations
TLDR: It is shown that in tasks like online information gathering, where sequential query recommendations are employed, the sequences of queries are correlated, and that the number of potentially optimal queries can be reduced to a manageable size by selecting queries with maximum utility with respect to the currently executing query.
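
A small sketch of this pruning step (the utility function here is a stand-in, e.g. an embedding similarity, not the paper's exact definition): candidates are ranked by utility with respect to the currently executing query and only the top k are kept as bandit arms.

    def top_utility_arms(current_query, candidates, utility, k=50):
        """Keep the k candidates with the highest utility w.r.t. the current query.

        utility(current_query, candidate) is assumed to return a real-valued
        score; the reduced set can then be handed to any standard bandit policy.
        """
        ranked = sorted(candidates, key=lambda q: utility(current_query, q), reverse=True)
        return ranked[:k]
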
Context Attentive Document Ranking and Query Suggestion
TLDR: A two-level hierarchical recurrent neural network is introduced to learn search context representations of individual queries, search tasks, and the corresponding dependency structure by jointly optimizing two companion retrieval tasks: document ranking and query suggestion.
Query Suggestion with Feedback Memory Network
TLDR: The Feedback Memory Network, which models user interactions with the search engine for query suggestion, provides more diverse and accurate suggestions; this is especially helpful for ambiguous sessions, where more information is required to infer the search intent.
RIN: Reformulation Inference Network for Context-Aware Query Suggestion
TLDR: This paper proposes the Reformulation Inference Network (RIN) to learn how users reformulate queries, thereby benefiting context-aware query suggestion, and demonstrates that RIN outperforms competitive baselines across various situations on both the discriminative and generative context-aware query suggestion tasks.
Using BERT and BART for Query Suggestion
TLDR: It is shown that pre-trained transformer networks exhibit very good performance for query suggestion on a large corpus of search logs, are more robust to noise, and have a better understanding of complex queries.
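
As a concrete illustration of that interface (assuming the Hugging Face transformers package and the public facebook/bart-base checkpoint; a real system would first fine-tune on search-log sessions), a pretrained BART model can generate a candidate next query from the session context:

    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    # Previous queries in the session, joined with an illustrative separator.
    session = "cheap flights to rome </s> rome city pass"
    inputs = tokenizer(session, return_tensors="pt")
    output_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=16)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
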
Conversational Query Understanding Using Sequence to Sequence Modeling
TLDR: A large-scale, open-domain dataset of conversational queries and various sequence-to-sequence models learned from this dataset are presented, showing the potential of sequence-to-sequence modeling for this task.
Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
TLDR: It is proved that the subsampled greedy algorithm is rate-optimal for Bernoulli bandits when the number of arms k exceeds √T, and that it achieves sublinear regret for more general reward distributions.
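
A minimal sketch of the subsampled greedy idea as summarized above (my reading, not the paper's exact procedure): subsample roughly √T of the k arms up front, then always pull the subsampled arm with the best empirical mean.

    import math
    import random

    def subsampled_greedy(arms, pull, horizon):
        """Subsample ~sqrt(horizon) arms, then play greedily on the subsample.

        arms is a list of arm identifiers and pull(arm) returns a stochastic
        reward in [0, 1]; both are assumed interfaces for this sketch.
        """
        subset = random.sample(arms, min(len(arms), max(1, int(math.sqrt(horizon)))))
        pulls = {a: 0 for a in subset}
        total = {a: 0.0 for a in subset}
        for _ in range(horizon):
            # Greedy choice: best empirical mean, with unpulled arms tried first.
            arm = max(subset, key=lambda a: total[a] / pulls[a] if pulls[a] else float("inf"))
            reward = pull(arm)
            pulls[arm] += 1
            total[arm] += reward
        return total, pulls
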
How to use expert advice
TLDR: This work analyzes algorithms that predict a binary value by combining the predictions of several prediction strategies, called 'experts', and shows how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context.
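
For context, a compact sketch in the spirit of the weighted-majority style algorithms analyzed in this line of work (the update rule and the parameter eta are illustrative, not the paper's exact algorithm): each expert's weight shrinks when it errs, and the prediction is a weighted vote.

    import math

    def weighted_majority(expert_predictions, outcomes, eta=0.5):
        """Predict a binary sequence by a weighted vote over experts.

        expert_predictions[i][t] is expert i's 0/1 prediction at round t and
        outcomes[t] is the true label; a wrong expert's weight is multiplied
        by exp(-eta). Returns the number of mistakes made by the vote.
        """
        weights = [1.0] * len(expert_predictions)
        mistakes = 0
        for t, y in enumerate(outcomes):
            vote_one = sum(w for w, p in zip(weights, expert_predictions) if p[t] == 1)
            prediction = 1 if vote_one >= sum(weights) - vote_one else 0
            mistakes += int(prediction != y)
            # Multiplicatively penalize the experts that were wrong this round.
            weights = [w * (math.exp(-eta) if p[t] != y else 1.0)
                       for w, p in zip(weights, expert_predictions)]
        return mistakes
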
On Regret with Multiple Best Arms
TLDR: This work proposes an adaptive algorithm that is agnostic to the hardness level and theoretically derives its regret bound, and proves a lower bound for the problem setting, indicating that no algorithm can be optimal simultaneously over all hardness levels.
Language Models are Unsupervised Multitask Learners
TLDR: It is demonstrated that language models begin to learn a range of language tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems that learn to perform tasks from their naturally occurring demonstrations.