Match-Prompt: Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning

@article{Xu2022MatchPromptIM,
  title={Match-Prompt: Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning},
  author={Shicheng Xu and Liang Pang and Huawei Shen and Xueqi Cheng},
  journal={Proceedings of the 31st ACM International Conference on Information \& Knowledge Management},
  year={2022}
}
  • Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng
  • Published 6 April 2022
  • Computer Science
  • Proceedings of the 31st ACM International Conference on Information & Knowledge Management
Text matching is a fundamental technique in both information retrieval and natural language processing. Text matching tasks share the same paradigm that determines the relationship between two given texts. The relationships vary from task to task, e.g. relevance in document retrieval, semantic alignment in paraphrase identification and answerable judgment in question answering. However, the essential signals for text matching remain in a finite scope, i.e. exact matching, semantic matching, and… 
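The shared paradigm the abstract describes can be sketched as prompt templates that cast each matching task as the same cloze-style prediction over two texts. The templates, task names, and label words below are hypothetical illustrations, not the paper's actual prompt design:

```python
# Hypothetical templates mapping different matching tasks onto one
# cloze-style form; a masked LM would score label words at [MASK].
TEMPLATES = {
    "paraphrase": 'Sentence one: "{a}" Sentence two: "{b}" They mean the same? [MASK].',
    "retrieval":  'Query: "{a}" Document: "{b}" The document is relevant? [MASK].',
    "qa":         'Question: "{a}" Passage: "{b}" The passage answers it? [MASK].',
}
LABEL_WORDS = {True: "yes", False: "no"}  # illustrative verbalizer

def build_prompt(task: str, text_a: str, text_b: str) -> str:
    """Fill the task-specific template; every task reduces to predicting
    the word at the [MASK] slot."""
    return TEMPLATES[task].format(a=text_a, b=text_b)
```

Usage: `build_prompt("paraphrase", "A man plays guitar.", "Someone is playing a guitar.")` yields a single cloze question, so one model can serve all three tasks by swapping templates.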

NIR-Prompt: A Multi-task Generalized Neural Information Retrieval Training Framework

Experiments show that NIR-Prompt can improve the generalization of PLMs in NIR for both retrieval and reranking stages compared with baselines, under in-domain multi-task, out-of-domain multi-task, and new-task adaptation settings.

References

Showing 1-10 of 71 references

Match-Ignition: Plugging PageRank into Transformer for Long-form Text Matching

The main idea is to plug the well-known PageRank algorithm into the Transformer to identify and filter both sentence-level and word-level noise in the matching process; experiments show that Match-Ignition efficiently captures important sentences and words, facilitating long-form text matching.
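As a rough illustration of the idea summarized above, one can run PageRank over a sentence-similarity graph and keep only the top-ranked sentences. The token-overlap similarity and helper names below are illustrative stand-ins, not Match-Ignition's actual scoring or architecture:

```python
import numpy as np

def pagerank(adj: np.ndarray, damping: float = 0.85, iters: int = 50) -> np.ndarray:
    """Power-iteration PageRank over a weighted adjacency matrix."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    safe = np.where(col_sums == 0, 1, col_sums)
    # Column-normalize; dangling nodes get a uniform column.
    trans = np.where(col_sums > 0, adj / safe, 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * trans @ rank
    return rank

def top_sentences(sentences, k=2):
    """Keep the k highest-ranked sentences (token overlap as a toy similarity)."""
    toks = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    adj = np.array([[float(len(toks[i] & toks[j])) if i != j else 0.0
                     for j in range(n)] for i in range(n)])
    ranks = pagerank(adj)
    keep = sorted(np.argsort(-ranks)[:k])
    return [sentences[i] for i in keep]
```

The ranking step is the same iteration used for web-page PageRank; only the graph construction (here, sentence overlap) is domain-specific.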

A Deep Architecture for Matching Short Texts

This paper proposes a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains and applies this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question.

A Deep Relevance Matching Model for Ad-hoc Retrieval

A novel deep relevance matching model (DRMM) for ad-hoc retrieval that employs a joint deep architecture at the query term level for relevance matching and can significantly outperform some well-known retrieval models as well as state-of-the-art deep matching models.

Multi-Task Retrieval for Knowledge-Intensive Tasks

This work proposes a multi-task trained model for neural retrieval that not only outperforms previous methods in the few-shot setting, but also rivals specialised neural retrievers, even when in-domain training data is abundant.

Simple and Effective Text Matching with Richer Alignment Features

A fast and strong neural approach for general-purpose text matching that keeps three key features available for inter-sequence alignment (original point-wise features, previously aligned features, and contextual features) while simplifying all remaining components.

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing

The basics of this promising paradigm in natural language processing are introduced, a unified set of mathematical notations that can cover a wide variety of existing work are described, and existing work is organized along several dimensions.

Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference

This work introduces Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.

Convolutional Neural Network Architectures for Matching Natural Language Sentences

Convolutional neural network models for matching two sentences are proposed by adapting the convolutional strategies used in vision and speech; the models nicely represent the hierarchical structures of sentences through layer-by-layer composition and pooling.

Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN

It is shown that, when degenerated to the exact-matching scenario, Match-SRNN can approximate the dynamic programming process of the longest common subsequence, and that there exists a clear interpretation for Match-SRNN.
...
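For reference, the longest-common-subsequence dynamic program that the Match-SRNN summary above refers to looks like this (a standard textbook implementation, not the paper's code):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic LCS dynamic program: a recursion over a 2-D grid, which is
    the exact-matching structure Match-SRNN is said to approximate."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # carry the best so far
    return dp[m][n]
```

Each cell depends on its left, top, and top-left neighbors, the same dependency pattern a spatial RNN traverses.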