CQR-SQL: Conversational Question Reformulation Enhanced Context-Dependent Text-to-SQL Parsers

@article{Xiao2022CQRSQLCQ,
  title={CQR-SQL: Conversational Question Reformulation Enhanced Context-Dependent Text-to-SQL Parsers},
  author={Dongling Xiao and Linzheng Chai and Qian-Wen Zhang and Zhao Yan and Zhoujun Li and Yunbo Cao},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.07686}
}
Context-dependent text-to-SQL is the task of translating multi-turn questions into database-related SQL queries. Existing methods typically focus on making full use of the history context or the previously predicted SQL for current SQL parsing, while neglecting to explicitly comprehend the schema and conversational dependency, such as co-reference, ellipsis, and user focus change. In this paper, we propose CQR-SQL, which uses auxiliary Conversational Question Reformulation (CQR) learning to explicitly…
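As a hedged illustration (the dialogue, schema, and SQL below are invented for this sketch, not taken from the paper), a multi-turn exchange shows the co-reference and ellipsis that CQR learning is meant to resolve:

```python
# Hypothetical multi-turn text-to-SQL exchange (invented example, not from the paper).
# Each turn pairs the user's raw question with a self-contained reformulation
# that resolves ellipsis ("Only the ones from 2020.") and co-reference ("there").
turns = [
    {
        "question": "Show all papers by Jane Doe.",
        "reformulated": "Show all papers by Jane Doe.",
        "sql": "SELECT title FROM papers WHERE author = 'Jane Doe'",
    },
    {
        # Ellipsis: the constraint "papers by Jane Doe" is omitted.
        "question": "Only the ones from 2020.",
        "reformulated": "Show all papers by Jane Doe from 2020.",
        "sql": "SELECT title FROM papers WHERE author = 'Jane Doe' AND year = 2020",
    },
    {
        # Co-reference: "there" refers to the previous turn's result set.
        "question": "How many are there?",
        "reformulated": "How many papers by Jane Doe are from 2020?",
        "sql": "SELECT COUNT(*) FROM papers WHERE author = 'Jane Doe' AND year = 2020",
    },
]

# A single-turn parser can consume each reformulated question directly, while a
# context-dependent parser must recover the same intent from the raw history.
for turn in turns:
    print(turn["reformulated"], "->", turn["sql"])
```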


MIGA: A Unified Multi-task Generation Framework for Conversational Text-to-SQL

A two-stage unified MultI-task Generation frAmework (MIGA) that leverages PLMs' ability to tackle conversational text-to-SQL and proposes four SQL perturbations to alleviate the error propagation problem.

Controllable Data Augmentation for Context-Dependent Text-to-SQL

This paper introduces ConDA, which generates interactive questions and corresponding SQL results, and designs the SQL dialogue state to enhance the data diversity through the state transition, and presents a filter method to ensure the data quality by a grounding model.

A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability

The results demonstrate that ChatGPT has strong text-to-SQL abilities, though there is still a gap from current state-of-the-art (SOTA) model performance; given that the experiment was conducted in a zero-shot scenario, ChatGPT's performance is nonetheless impressive.

Relevant Objectives of Developing SQL Adaptive Learning Technology

The goal of the research is to provide a comprehensive analysis of existing adaptive SQL e-learning systems and formulate concrete objectives that could guide further research.

Q-TOD: A Query-driven Task-oriented Dialogue System

This paper introduces a novel query-driven task-oriented dialogue system, namely Q-TOD, which outperforms strong baselines and establishes a new state-of-the-art performance on these datasets.

Decoupled Dialogue Modeling and Semantic Parsing for Multi-Turn Text-to-SQL

This paper proposes a novel decoupled multi-turn text-to-SQL framework, where an utterance rewrite model first explicitly completes the dialogue context, and then a single-turn text-to-SQL parser follows to address the data sparsity problem.

RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers

This work presents a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder and achieves the new state-of-the-art performance on the Spider leaderboard.

Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering

A novel framework, ExCorD (Explicit guidance on how to resolve Conversational Dependency) is proposed to enhance the abilities of QA models in comprehending conversational context, while addressing the limitations of the existing approaches.

COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining

A self-supervised learning framework, COCO-LM, that pretrains language models by COrrecting and COntrasting corrupted text sequences, which not only outperforms recent state-of-the-art pretrained models in accuracy but also improves pretraining efficiency.

RASAT: Integrating Relational Structures into Pretrained Seq2Seq Model for Text-to-SQL

This work proposes RASAT: a Transformer seq2seq architecture augmented with relation-aware self-attention that could leverage a variety of relational structures while inheriting the pretrained parameters from the T5 model effectively.

HIE-SQL: History Information Enhanced Network for Context-Dependent Text-to-SQL Semantic Parsing

This work proposes a History Information Enhanced text-to-SQL model (HIE-SQL) to exploit context dependence information from both history utterances and the last predicted SQL query, and proposes a bimodal pre-trained model to bridge the gap between them.

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models

The UnifiedSKG framework is proposed, which unifies 21 SKG tasks into a text-to-text format, aiming to promote systematic SKG research, instead of being exclusive to a single task, domain, or dataset.

PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models

On the challenging Spider and CoSQL text-to-SQL translation tasks, it is shown that PICARD transforms fine-tuned T5 models with passable performance into state-of-the-art solutions.

Dynamic Hybrid Relation Exploration Network for Cross-Domain Context-Dependent Semantic Parsing

This paper presents a dynamic graph framework that is capable of effectively modelling contextual utterances, tokens, database schemas, and their complicated interaction as the conversation proceeds, and employs a dynamic memory decay mechanism that incorporates inductive bias to integrate enriched contextual relation representation.

LGESQL: Line Graph Enhanced Text-to-SQL Model with Mixed Local and Non-Local Relations

This work proposes a Line Graph Enhanced Text-to-SQL (LGESQL) model to mine the underlying relational features without constructing meta-paths, and designs an auxiliary task called graph pruning to improve the discriminative capability of the encoder.