NLProveNAns: Natural Language Provenance for Non-Answers

@article{Deutch2018NLProveNAnsNL,
  title={NLProveNAns: Natural Language Provenance for Non-Answers},
  author={Daniel Deutch and Nave Frost and Amir Gilad and Tomer Haimovich},
  journal={Proc. VLDB Endow.},
  year={2018},
  volume={11},
  pages={1986--1989}
}
Natural language (NL) interfaces to databases allow users without a technical background to query the database and retrieve results. Users of such systems may be surprised by the absence of certain expected results. To address this, we propose to demonstrate NLProveNAns, a system that allows non-expert users to view explanations for non-answers of interest. The explanations are shown in an intuitive manner, by highlighting the parts of the original NL query that are intuitively “responsible” for the…
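
To make the highlighting idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): each query predicate compiled from the NL question remembers the question span it came from, and the spans whose predicates reject an expected tuple are the ones to highlight. All names and data below are invented for illustration.

```python
# Hypothetical sketch of query-based explanations for non-answers of an
# NL query; illustrative only, not NLProveNAns's actual pipeline.

# Each predicate keeps the span of the NL question it was compiled from.
PREDICATES = [
    {"check": lambda org: org["country"] == "France", "nl_span": "French"},
    {"check": lambda org: org["employees"] > 1000,    "nl_span": "large"},
]

def explain_non_answer(expected, predicates):
    """Return the NL spans whose predicates reject the expected tuple."""
    return [p["nl_span"] for p in predicates if not p["check"](expected)]

question = "Return all large French organizations."
missing = {"name": "Acme", "country": "France", "employees": 40}

for span in explain_non_answer(missing, PREDICATES):
    print(f"'{missing['name']}' is filtered out by '{span}' in: {question}")
```

Running the sketch reports that the hypothetical tuple is absent because of the condition compiled from "large", which is exactly the span a user would see highlighted.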

Citations

Explaining Natural Language query results
TLDR
This work develops a novel method for transforming provenance information to NL by leveraging the original NL query structure, and presents two solutions for its effective presentation as NL text: one based on provenance factorization, with novel desiderata relevant to the NL case, and one based on summarization (a toy factorization sketch appears after this citations list).
Explaining Missing Query Results in Natural Language
TLDR
This paper proposes a novel approach to “marry” NLIDBs with an existing model for explaining missing query results by pinpointing the last query operator that is “responsible” for the missing result.
To Not Miss the Forest for the Trees - A Holistic Approach for Explaining Missing Answers over Nested Data
TLDR
This work presents a novel approach to produce query-based explanations for missing answers that is the first to support nested data and to consider operators that modify the schema and structure of the data as potential causes of missing answers.
Debugging Missing Answers for Spark Queries over Nested Data with Breadcrumb
TLDR
Breadcrumb is a system that aids developers in debugging queries through query-based explanations for missing answers, and is the first that scales to big data dimensions and is capable of finding explanations for common errors in queries over nested and de-normalized data.
Explaining Results of Data-Driven Applications
  • Nave Frost
  • Computer Science
    2019 IEEE 35th International Conference on Data Engineering (ICDE)
  • 2019
TLDR
This paper demonstrates approaches for interpretability in two applications: Natural Language Queries, and Machine Learning Classifiers, followed by a discussion of open problems and future work.
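
As referenced in the first citation above, provenance factorization can shorten the NL rendering of an answer's provenance. A minimal sketch of the algebraic step, using sympy and invented fact names (not the paper's actual pipeline):

```python
# Hypothetical sketch: factorizing provenance so its NL rendering is shorter.
from sympy import symbols, factor

# Provenance of one answer; each symbol stands for a database fact,
# '+' for alternative derivations and '*' for joint use of facts.
paris, louvre, orsay = symbols("paris louvre orsay")
provenance = paris * louvre + paris * orsay   # two derivations share 'paris'

factored = factor(provenance)                 # -> paris*(louvre + orsay)
print(factored)

# The factored form mentions 'paris' only once, so the generated sentence
# can name Paris a single time, e.g. "... Paris, home to the Louvre or the
# Orsay museum", instead of repeating it per derivation.
```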

References

NLProv: Natural Language Provenance
TLDR
This work develops a novel method for transforming provenance information to NL, by leveraging the original NL question structure, and presents two solutions for its effective presentation as NL text: one based on provenance factorization with novel desiderata relevant to the NL case, and one that is based on summarization.
Selective Provenance for Datalog Programs Using Top-K Queries
TLDR
This work proposes a novel top-k query language for querying datalog provenance, supporting selection criteria based on tree patterns and ranking based on the rules and database facts used in derivations, along with an efficient algorithm that instruments the datalog program so that it generates only relevant provenance.
Constructing an Interactive Natural Language Interface for Relational Databases
TLDR
This work describes the architecture of an interactive natural language query interface for relational databases that correctly interprets complex natural language queries in a generic manner across a range of domains and is good enough to be usable in practice.
Efficient Computation of Polynomial Explanations of Why-Not Questions
TLDR
This paper focuses on processing Why-Not questions in a query-based approach that identifies the culprit query components, and presents an algorithm to efficiently compute the polynomial for a given Why-Not question (a toy polynomial sketch appears after this reference list).
EFQ: Why-Not Answer Polynomials in Action
TLDR
The EFQ platform demonstrated here has been designed in this context to efficiently leverage Why-Not answer polynomials, a novel approach that provides the user with complete explanations to Why-Not questions and allows for automatic, relevant query refinements.
A. Chapman and H. V. Jagadish. Why Not? In SIGMOD, pages 523–534, 2009.
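
Following up on the forward reference above: a minimal, hypothetical sketch of a Why-Not answer polynomial, where each query condition carries a variable and the explanation for a missing answer sums, over the source tuples compatible with it, the product of the conditions each tuple fails. All names and data are invented for illustration.

```python
# Hypothetical sketch of Why-Not answer polynomials; illustrative only.

# Query conditions, each tagged with a provenance variable.
CONDITIONS = [
    ("c1", lambda t: t["price"] < 100),   # c1: price filter
    ("c2", lambda t: t["rating"] >= 4),   # c2: rating filter
]

def why_not_polynomial(compatible_tuples, conditions):
    """Sum, over compatible tuples, the product of the conditions they fail."""
    monomials = []
    for t in compatible_tuples:
        failed = [var for var, check in conditions if not check(t)]
        if failed:
            monomials.append("*".join(failed))
    return " + ".join(monomials) if monomials else "0"

# Source tuples that could have produced the missing answer but were pruned.
candidates = [
    {"id": 1, "price": 150, "rating": 5},   # rejected by c1 only
    {"id": 2, "price": 150, "rating": 2},   # rejected by c1 and c2
]
print(why_not_polynomial(candidates, CONDITIONS))   # -> c1 + c1*c2
```

Each monomial names the conditions a user would have to relax for that candidate tuple to surface, which is what makes such polynomials a natural basis for the automatic query refinements mentioned above.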