Learning Joint Semantic Parsers from Disjoint Data

@article{Peng2018LearningJS,
  title={Learning Joint Semantic Parsers from Disjoint Data},
  author={Hao Peng and Sam Thomson and Swabha Swayamdipta and Noah A. Smith},
  journal={ArXiv},
  year={2018},
  volume={abs/1804.05990}
}
We present a new approach to learning a semantic parser from multiple datasets, even when the target semantic formalisms are drastically different and the underlying corpora do not overlap. We handle such “disjoint” data by treating annotations for unobserved formalisms as latent structured variables. Building on state-of-the-art baselines, we show improvements both in frame-semantic parsing and semantic dependency parsing by modeling them jointly. 
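The core idea in the abstract — treating the annotation for the unobserved formalism as a latent structured variable — amounts to maximizing a marginal likelihood: for an example annotated only in one formalism, the model sums over all structures in the other. A minimal toy sketch of that marginalization, with hypothetical enumerable structure spaces and a made-up `weights` scoring table standing in for the paper's neural parser:

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Hypothetical tiny structure spaces; the real model scores full
# frame-semantic and semantic-dependency graphs over a sentence.
FRAMES = ["F1", "F2"]
DEPS = ["D1", "D2"]

def joint_score(frame, dep, weights):
    # weights maps (frame, dep) pairs to scores; unseen pairs score 0.
    return weights.get((frame, dep), 0.0)

def log_marginal_likelihood(observed_frame, weights):
    # Only the frame annotation is observed for this example, so the
    # dependency structure is latent: sum it out in the numerator,
    # and sum over both structures in the partition function.
    num = logsumexp([joint_score(observed_frame, d, weights)
                     for d in DEPS])
    den = logsumexp([joint_score(f, d, weights)
                     for f in FRAMES for d in DEPS])
    return num - den
```

Training on disjoint data then alternates: examples annotated with frames use this objective, while dependency-annotated examples marginalize over frames instead. (In practice the sums require structured inference rather than enumeration.)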

Citations

Compositional Semantic Parsing across Graphbanks

A compositional neural semantic parser that, for the first time, achieves competitive accuracies across a diverse range of graphbanks; incorporating BERT embeddings and multi-task learning improves accuracy further.

Multitask Learning for Cross-Lingual Transfer of Broad-coverage Semantic Dependencies

A method for developing broad-coverage semantic dependency parsers for languages with no semantically annotated resources, using syntactic parsing as the auxiliary task in a multitask setup.

Syntactic Scaffolds for Semantic Structures

This work introduces the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks through a multitask objective, and improves over strong baselines on PropBank semantics, frame semantics, and coreference resolution.

Multitask Learning for Cross-Lingual Transfer of Semantic Dependencies

A method for developing broad-coverage semantic dependency parsers for languages for which no semantically annotated resource is available is described and it is observed that syntactic and semantic dependency direction match is an important factor in improving the results.

Open-Domain Frame Semantic Parsing Using Transformers

It is shown that a purely generative encoder-decoder architecture handily beats the previous state of the art in FrameNet 1.7 parsing, and that a mixed decoding multi-task approach achieves even better performance.

Multitask Parsing Across Semantic Representations

It is shown that multitask learning significantly improves UCCA parsing in both in-domain and out-of-domain settings, using a uniform transition-based system and learning architecture for all parsing tasks.

Broad-Coverage Semantic Parsing as Transduction

An attention-based neural transducer that incrementally builds meaning representation via a sequence of semantic relations is proposed that improves the state of the art on both AMR and UCCA, and is competitive with the state-of-the-art on SDP.

Encoding Syntactic Constituency Paths for Frame-Semantic Parsing with Graph Convolutional Networks

This work uses a Graph Convolutional Network to learn specific representations of constituents, such that each constituent is profiled as the production grammar rule it corresponds to, in frame-semantic parsing sub-tasks.

Multi-Task Semantic Dependency Parsing with Policy Gradient for Learning Easy-First Strategies

This work proposes a new iterative predicate selection (IPS) algorithm that combines the graph-based and transition-based parsing approaches in order to handle multiple semantic head words in Semantic Dependency Parsing.
...

References

SHOWING 1-10 OF 57 REFERENCES

Frame-Semantic Role Labeling with Heterogeneous Annotations

This work augments an existing model with features derived from FrameNet and PropBank and with partially annotated exemplars from FrameNet, with the aim of identifying and labeling the semantic arguments of a predicate that evokes a FrameNet frame.

Using Semantic Roles to Improve Question Answering

This work introduces a general framework for answer extraction which exploits semantic role annotations in the FrameNet paradigm and views semantic role assignment as an optimization problem in a bipartite graph and answer extraction as an instance of graph matching.

Semantic Role Labeling with Neural Network Factors

A new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate, which is based on a neural network designed for the SRL task.

Deep Multitask Learning for Semantic Dependency Parsing

We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms. By using efficient, nearly arc-factored inference and a bidirectional-LSTM composed with a multi-layer perceptron, our base system is able to significantly improve the state of the art.

Probabilistic Frame-Semantic Parsing

An implemented parser that transforms an English sentence into a frame-semantic representation and uses two feature-based, discriminative probabilistic models to permit disambiguation of new predicate words is described.

The CoNLL 2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies

This shared task not only unifies the shared tasks of the previous four years under a unique dependency-based formalism, but also extends them significantly: this year's syntactic dependencies include more information such as named-entity boundaries; the semantic dependencies model roles of both verbal and nominal predicates.

A Joint Sequential and Relational Model for Frame-Semantic Parsing

This work introduces a new method for frame-semantic parsing that significantly outperforms existing neural and non-neural approaches, achieving a 5.7 F1 gain over the current state of the art, for full frame structure extraction.

Broad-Coverage Semantic Dependency Parsing

This task description positions the problem in comparison to other sub-tasks in computational language analysis, introduces the semantic dependency target representations used, reflects on high-level commonalities and differences between these representations, and summarizes the task setup, participating systems, and main results.

LTH: Semantic Structure Extraction using Nonprojective Dependency Trees

A fully automatic method to add words to the FrameNet lexical database, which gives an improvement in the recall of frame detection.

Priberam: A Turbo Semantic Parser with Second Order Features

A feature-rich linear model, including scores for first- and second-order dependencies (arcs, siblings, grandparents, and co-parents), is employed for the SemEval-2014 shared task on Broad-Coverage Semantic Dependency Parsing.
...