Corpus ID: 32481355

A Deep Semantic Natural Language Processing Platform

@inproceedings{Gardner2017ADS,
  title={A Deep Semantic Natural Language Processing Platform},
  author={Matt Gardner and Joel Grus and Mark Neumann and Oyvind Tafjord and Pradeep Dasigi and Nelson F. Liu and Matthew E. Peters and Michael Schmitz and Luke Zettlemoyer},
  year={2017}
}
This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily. It is built on top of PyTorch, allowing for dynamic computation graphs, and provides (1) a flexible data API that handles intelligent batching and padding, (2) high-level abstractions for common operations in working with text, and (3) a modular and extensible…
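
A minimal sketch of the data API's batching and padding behavior described above, assuming a post-1.0 AllenNLP release (class names differed in the original 2017 API); the sentences and field names are illustrative:

    # Build variable-length instances; the data API pads them to a common shape.
    from allennlp.data import Batch, Instance, Vocabulary
    from allennlp.data.fields import TextField
    from allennlp.data.token_indexers import SingleIdTokenIndexer
    from allennlp.data.tokenizers import WhitespaceTokenizer

    tokenizer = WhitespaceTokenizer()
    indexers = {"tokens": SingleIdTokenIndexer()}
    instances = [
        Instance({"text": TextField(tokenizer.tokenize(s), indexers)})
        for s in ["a short sentence", "a somewhat longer example sentence here"]
    ]

    vocab = Vocabulary.from_instances(instances)
    batch = Batch(instances)
    batch.index_instances(vocab)

    # Padding lengths are computed per field and applied automatically.
    tensors = batch.as_tensor_dict(batch.get_padding_lengths())
    print(tensors["text"]["tokens"]["tokens"].shape)  # (2, max_len)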

Ludwig: a type-based declarative deep learning toolbox

TLDR
Ludwig is a flexible, extensible, and easy-to-use toolbox that allows users to train deep learning models and use them to obtain predictions without writing code; it introduces a general modularized deep learning architecture called Encoder-Combiner-Decoder that can be instantiated to perform a wide range of machine learning tasks.
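
A hedged sketch of the declarative workflow: the column names, the parallel_cnn encoder, and the train.csv path are illustrative, and the exact config schema varies across Ludwig versions:

    from ludwig.api import LudwigModel

    # Declare features by name and type; Ludwig assembles the
    # Encoder-Combiner-Decoder model from this config.
    config = {
        "input_features": [
            {"name": "text", "type": "text", "encoder": "parallel_cnn"}
        ],
        "output_features": [
            {"name": "class", "type": "category"}
        ],
    }

    model = LudwigModel(config)
    # Trains from a CSV with "text" and "class" columns; the user
    # writes no model code.
    results = model.train(dataset="train.csv")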

PyText: A Seamless Path from NLP research to production

We introduce PyText, a deep-learning-based NLP modeling framework built on PyTorch. PyText addresses the often-conflicting requirements of enabling rapid experimentation and of serving models at scale.

Exploring Pretrained Models for Joint Morpho-Syntactic Parsing of Russian

TLDR
A joint morpho-syntactic parser for Russian is built that is significantly faster than, and as accurate as, a traditional pipeline of models, and it is shown that character-level word embeddings can significantly improve parsing quality.
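
A minimal PyTorch sketch, not the paper's exact model, of deriving a word representation from its characters with a convolution and max-over-time pooling; all sizes are illustrative:

    import torch
    import torch.nn as nn

    class CharCNNEmbedder(nn.Module):
        def __init__(self, n_chars=100, char_dim=32, word_dim=64):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
            self.conv = nn.Conv1d(char_dim, word_dim, kernel_size=3, padding=1)

        def forward(self, char_ids):  # (batch, n_words, n_chars)
            b, w, c = char_ids.shape
            x = self.char_emb(char_ids.view(b * w, c))   # (b*w, c, char_dim)
            x = self.conv(x.transpose(1, 2))             # (b*w, word_dim, c)
            x = torch.relu(x).max(dim=-1).values         # max-over-time pooling
            return x.view(b, w, -1)                      # (b, n_words, word_dim)

    emb = CharCNNEmbedder()
    words = emb(torch.randint(1, 100, (2, 5, 12)))  # 2 sentences, 5 words each
    print(words.shape)  # torch.Size([2, 5, 64])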

Compositional Semantic Parsing across Graphbanks

TLDR
A compositional neural semantic parser which achieves competitive accuracies across a diverse range of graphbanks for the first time, and incorporating BERT embeddings and multi-task learning improves the accuracy further.

Neighbor Contextual Information Learners for Joint Intent and Slot Prediction

TLDR
This work explores CNN-based models like Trellis, modifying the architecture to make it bi-directional with fusion techniques, and proposes a CNN with a self-attention network, the Neighbor Contextual Information Projector using Multi-Head Attention (NCIPMA) architecture.
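
An illustrative sketch, not the paper's architecture, of the general pattern it builds on: convolutional features refined by multi-head self-attention (dimensions are assumptions):

    import torch
    import torch.nn as nn

    conv = nn.Conv1d(128, 128, kernel_size=3, padding=1)
    attn = nn.MultiheadAttention(embed_dim=128, num_heads=8, batch_first=True)

    x = torch.randn(4, 25, 128)                   # (batch, seq, dim) utterances
    h = conv(x.transpose(1, 2)).transpose(1, 2)   # local n-gram features
    ctx, _ = attn(h, h, h)                        # neighbor context via self-attention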

A Context-Aware Neural Embedding for Function-Level Vulnerability Detection

TLDR
A supervised framework leveraging pre-trained context-aware embeddings from language models (ELMo) to capture deep contextual representations is proposed, further summarized by a bidirectional long short-term memory (Bi-LSTM) layer for learning long-range code dependency.
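
A hedged sketch of the described pipeline: pretrained ELMo embeddings summarized by a Bi-LSTM. The options/weights paths are placeholders, and the tokenized code snippet is illustrative:

    import torch
    from allennlp.modules.elmo import Elmo, batch_to_ids

    elmo = Elmo("elmo_options.json",    # placeholder path
                "elmo_weights.hdf5",    # placeholder path
                num_output_representations=1)

    tokens = [["strcpy", "(", "buf", ",", "input", ")", ";"]]
    char_ids = batch_to_ids(tokens)
    embeddings = elmo(char_ids)["elmo_representations"][0]  # (1, seq, 1024)

    # The Bi-LSTM layer summarizes long-range dependencies in the sequence.
    bilstm = torch.nn.LSTM(input_size=1024, hidden_size=256,
                           batch_first=True, bidirectional=True)
    outputs, _ = bilstm(embeddings)  # (1, seq, 512), fed to a classifier head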

MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics

TLDR
A Learned Evaluation metric for Reading Comprehension, LERC, is trained to mimic human judgement scores, which achieves 80% accuracy and outperforms baselines by 14 to 26 absolute percentage points while leaving significant room for improvement.

Strong and Light Baseline Models for Fact-Checking Joint Inference

TLDR
This paper shows that, for inference based on Transformer models, two effective approaches either apply max pooling over the Transformer evidence vectors or compute a weighted sum of those vectors.
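
A plain-PyTorch sketch of the two pooling strategies named above, applied to a stack of per-evidence Transformer vectors (shapes and the learned scorer are assumptions):

    import torch
    import torch.nn.functional as F

    evidence = torch.randn(8, 768)    # 8 evidence vectors from a Transformer
    scorer = torch.nn.Linear(768, 1)  # learned attention scorer (illustrative)

    # (1) Max pooling over the evidence vectors, dimension-wise.
    max_pooled = evidence.max(dim=0).values                       # (768,)

    # (2) Weighted sum with softmax-normalized learned weights.
    weights = F.softmax(scorer(evidence).squeeze(-1), dim=0)      # (8,)
    weighted_sum = (weights.unsqueeze(-1) * evidence).sum(dim=0)  # (768,)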

Knowledge Generation - Variational Bayes on Knowledge Graphs

TLDR
This thesis is a proof of concept for the potential of Variational Auto-Encoders (VAEs) for representation learning on real-world Knowledge Graphs (KGs), and presents a new validation method for triples generated from the FB15K-237 dataset.

Label Definitions Improve Semantic Role Labeling

TLDR
A model injected with label definitions, given the predicate senses, achieves state-of-the-art performance on the CoNLL09 dataset; the gains are even more pronounced in low-resource settings where training data is scarce.

References

Showing 1-10 of 25 references

A large annotated corpus for learning natural language inference

TLDR
The Stanford Natural Language Inference corpus is introduced, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.

End-to-end learning of semantic role labeling using recurrent neural networks

TLDR
This work proposes a deep bi-directional recurrent network as an end-to-end system for SRL, which takes only the original text as input, without using any syntactic knowledge.
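
A minimal sketch in the spirit of that system (the paper stacks more recurrent layers, with alternating directions): text-only input, a bidirectional LSTM, and a per-token tag projection, with illustrative sizes:

    import torch
    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        def __init__(self, vocab_size=10000, emb_dim=100, hidden=300, n_tags=60):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                                batch_first=True, bidirectional=True)
            self.proj = nn.Linear(2 * hidden, n_tags)

        def forward(self, token_ids):              # (batch, seq)
            out, _ = self.lstm(self.emb(token_ids))
            return self.proj(out)                  # (batch, seq, n_tags) logits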

Natural Language Processing with Python

This book offers a highly accessible introduction to natural language processing, the field that supports a variety of language technologies, from predictive text and email filtering to automatic summarization and translation.
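
A small example of the kind of analysis the book teaches, using NLTK itself, the toolkit the book documents (resource names can vary across NLTK versions):

    import nltk

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    tokens = nltk.word_tokenize("AllenNLP builds on earlier NLP toolkits.")
    print(nltk.pos_tag(tokens))
    # [('AllenNLP', 'NNP'), ('builds', 'VBZ'), ...]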

ParlAI: A Dialog Research Software Platform

TLDR
ParlAI (pronounced "par-lay"), an open-source software platform for dialog research implemented in Python, is introduced; it provides a unified framework for sharing, training, and testing dialog models, and integrates Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning.

The Stanford CoreNLP Natural Language Processing Toolkit

TLDR
The design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis, is described; it is suggested that its adoption follows from a simple, approachable design, straightforward interfaces, the inclusion of robust, high-quality analysis components, and the absence of heavyweight associated baggage.

Semi-supervised sequence tagging with bidirectional language models

TLDR
A general semi-supervised approach for adding pretrained context embeddings from bidirectional language models to NLP systems is proposed and applied to sequence labeling tasks, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task-specific gazetteers.

Bidirectional Attention Flow for Machine Comprehension

TLDR
The BiDAF network is introduced, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
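
A condensed sketch of the attention-flow step: a similarity matrix over context/query pairs drives context-to-query and query-to-context attention, following the paper's formulation (variable names and sizes are ours):

    import torch
    import torch.nn.functional as F

    def bidaf_attention(c, q, w):
        # c: (T, d) context, q: (J, d) query, w: (3d,) learned weights
        T, J, d = c.size(0), q.size(0), c.size(1)
        ce = c.unsqueeze(1).expand(T, J, d)
        qe = q.unsqueeze(0).expand(T, J, d)
        S = torch.cat([ce, qe, ce * qe], dim=-1) @ w       # (T, J) similarity

        c2q = F.softmax(S, dim=1) @ q                      # query-aware context
        b = F.softmax(S.max(dim=1).values, dim=0)          # (T,) q2c weights
        q2c = (b.unsqueeze(-1) * c).sum(0).expand(T, d)    # tiled over T

        # No early summarization: per-timestep features are kept.
        return torch.cat([c, c2q, c * c2q, c * q2c], dim=-1)  # (T, 4d)

    out = bidaf_attention(torch.randn(20, 8), torch.randn(5, 8), torch.randn(24))
    print(out.shape)  # torch.Size([20, 32])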

End-to-end Neural Coreference Resolution

TLDR
This work introduces the first end-to-end coreference resolution model, trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions.

Long Short-Term Memory-Networks for Machine Reading

TLDR
A machine reading simulator is presented that processes text incrementally from left to right and performs shallow reasoning with memory and attention; it extends the Long Short-Term Memory architecture with a memory network in place of a single memory cell, offering a way to weakly induce relations among tokens.

SQuAD: 100,000+ Questions for Machine Comprehension of Text

TLDR
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).