Publications
Neural Architectures for Named Entity Recognition
Paper presented at the 2016 Conference of the North American Chapter of the Association for Computational Linguistics, held in San Diego (CA, USA) on June 12 to 17, 2016.
Deep Complex Networks
TLDR
This work relies on complex convolutions, presents algorithms for complex batch normalization and complex weight initialization strategies for complex-valued neural nets, uses them in experiments with end-to-end training schemes, and demonstrates that such complex-valued models are competitive with their real-valued counterparts.
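For intuition, the core building block can be sketched as two real-valued convolutions combined according to the rule for complex multiplication. The PyTorch module below is a minimal illustration under assumed shapes and layer names; it is not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution: (W_re + iW_im) * (x_re + ix_im)
    = (W_re*x_re - W_im*x_im) + i(W_re*x_im + W_im*x_re)."""
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        # Separate real-valued kernels for the real and imaginary parts.
        self.conv_re = nn.Conv2d(in_channels, out_channels, kernel_size, padding="same")
        self.conv_im = nn.Conv2d(in_channels, out_channels, kernel_size, padding="same")

    def forward(self, x_re, x_im):
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_re(x_im) + self.conv_im(x_re)
        return out_re, out_im

# Usage: real and imaginary feature maps kept as separate tensors.
x_re = torch.randn(1, 3, 32, 32)
x_im = torch.randn(1, 3, 32, 32)
layer = ComplexConv2d(3, 8, kernel_size=3)
y_re, y_im = layer(x_re, x_im)
```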
Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning
TLDR
This work presents a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model, and demonstrates that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods.
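As a rough sketch of the shared-encoder idea, the snippet below pairs one recurrent sentence encoder with per-task output heads. The task names, sizes, and training details are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, tasks=None):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # One classification head per task; the encoder weights are shared.
        self.heads = nn.ModuleDict({name: nn.Linear(hid_dim, n_classes)
                                    for name, n_classes in (tasks or {}).items()})

    def forward(self, token_ids, task):
        _, h = self.encoder(self.embed(token_ids))  # h: (1, batch, hid_dim)
        return self.heads[task](h.squeeze(0))       # task-specific logits

model = SharedEncoder(vocab_size=10000, tasks={"nli": 3, "sentiment": 2})
batch = torch.randint(0, 10000, (4, 12))            # 4 sentences, 12 tokens
logits = model(batch, task="nli")
```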
Multiple-Attribute Text Rewriting
TLDR
This paper proposes a new model that controls several factors of variation in textual data, in which the condition on disentanglement is replaced with a simpler mechanism based on back-translation, and demonstrates that the fully entangled model produces better generations.
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
TLDR
A simple extractive step is performed before generating a summary, and its output is then used to condition the transformer language model on relevant information before it is tasked with generating the summary.
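The extract-then-abstract pipeline can be illustrated with a toy heuristic: score the document's sentences, keep the top few, and prepend them to the language model's input. The term-frequency scorer below is a stand-in assumption, not the paper's trained extractor.

```python
from collections import Counter

def extract_top_sentences(document, k=2):
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    # Crude salience score: sum of corpus-wide word frequencies per sentence.
    freqs = Counter(w.lower() for s in sentences for w in s.split())
    def score(sentence):
        return sum(freqs[w.lower()] for w in sentence.split())
    return sorted(sentences, key=score, reverse=True)[:k]

doc = ("Transformers process text in parallel. Attention weighs every pair "
       "of tokens. Parallel attention makes transformers fast to train.")
extracted = extract_top_sentences(doc)
# Condition the abstractive language model on the extracted sentences.
lm_input = " ".join(extracted) + " TL;DR: "
```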
Machine Comprehension by Text-to-Text Neural Question Generation
TLDR
A recurrent neural model is proposed that generates natural-language questions from documents, conditioned on answers, and the model is fine-tuned using policy gradient techniques to maximize several rewards that measure question quality.
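The policy-gradient fine-tuning objective amounts to scaling the negative log-likelihood of each sampled question by its (baseline-subtracted) reward. A minimal REINFORCE sketch with placeholder rewards and log-probabilities, not the paper's training code:

```python
import torch

def policy_gradient_loss(log_probs, rewards, baseline=0.0):
    """log_probs: (batch,) summed log-probabilities of sampled questions.
    rewards:   (batch,) scalar quality rewards for those questions."""
    advantage = rewards - baseline  # subtracting a baseline reduces variance
    return -(advantage.detach() * log_probs).mean()

log_probs = torch.tensor([-4.2, -3.1], requires_grad=True)
rewards = torch.tensor([0.8, 0.3])
loss = policy_gradient_loss(log_probs, rewards, baseline=rewards.mean())
loss.backward()  # gradients push up the likelihood of high-reward questions
```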
Multiple-Attribute Text Style Transfer
TLDR
It is shown that this condition is not necessary and is not always met in practice, even with domain-adversarial training that explicitly aims at learning disentangled representations, and a new model is proposed in which the condition on disentanglement is replaced with a simpler mechanism based on back-translation.
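A minimal sketch of the back-translation step, under assumed interfaces: `model(tokens, attribute)` returns token logits and a `decode` helper samples a rewrite. The model rewrites a sentence toward a target attribute, then is trained to reconstruct the original from the rewrite. All names are hypothetical placeholders, not the paper's code.

```python
import torch
import torch.nn.functional as F

def back_translation_step(model, decode, x, src_attr, tgt_attr):
    # 1) Rewrite x toward the target attribute; no gradient flows
    #    through the discrete sampling step.
    with torch.no_grad():
        y = decode(model, x, tgt_attr)
    # 2) Train the model to map (rewrite, original attribute) back to x,
    #    which teaches attribute control without disentangled codes.
    logits = model(y, src_attr)                 # (batch, seq, vocab)
    return F.cross_entropy(logits.transpose(1, 2), x)
```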
A Deep Reinforcement Learning Chatbot
TLDR
MILA's MILABOT is capable of conversing with humans on popular small-talk topics through both speech and text; it consists of an ensemble of natural language generation and retrieval models, including template-based models, bag-of-words models, sequence-to-sequence neural networks, and latent variable neural network models.
Adversarial Generation of Natural Language
TLDR
A simple baseline is introduced that addresses the discrete output space problem without relying on gradient estimators, and it is able to achieve state-of-the-art results on a Chinese poem generation dataset.
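One way to picture the discrete-output workaround: let the generator emit softmax distributions over the vocabulary and feed those soft "sentences" straight to the discriminator, keeping the whole pipeline differentiable. The toy architectures below are placeholder assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

vocab, seq_len, hid = 50, 8, 64
generator = nn.Sequential(
    nn.Linear(16, hid), nn.ReLU(), nn.Linear(hid, seq_len * vocab))
discriminator = nn.Sequential(
    nn.Flatten(), nn.Linear(seq_len * vocab, hid), nn.ReLU(), nn.Linear(hid, 1))

z = torch.randn(4, 16)                       # latent noise
logits = generator(z).view(4, seq_len, vocab)
soft_tokens = torch.softmax(logits, dim=-1)  # differentiable "sentences"
d_score = discriminator(soft_tokens)         # gradients flow back to the generator
```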
Neural Models for Key Phrase Extraction and Question Generation
We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when …