Summarizing Source Code using a Neural Attention Model
TLDR
This paper presents the first completely data-driven approach for generating high-level summaries of source code, which uses Long Short-Term Memory (LSTM) networks with attention to produce sentences that describe C# code snippets and SQL queries.
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
TLDR
This work presents a novel training procedure that overcomes both the relatively limited amount of labeled data and the non-sequential nature of AMR graphs, and presents strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
On social networks and collaborative recommendation
TLDR
This work created a collaborative recommendation system that effectively adapts to the personal information needs of each user, and adopts the generic framework of Random Walk with Restarts to provide a more natural and efficient way to represent social networks.
Learning a Neural Semantic Parser from User Feedback
We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal
Mapping Language to Code in Programmatic Context
TLDR
This work introduces CONCODE, a new large dataset with over 100,000 examples consisting of Java classes from online code repositories, and develops a new encoder-decoder architecture that models the interaction between the method documentation and the class environment.
The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the ROC Story Cloze Task
TLDR
It is shown how variants of the same writing task can lead to measurable differences in writing style, and a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases.
SEQˆ3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression
TLDR
The proposed sequence-to-sequence-to-sequence autoencoder (SEQˆ3), consisting of two chained encoder-decoder pairs, with words used as a sequence of discrete latent variables, achieves promising results in unsupervised sentence compression on benchmark datasets.
Automatically Detecting and Attributing Indirect Quotations
TLDR
This work presents the first large-scale experiments in indirect and mixed quotation extraction and attribution, and shows that direct quotation attribution methods can be successfully applied to indirect and mixed quotation attribution.
Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity
TLDR
A measure of coherence is introduced as the GloVe embedding similarity between the dialogue context and the generated response to improve coherence and diversity in encoder-decoder models for open-domain conversational agents.