Publications
Gated Graph Sequence Neural Networks
TLDR: We study feature learning techniques for graph-structured inputs and propose a novel graph-based neural network model that outputs sequences.
  • 1,345 citations · 205 highly influential
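The gated propagation that gives these networks their name can be pictured in a few lines. The following is a minimal numpy sketch, not the authors' implementation: messages are aggregated over a toy adjacency matrix and node states are updated with a GRU-style gate; all sizes, weights, and the random graph are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_nodes, hidden = 5, 8                       # illustrative sizes (assumptions)
A = rng.integers(0, 2, size=(n_nodes, n_nodes)).astype(float)  # toy adjacency
h = rng.normal(size=(n_nodes, hidden))       # initial node states

W_msg = rng.normal(scale=0.1, size=(hidden, hidden))
W_z, U_z = rng.normal(scale=0.1, size=(hidden, hidden)), rng.normal(scale=0.1, size=(hidden, hidden))
W_r, U_r = rng.normal(scale=0.1, size=(hidden, hidden)), rng.normal(scale=0.1, size=(hidden, hidden))
W_h, U_h = rng.normal(scale=0.1, size=(hidden, hidden)), rng.normal(scale=0.1, size=(hidden, hidden))

for _ in range(4):                           # a few propagation steps
    a = A @ (h @ W_msg)                      # aggregate messages from neighbours
    z = sigmoid(a @ W_z + h @ U_z)           # update gate
    r = sigmoid(a @ W_r + h @ U_r)           # reset gate
    h_tilde = np.tanh(a @ W_h + (r * h) @ U_h)
    h = (1 - z) * h + z * h_tilde            # gated (GRU-style) state update

graph_state = h.mean(axis=0)                 # simple mean readout here; the paper uses a gated sum over node states
```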
Learning to Represent Programs with Graphs
TLDR: We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures.
  • 276 citations · 61 highly influential
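As a hypothetical illustration of that representation, the toy snippet below encodes the three-statement program `x = 1; y = x + 2; x = y` as a graph over variable occurrences, mixing a simplified syntactic edge with data-flow edges in the spirit of the paper's LastWrite and ComputedFrom edge types; the node naming and the simplified syntactic edge are assumptions, not the paper's exact edge set.

```python
from collections import defaultdict

# Toy statement sequence:   x = 1;   y = x + 2;   x = y
# Nodes are variable occurrences ("var @ statement index"); edges mix syntax with data flow.
occurrences = ["x@1", "y@2", "x@2", "x@3", "y@3"]

edges = [
    # syntactic order of occurrences (simplified stand-in for the paper's token/AST edges)
    ("x@1", "y@2", "NextOccurrence"),
    ("y@2", "x@2", "NextOccurrence"),
    ("x@2", "x@3", "NextOccurrence"),
    ("x@3", "y@3", "NextOccurrence"),
    # semantic (data-flow) relations
    ("x@2", "x@1", "LastWrite"),     # the read of x in stmt 2 sees the write in stmt 1
    ("y@3", "y@2", "LastWrite"),     # the read of y in stmt 3 sees the write in stmt 2
    ("y@2", "x@2", "ComputedFrom"),  # y is computed from x
    ("x@3", "y@3", "ComputedFrom"),  # x is recomputed from y
]

# Group edges by type so a graph neural network can use one weight matrix per edge type.
by_type = defaultdict(list)
for src, dst, etype in edges:
    by_type[etype].append((src, dst))
print(dict(by_type))
```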
DeepCoder: Learning to Write Programs
TLDR: We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning.
  • 285 citations · 45 highly influential
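A hedged sketch of that line of attack: a model predicts, from the input-output examples, which DSL operations are likely to appear in the program, and an enumerative search tries likely operations first. The four-operation DSL, the hard-coded "predicted" probabilities, and the length-two search below are illustrative assumptions, not DeepCoder's actual DSL or model.

```python
from itertools import permutations

# Tiny illustrative DSL of unary list operations (an assumption, not DeepCoder's DSL).
DSL = {
    "reverse": lambda xs: xs[::-1],
    "sort":    lambda xs: sorted(xs),
    "double":  lambda xs: [2 * x for x in xs],
    "tail":    lambda xs: xs[1:],
}

# Stand-in for the neural predictor: P(op appears | input-output examples).
predicted = {"sort": 0.9, "double": 0.7, "reverse": 0.2, "tail": 0.1}

def search(examples):
    """Enumerate short op sequences, trying high-probability ops first."""
    ops = sorted(DSL, key=lambda op: -predicted[op])
    candidates = [(op,) for op in ops] + list(permutations(ops, 2))
    for prog in candidates:
        def run(xs, prog=prog):
            for op in prog:
                xs = DSL[op](xs)
            return xs
        if all(run(inp) == out for inp, out in examples):
            return prog
    return None

examples = [([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]   # sort, then double
print(search(examples))   # -> ('sort', 'double')
```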
Constrained Graph Variational Autoencoders for Molecule Design
TLDR: We propose a novel probabilistic model for graph generation that builds gated graph neural networks into the encoder and decoder of a variational autoencoder.
  • 154 citations · 18 highly influential
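A heavily simplified sketch of the autoencoding setup, not the CGVAE model itself: atom features are encoded by one round of message passing into per-node latent distributions, a latent is sampled with the reparameterisation trick, and the decoder scores candidate bonds from pairs of latents. The gated propagation, valence constraints, and training objective of the paper are all omitted, and every size and weight below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_atoms, feat, latent = 6, 4, 3                  # illustrative sizes (assumptions)
X = rng.normal(size=(n_atoms, feat))             # toy atom features
A = (rng.random((n_atoms, n_atoms)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                   # symmetric toy adjacency, no self-loops

# --- Encoder: one round of message passing, then per-node mean / log-variance ---
W_enc = rng.normal(scale=0.1, size=(feat, feat))
H = np.tanh(X + A @ X @ W_enc)
W_mu = rng.normal(scale=0.1, size=(feat, latent))
W_logvar = rng.normal(scale=0.1, size=(feat, latent))
mu, logvar = H @ W_mu, H @ W_logvar

# --- Reparameterisation trick: z = mu + sigma * eps ---
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

# --- Decoder: probability of a bond between atoms i and j from their latents ---
W_dec = rng.normal(scale=0.1, size=(latent, latent))
logits = z @ W_dec @ z.T
edge_prob = 1.0 / (1.0 + np.exp(-(logits + logits.T) / 2))   # symmetrise
np.fill_diagonal(edge_prob, 0.0)
print(np.round(edge_prob, 2))
```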
CodeSearchNet Challenge: Evaluating the State of Semantic Code Search
TLDR: We present the CodeSearchNet Challenge, which consists of 99 natural language queries with about 4k expert relevance annotations of likely results from the CodeSearchNet Corpus.
  • 52 citations · 13 highly influential
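The evaluation setup can be pictured as ranking code by similarity to a natural language query in a shared embedding space. Below is a toy stand-in, assuming a made-up hashing "encoder" in place of the learned query and code encoders that the challenge actually benchmarks; the three-snippet corpus is likewise invented.

```python
import re
import zlib
import numpy as np

def embed(text, dim=64):
    """Toy bag-of-words embedding into a shared space (a stand-in for learned encoders)."""
    vec = np.zeros(dim)
    for tok in re.findall(r"[a-z]+", text.lower()):
        tok_rng = np.random.default_rng(zlib.crc32(tok.encode()))
        vec += tok_rng.normal(size=dim)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Tiny invented corpus: code snippet -> docstring.
corpus = {
    "def parse_json(s): return json.loads(s)": "parse a json string",
    "def read_file(p): return open(p).read()": "read file contents",
    "def sort_desc(xs): return sorted(xs, reverse=True)": "sort a list in descending order",
}

query = "load a json document"
# Rank code snippets by cosine similarity between the query and code+docstring embeddings.
ranked = sorted(corpus, key=lambda code: -(embed(query) @ embed(code + " " + corpus[code])))
for code in ranked:
    print(code)
```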
Structured Neural Summarization
TLDR: We develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text.
  • 62 citations · 12 highly influential
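A minimal sketch of the sequence-plus-graph idea, assuming a toy recurrent encoder and two made-up long-distance edges: the recurrent states capture token order, and one round of message passing over the extra edges adds the long-distance relationships that the sequence encoder alone would struggle with.

```python
import numpy as np

rng = np.random.default_rng(2)
tokens = ["the", "model", "reads", "tokens", "and", "it", "summarises", "them"]
n, d = len(tokens), 8
X = rng.normal(size=(n, d))                      # toy token embeddings

# --- Sequence component: a minimal recurrent encoder over the token order ---
W_in = rng.normal(scale=0.3, size=(d, d))
W_rec = rng.normal(scale=0.3, size=(d, d))
H = np.zeros((n, d))
h = np.zeros(d)
for t in range(n):
    h = np.tanh(X[t] @ W_in + h @ W_rec)
    H[t] = h

# --- Graph component: message passing over long-distance edges the sequence misses ---
# Hypothetical edges, e.g. pronoun "it" -> "model", "them" -> "tokens".
edges = [(5, 1), (7, 3)]
A = np.zeros((n, n))
for src, dst in edges:
    A[src, dst] = A[dst, src] = 1.0
W_gnn = rng.normal(scale=0.3, size=(d, d))
H = np.tanh(H + A @ H @ W_gnn)                   # one round of graph propagation on top

summary_state = H.mean(axis=0)                   # pooled state fed to an (omitted) decoder
```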
Alternating Runtime and Size Complexity Analysis of Integer Programs
We present a modular approach to automatic complexity analysis, based on a novel alternation between finding symbolic time bounds for program parts and using these to infer size bounds on program variables.
  • 74 citations · 11 highly influential
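A hypothetical worked example of that alternation: a runtime bound for the first loop yields a size bound on a variable, which in turn yields a runtime bound for the second loop. The toy program and the symbolic bounds in the comments are illustrative, not output of the paper's analysis.

```python
def toy_program(i, j):
    """Toy integer program: part A moves i into j, part B consumes j."""
    steps_a = steps_b = 0
    while i > 0:              # part A
        i -= 1
        j += 1
        steps_a += 1
    while j > 0:              # part B: its runtime depends on how large part A made j
        j -= 1
        steps_b += 1
    return steps_a, steps_b

# Alternation, for initial values i0 and j0:
#   runtime(A)          <= i0        (time bound for a program part)
#   size(j after A)     <= j0 + i0   (size bound inferred using runtime(A))
#   runtime(B)          <= j0 + i0   (time bound inferred using that size bound)
i0, j0 = 7, 3
a_steps, b_steps = toy_program(i0, j0)
assert a_steps <= i0 and b_steps <= j0 + i0
```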
Analyzing Program Termination and Complexity Automatically with AProVE
TLDR: We present the tool AProVE for automatic termination and complexity proofs of Java, C, Haskell, Prolog, and rewrite systems.
  • 85 citations · 9 highly influential
Generative Code Modeling with Graphs
TLDR: Generative models for source code are an interesting structured prediction problem, requiring reasoning about both hard syntactic and semantic constraints as well as about natural, likely programs.
  • 69 citations · 7 highly influential
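A toy sketch of that setting, assuming a made-up three-rule grammar and a hand-written scoring function in place of the paper's graph-conditioned model: expansions are restricted to grammar-legal productions (the hard syntactic constraint), while the scorer expresses a preference for "natural" programs.

```python
import random

# Tiny expression grammar (an illustrative assumption, not the paper's grammar).
GRAMMAR = {
    "Expr": [["Expr", "+", "Expr"], ["Var"], ["Lit"]],
    "Var":  [["x"], ["y"]],
    "Lit":  [["0"], ["1"]],
}

def score(production, depth):
    """Stand-in for the learned component: prefer one '+' at the top, short leaves below.
    The paper's model instead conditions on a graph built over the partial AST."""
    if production == ["Expr", "+", "Expr"]:
        return 1.0 if depth == 0 else -5.0
    return 0.0

def generate(symbol="Expr", depth=0):
    if symbol not in GRAMMAR:                    # terminal token
        return symbol
    # Hard syntactic constraint: only grammar-legal expansions are ever candidates;
    # the scorer merely ranks them, capturing "natural, likely programs".
    best = max(GRAMMAR[symbol], key=lambda p: score(p, depth) + random.random())
    return " ".join(generate(s, depth + 1) for s in best)

random.seed(0)
print(generate())   # e.g. "x + 0"
```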
TerpreT: A Probabilistic Programming Language for Program Induction
TLDR: We study machine learning formulations of inductive program synthesis; given input-output examples, we try to synthesize source code that maps inputs to corresponding outputs.
  • 101 citations · 6 highly influential
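A toy sketch of the formulation TerpreT makes precise: a program template with unknown parameters ("holes"), an interpreter, and input-output examples, with a solver filling the holes so that every example is reproduced. The two-hole linear template and the brute-force search standing in for the paper's inference back-ends (gradient descent, ILP, SMT, Sketch) are illustrative assumptions.

```python
from itertools import product

# Toy program template with two integer holes:  out = A_hole * inp + B_hole
def interpret(a_hole, b_hole, inp):
    """Interpreter for the templated program."""
    return a_hole * inp + b_hole

examples = [(1, 3), (4, 9), (10, 21)]            # generated by out = 2*inp + 1

def solve(examples, hole_range=range(-5, 6)):
    """Brute-force stand-in for an inference back-end: find hole values under
    which the interpreted template reproduces every input-output example."""
    for a, b in product(hole_range, repeat=2):
        if all(interpret(a, b, inp) == out for inp, out in examples):
            return a, b
    return None

print(solve(examples))   # -> (2, 1)
```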