Gated Graph Sequence Neural Networks
TLDR
This work studies feature learning techniques for graph-structured inputs and achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.
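A minimal sketch of the gated propagation step such networks use: messages are summed over incoming edges, then each node state is updated with a GRU-style gate. All shapes, parameter names, and the toy graph below are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, adj, W_msg, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated propagation step: aggregate neighbor messages,
    then update each node state with a GRU-style gate."""
    m = adj @ (h @ W_msg)                  # messages summed over incoming edges
    z = sigmoid(m @ Wz + h @ Uz)           # update gate
    r = sigmoid(m @ Wr + h @ Ur)           # reset gate
    h_tilde = np.tanh(m @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

n, d = 3, 4                                # nodes, hidden size
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)   # toy directed graph
h = rng.normal(size=(n, d))
params = [rng.normal(scale=0.1, size=(d, d)) for _ in range(7)]
h = ggnn_step(h, adj, *params)
print(h.shape)  # (3, 4)
```

Stacking several such steps lets information flow along longer paths in the graph before any output is read off the node states.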
DeepCoder: Learning to Write Programs
TLDR
The approach is to train a neural network to predict properties of the program that generated the outputs from the inputs to augment search techniques from the programming languages community, including enumerative search and an SMT-based solver.
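The guided-search idea can be illustrated with a toy DSL: a predicted score per operator (standing in here for the neural network's output) reorders an enumerative search so that likely operators are tried first. The DSL, the scores, and the example task below are all made up for illustration.

```python
from itertools import product

# Toy DSL of list-to-list operations (stand-ins, not the paper's DSL)
OPS = {
    "reverse": lambda xs: xs[::-1],
    "sort":    lambda xs: sorted(xs),
    "double":  lambda xs: [2 * x for x in xs],
    "drop1":   lambda xs: xs[1:],
}

def run(prog, xs):
    """Apply a sequence of DSL operators left to right."""
    for op in prog:
        xs = OPS[op](xs)
    return xs

def guided_search(examples, op_scores, max_len=3):
    """Enumerate operator sequences, trying high-scoring operators first,
    and return the first program consistent with all I/O examples."""
    order = sorted(OPS, key=lambda op: -op_scores.get(op, 0.0))
    for length in range(1, max_len + 1):
        for prog in product(order, repeat=length):
            if all(run(prog, i) == o for i, o in examples):
                return prog
    return None

# Hypothetical "neural" prediction of which operators appear
scores = {"double": 0.9, "sort": 0.7, "reverse": 0.1, "drop1": 0.1}
examples = [([3, 1, 2], [2, 4, 6])]
print(guided_search(examples, scores))  # ('double', 'sort')
```

The same predicted scores could instead seed an SMT encoding or prune branches; the point is that a learned prior over program properties shrinks the effective search space.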
A* Sampling
TLDR
This work shows how sampling from a continuous distribution can be converted into an optimization problem over continuous space and presents a new construction of the Gumbel process and A* Sampling, a practical generic sampling algorithm that searches for the maximum of a Gumbel process using A* search.
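The discrete special case that the Gumbel process generalizes is the Gumbel-max trick: perturb each log-weight with independent Gumbel noise and take the argmax, which yields an exact sample from the corresponding categorical distribution. A minimal sketch:

```python
import math
import random
from collections import Counter

def gumbel_max_sample(log_weights):
    """Draw an exact categorical sample proportional to exp(log_weights)
    by adding independent Gumbel(0, 1) noise and taking the argmax."""
    keys = [lw - math.log(-math.log(random.random())) for lw in log_weights]
    return max(range(len(keys)), key=keys.__getitem__)

random.seed(0)
log_w = [math.log(w) for w in (0.1, 0.3, 0.6)]
counts = Counter(gumbel_max_sample(log_w) for _ in range(10000))
print(counts)  # empirical frequencies approach 0.1, 0.3, 0.6
```

A* Sampling extends this from a finite argmax to a continuous domain, using A* search with bounds to locate the maximum of the perturbed process without enumerating it.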
HOP-MAP: Efficient Message Passing with High Order Potentials
TLDR
This work introduces two new classes of high order potentials, including composite HOPs that allow us to flexibly combine tractable HOPs using simple logical switching rules, presents efficient message update algorithms for the new HOPs, and improves upon the efficiency of message updates for a general class of existing HOPs.
Structured Generative Models of Natural Source Code
TLDR
A family of generative models for NSC that have three key properties: first, they incorporate both sequential and hierarchical structure; second, they learn a distributed representation of source code elements; and third, they integrate closely with a compiler.
Bimodal Modelling of Source Code and Natural Language
TLDR
The aim is to bring together recent work on statistical modelling of source code and work on bimodal models of images and natural language to build probabilistic models that jointly model short natural language utterances and source code snippets.
Learning Articulated Structure and Motion
TLDR
This work models the structure of one or more articulated objects, given a time series of two-dimensional feature positions, in terms of “stick figure” objects, under the assumption that the relative joint angles between sticks can change over time, but their lengths and connectivities are fixed.
Differentiable Programs with Neural Libraries
TLDR
A framework for combining differentiable programming languages with neural networks that creates end-to-end trainable systems that learn to write interpretable algorithms with perceptual components and explores the benefits of inductive biases for strong generalization and modularity.
Randomized Optimum Models for Structured Prediction
TLDR
This work explores a broader class of models, called Randomized Optimum models (RandOMs), which include Perturb-and-MAP models, and develops likelihood-based learning algorithms for RandOMs which, empirical results indicate, can produce better models than Perturb-and-MAP.
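The Perturb-and-MAP baseline that RandOMs generalize can be sketched on a toy chain MRF: add independent Gumbel noise to each unary potential, then solve the perturbed MAP problem, so repeated calls yield approximate samples. The potentials and chain below are illustrative, not from the paper.

```python
import math
import random

def gumbel():
    """One Gumbel(0, 1) draw."""
    return -math.log(-math.log(random.random()))

def map_chain(unary, pairwise):
    """Viterbi MAP for a chain MRF with scores unary[t][s] + pairwise[s][s']."""
    T, S = len(unary), len(unary[0])
    score = list(unary[0])
    back = []
    for t in range(1, T):
        prev, score, ptr = score, [], []
        for s in range(S):
            best = max(range(S), key=lambda p: prev[p] + pairwise[p][s])
            score.append(prev[best] + pairwise[best][s] + unary[t][s])
            ptr.append(best)
        back.append(ptr)
    s = max(range(S), key=score.__getitem__)
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]

def perturb_and_map(unary, pairwise):
    """Perturb the unary potentials with Gumbel noise, then take the MAP."""
    noisy = [[u + gumbel() for u in row] for row in unary]
    return map_chain(noisy, pairwise)

random.seed(0)
unary = [[0.0, 1.0], [0.5, 0.0], [0.0, 0.2]]  # 3 positions, 2 states
pairwise = [[0.3, 0.0], [0.0, 0.3]]           # prefers neighboring agreement
print(perturb_and_map(unary, pairwise))
```

Because only the unaries are perturbed (a low-rank perturbation), the draws are approximate samples from the Gibbs distribution; RandOMs keep this sample-by-optimization structure while broadening the class of noise models.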
TerpreT: A Probabilistic Programming Language for Program Induction
TLDR
The aims are to develop new machine learning approaches based on neural networks and graphical models, and to understand the capabilities of machine learning techniques relative to traditional alternatives, such as those based on constraint solving from the programming languages community.