Publications
Spectral Networks and Locally Connected Networks on Graphs
TLDR
This paper considers possible generalizations of CNNs to signals defined on more general domains without the action of a translation group, and proposes two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian.
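The spectral construction mentioned above filters signals in the eigenbasis of the graph Laplacian, which plays the role of a Fourier basis on an irregular domain. A minimal sketch of that idea, using a toy 4-node cycle graph and a heat-kernel-style filter as illustrative assumptions (not taken from the paper):

```python
import numpy as np

# Toy graph: 4-node cycle, given by its adjacency matrix (assumed example).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                        # combinatorial graph Laplacian

# The Laplacian eigenvectors act as a graph Fourier basis.
eigvals, U = np.linalg.eigh(L)

# A spectral filter is a set of multipliers on the eigenvalues; in the paper
# these are learned, here we use a fixed heat-kernel-style low-pass filter.
g = np.exp(-eigvals)

x = np.array([1.0, 0.0, 0.0, 0.0])   # a signal on the graph's nodes
x_hat = U.T @ x                  # graph Fourier transform
y = U @ (g * x_hat)              # filter in the spectral domain, invert
```

Because the constant eigenvector (eigenvalue 0) passes through unchanged, the filtered signal keeps the same total mass while smoothing it across neighboring nodes.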
End-To-End Memory Networks
TLDR
A neural network with a recurrent attention model over a possibly large external memory that is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings.
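The core read operation of such a memory network is soft attention over memory slots: match a query embedding against each stored memory, and return a probability-weighted sum, so the whole read is differentiable end-to-end. A minimal sketch with untrained random embeddings as stand-ins (sizes and values are illustrative assumptions, not the paper's):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d, n_mem = 8, 5                            # embedding size, memory slots (toy)
memory = rng.standard_normal((n_mem, d))   # embedded memory entries
query = rng.standard_normal(d)             # embedded question

# Soft attention: score each slot against the query, normalize to a
# distribution, then read a weighted combination of the memories.
p = softmax(memory @ query)    # attention weights over memory slots
output = p @ memory            # weighted read vector, differentiable throughout
```

In the full model this read is repeated over multiple "hops", with the output of one hop added to the query for the next.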
Personalizing Dialogue Agents: I have a dog, do you have pets too?
TLDR
This work collects data and trains models to condition on their given profile information, and on information about the person they are talking to, resulting in improved dialogues, as measured by next-utterance prediction.
Geometric Deep Learning: Going beyond Euclidean data
TLDR
Deep neural networks are used to solve a broad range of problems in computer vision, natural language processing, and audio analysis, where the invariances of these structures are built into the networks used to model them.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Incremental gradient on the Grassmannian for online foreground and background separation in subsampled video
TLDR
This work presents GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data, and considers the specific application of background and foreground separation in video.
Learning Multiagent Communication with Backpropagation
TLDR
A simple neural model called CommNet, which uses continuous communication for fully cooperative tasks, is explored; the agents' ability to learn to communicate amongst themselves is demonstrated, yielding improved performance over non-communicative agents and baselines.
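Because the communication is a continuous vector rather than discrete symbols, a single communication step can be written as ordinary differentiable tensor operations, which is what makes the model trainable with backpropagation. A sketch of one such step, where each agent receives the mean of the other agents' hidden states (weights and sizes are untrained illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, d = 4, 8                        # toy sizes (assumed, not from the paper)
H = rng.standard_normal((n_agents, d))    # per-agent hidden states
W_h = 0.1 * rng.standard_normal((d, d))   # untrained weights, for illustration
W_c = 0.1 * rng.standard_normal((d, d))

# Communication channel: each agent gets the mean of the OTHER agents'
# hidden states. Subtracting H from the column sum excludes the agent itself.
C = (H.sum(axis=0, keepdims=True) - H) / (n_agents - 1)

# One update: mix own state and communication vector, then a nonlinearity.
# Every operation is differentiable, so gradients flow through the channel.
H_next = np.tanh(H @ W_h + C @ W_c)
```

Stacking several such steps lets information propagate between all agents before they act.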
A Randomized Algorithm for Principal Component Analysis
TLDR
This work describes an efficient algorithm for the low-rank approximation of matrices that produces accuracy that is very close to the best possible accuracy, for matrices of arbitrary sizes.
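The randomized approach described above works by sketching the range of the matrix with a random test matrix, orthonormalizing, and then solving a small exact SVD in the sampled subspace. A minimal sketch of that scheme on a synthetic low-rank matrix (the test matrix, oversampling, and power-iteration counts are illustrative choices, not the paper's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic exactly-rank-10 test matrix (illustrative, not from the paper).
m, n, k = 200, 100, 10
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

def randomized_low_rank(A, rank, oversample=10, power_iters=2):
    """Rank-`rank` approximation via a Gaussian random sketch of A's range."""
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega                          # sample the range of A
    for _ in range(power_iters):           # sharpen slowly decaying spectra
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for that range
    B = Q.T @ A                            # small projected matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

U, s, Vt = randomized_low_rank(A, rank=k)
A_approx = U @ np.diag(s) @ Vt
err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
```

Since the dominant costs are matrix-vector products with a thin sketch, this scales to matrices far too large for a full SVD, while the oversampling and power iterations keep the accuracy close to the best rank-k approximation.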
Tracking the World State with Recurrent Entity Networks
TLDR
The EntNet sets a new state of the art on the bAbI tasks, is the first method to solve all of them in the 10k-training-example setting, and can generalize past its training horizon.
Dialogue Natural Language Inference
TLDR
This paper proposes a method demonstrating that a model trained on Dialogue NLI can be used to improve the consistency of a dialogue model, and evaluates the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model's consistency.