Google, University of Edinburgh
An Introduction to Conditional Random Fields for Relational Learning
A solution to this problem is to directly model the conditional distribution p(y|x), which is sufficient for classification, and this is the approach taken by conditional random fields.
An Introduction to Conditional Random Fields
This survey describes conditional random fields, a popular probabilistic method for structured prediction, along with methods for inference and parameter estimation, including practical issues in implementing large-scale CRFs.
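The conditional modelling idea behind CRFs can be sketched for a toy linear-chain model. Everything below is illustrative (made-up emission and transition scores, two labels, three time steps, and a brute-force partition function), not the survey's implementation:

```python
import math
from itertools import product

# Hypothetical toy scores: emission[t][y] scores label y at position t,
# transition[y_prev][y] scores adjacent label pairs. Numbers are arbitrary.
emission = [[1.0, 0.2], [0.3, 1.5], [0.8, 0.1]]
transition = [[0.5, -0.2], [-0.4, 0.9]]

def sequence_score(labels):
    """Unnormalised log-score of one label sequence given the input x."""
    s = emission[0][labels[0]]
    for t in range(1, len(labels)):
        s += transition[labels[t - 1]][labels[t]] + emission[t][labels[t]]
    return s

def log_partition():
    """Brute-force log Z(x): log-sum over all 2^3 label sequences."""
    return math.log(sum(math.exp(sequence_score(y))
                        for y in product(range(2), repeat=3)))

def conditional_prob(labels):
    """p(y | x) = exp(score(y, x)) / Z(x) -- the distribution a CRF models
    directly, without modelling p(x)."""
    return math.exp(sequence_score(labels) - log_partition())

total = sum(conditional_prob(y) for y in product(range(2), repeat=3))
```

In practice the partition function is computed with the forward algorithm rather than by enumeration; the brute-force version above just makes the normalisation explicit.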
Introduction to Statistical Relational Learning
Autoencoding Variational Inference For Topic Models
This work presents what is, to the authors' knowledge, the first effective AEVB-based inference method for latent Dirichlet allocation (LDA), which they call Autoencoded Variational Inference For Topic Models (AVITM).
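The core reparameterisation step can be sketched in miniature: an encoder's outputs (stood in for here by fixed `mu` and `log_sigma` values) define a logistic-normal draw of topic proportions, which are then mixed with topic-word distributions. All sizes and numbers below are illustrative assumptions, not the paper's model:

```python
import math
import random

random.seed(0)

# Toy sizes: vocabulary of 4 words, 2 topics. Each beta row is a
# topic-word distribution; mu/log_sigma stand in for encoder outputs.
beta = [[0.6, 0.2, 0.1, 0.1],
        [0.1, 0.1, 0.3, 0.5]]
mu, log_sigma = [0.2, -0.1], [-1.0, -1.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def sample_doc_word_probs():
    """Reparameterised draw of topic proportions (logistic normal),
    then mix the topics: p(w) = sum_k theta_k * beta[k][w]."""
    eps = [random.gauss(0.0, 1.0) for _ in mu]
    theta = softmax([m + math.exp(ls) * e
                     for m, ls, e in zip(mu, log_sigma, eps)])
    return [sum(theta[k] * beta[k][w] for k in range(len(beta)))
            for w in range(len(beta[0]))]

probs = sample_doc_word_probs()
```

Because the noise enters through a deterministic transform, gradients can flow back to the encoder parameters, which is what makes AEVB-style training possible here.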
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
- Akash Srivastava, L. Valkov, Chris Russell, Michael U Gutmann, Charles Sutton
- Computer Science, NIPS
- 22 May 2017
VEEGAN is introduced, which features a reconstructor network that reverses the action of the generator by mapping from data to noise; it resists mode collapse to a far greater extent than other recent GAN variants and produces more realistic samples.
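The reconstructor idea can be illustrated with a deliberately simple 1-D sketch: a term of the form E_z ||z - F(G(z))||^2 is small only when the reconstructor F inverts the generator G on its outputs, which pushes G to cover the noise distribution. The linear networks below are hypothetical stand-ins, not VEEGAN's architecture or full objective:

```python
import random

random.seed(1)

def generator(z):
    """Hypothetical 1-D generator: maps noise to data space."""
    return 2.0 * z + 1.0

def reconstructor(x):
    """Hypothetical reconstructor: maps data back to noise space.
    Here it exactly inverts the generator, so the penalty is zero."""
    return (x - 1.0) / 2.0

def reconstruction_penalty(n=1000):
    """Monte Carlo estimate of E_z ||z - F(G(z))||^2, the kind of
    reconstruction term VEEGAN adds to discourage mode collapse."""
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]
    return sum((z - reconstructor(generator(z))) ** 2 for z in zs) / n

penalty = reconstruction_penalty()
```

If the generator collapsed many distinct `z` values to the same output, no reconstructor could map them all back, and this penalty would necessarily be large.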
Dynamic conditional random fields: factorized probabilistic models for labeling and segmenting sequence data
On a natural-language chunking task, it is shown that a DCRF performs better than a series of linear-chain CRFs, achieving comparable performance using only half the training data.
A Convolutional Attention Network for Extreme Summarization of Source Code
An attentional neural network is introduced for the problem of extreme summarization of source code snippets into short, descriptive, function-name-like summaries; it employs convolution on the input tokens to detect local, time-invariant and long-range topical attention features in a context-dependent way.
Sequence-to-point learning with neural networks for nonintrusive load monitoring
- Chaoyun Zhang, Mingjun Zhong, Zong‐Hui Wang, N. Goddard, Charles Sutton
- Computer Science, AAAI
- 29 December 2016
This paper proposes sequence-to-point learning, in which the input is a window of the mains signal and the output is a single point of the target appliance signal, and trains the model using convolutional neural networks.
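The windowing scheme can be sketched as plain data preparation: each training input is a window of the mains readings and each target is a single appliance reading. Pairing the target with the window's midpoint is an assumption made here for illustration:

```python
def seq2point_pairs(mains, appliance, window):
    """Build sequence-to-point training pairs: each input is a window of
    the mains signal, each target is the appliance reading at the
    window's midpoint (midpoint alignment assumed for this sketch)."""
    assert window % 2 == 1 and len(mains) == len(appliance)
    half = window // 2
    pairs = []
    for i in range(half, len(mains) - half):
        pairs.append((mains[i - half:i + half + 1], appliance[i]))
    return pairs

# Tiny made-up readings, just to show the shapes of the pairs.
mains = [3, 5, 9, 7, 4, 6, 8]
appliance = [0, 1, 4, 3, 0, 2, 3]
pairs = seq2point_pairs(mains, appliance, window=3)
```

A CNN regressor would then be trained on these (window, point) pairs, one window per output value.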
Suggesting accurate method and class names
- Miltiadis Allamanis, Earl T. Barr, C. Bird, Charles Sutton
- Computer Science, ESEC/SIGSOFT FSE
- 30 August 2015
A neural probabilistic language model for source code, specifically designed for the method naming problem, is introduced, along with a variant that is, to the authors' knowledge, the first that can propose neologisms: names that have not appeared in the training corpus.
Mining source code repositories at massive scale using language modeling
- Miltiadis Allamanis, Charles Sutton
- Computer Science, 10th Working Conference on Mining Software…
- 18 May 2013
This paper builds the first giga-token probabilistic language model of source code, based on 352 million lines of Java, and proposes new metrics that measure the complexity of a code module and the topical centrality of a module to a software project.
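A language-model notion of code complexity can be sketched with a toy smoothed bigram model: a module's per-token cross-entropy under the model measures how predictable (unsurprising) its code is. The add-alpha smoothing and tiny token streams below are illustrative stand-ins for the paper's large-scale n-gram model:

```python
import math
from collections import Counter

def bigram_cross_entropy(train_tokens, test_tokens, vocab_size, alpha=1.0):
    """Per-token cross-entropy (bits) of test_tokens under an
    add-alpha-smoothed bigram model trained on train_tokens.
    Lower values mean more predictable, more conventional code."""
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    unigrams = Counter(train_tokens)
    total, n = 0.0, 0
    for prev, cur in zip(test_tokens, test_tokens[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        total -= math.log2(p)
        n += 1
    return total / n

# Tiny made-up token streams: a familiar Java-style loop as training
# data, and a near-identical loop to score.
train = "for ( int i = 0 ; i < n ; i ++ )".split()
test = "for ( int j = 0 ; j < m ; j ++ )".split()
h = bigram_cross_entropy(train, test, vocab_size=50)
```

On real corpora the same idea scales up to much larger n-gram models over millions of lines, with cross-entropy serving as the complexity signal.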