Publications (sorted by influence)
Sequence-to-point learning with neural networks for nonintrusive load monitoring
TLDR: We propose sequence-to-point learning with convolutional neural networks for energy disaggregation. (See the sketch after this entry.)
  • Citations: 108 · Influence: 20
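A minimal sketch of the sequence-to-point idea above, assuming a PyTorch-style model; the window length and layer sizes here are illustrative, not the paper's exact architecture. The network maps a window of mains readings to the appliance power at the window's midpoint:

```python
# Hedged sketch of sequence-to-point learning: a 1-D CNN reads a window of
# mains readings and predicts the target appliance's power at the midpoint.
import torch
import torch.nn as nn

class Seq2Point(nn.Module):
    def __init__(self, window_len=599):  # illustrative window length
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 30, kernel_size=10, padding="same"), nn.ReLU(),
            nn.Conv1d(30, 40, kernel_size=8, padding="same"), nn.ReLU(),
            nn.Conv1d(40, 50, kernel_size=6, padding="same"), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(50 * window_len, 1024), nn.ReLU(),
            nn.Linear(1024, 1),  # single point: midpoint appliance power
        )

    def forward(self, mains_window):               # (batch, 1, window_len)
        return self.net(mains_window).squeeze(-1)  # (batch,)

model = Seq2Point()
windows = torch.randn(4, 1, 599)  # four standardized mains windows
midpoints = model(windows)        # one appliance-power estimate per window
```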
Piecewise Training for Undirected Models
TLDR: We study piecewise training, an intuitively appealing procedure that separately trains disjoint pieces of a loopy graph. (See the sketch after this entry.)
  • Citations: 196 · Influence: 17
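As a rough illustration of the procedure above (notation mine, not the paper's): for an undirected model p(y | x) ∝ ∏_a Ψ_a(y_a, x), exact training needs the global partition function Z(x), while piecewise training normalizes each piece locally:

```latex
% Hedged sketch: exact vs. piecewise training objectives.
\ell_{\text{exact}}(\theta) = \sum_a \log \Psi_a(y_a, x) - \log Z(x)
\qquad
\ell_{\text{PW}}(\theta)    = \sum_a \Big[ \log \Psi_a(y_a, x)
                              - \log \sum_{y'_a} \Psi_a(y'_a, x) \Big]
```

Because each local normalizer sums over a single piece, no inference over the full loopy graph is required.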
Probabilistic Inference over RFID Streams in Mobile Environments
TLDR: In this paper, we address the problem of translating noisy, incomplete raw streams from mobile RFID readers into clean, precise event streams with location information.
  • Citations: 134 · Influence: 10
Piecewise pseudolikelihood for efficient training of conditional random fields
TLDR: We introduce piecewise pseudolikelihood, which retains the computational efficiency of pseudolikelihood but can have much better accuracy, at five to ten times less training time. (See the sketch after this entry.)
  • Citations: 185 · Influence: 9
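Continuing the notation of the earlier piecewise sketch (again mine, not the paper's): piecewise pseudolikelihood applies the pseudolikelihood approximation within each locally normalized piece, so each term conditions one variable on the rest of its piece:

```latex
% Hedged sketch: pseudolikelihood applied within each piece a.
\ell_{\text{PWPL}}(\theta) = \sum_a \sum_{i \in a}
    \log p_a^{\text{local}}\big(y_i \mid y_{a \setminus i}, x\big)
```

Every summation now ranges over the values of a single variable, which is what keeps training cheap even when pieces are large.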
Parameter-free probabilistic API mining across GitHub
TLDR: We present PAM (Probabilistic API Miner), a near-parameter-free probabilistic algorithm for mining the most interesting API call patterns.
  • Citations: 48 · Influence: 8
Signal Aggregate Constraints in Additive Factorial HMMs, with Application to Energy Disaggregation
TLDR: We incorporate signal aggregate constraints (SACs) into an additive factorial hidden Markov model (AFHMM) to formulate energy disaggregation problems where only one mixture signal is assumed to be observed. (See the sketch after this entry.)
  • Citations: 49 · Influence: 8
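A rough sketch of what a signal aggregate constraint can look like as a soft penalty (the symbols and penalty form are my illustration, not necessarily the paper's exact formulation): each appliance's inferred signal is encouraged to sum, over the window, to a known expected total:

```latex
% Hedged sketch: disaggregation with soft signal aggregate constraints.
% \bar{x}_t: observed mains reading;  y^{(i)}_t: appliance i's signal;
% C^{(i)}: appliance i's expected total consumption over the window.
\min_{y}\; \sum_t \Big( \bar{x}_t - \sum_i y^{(i)}_t \Big)^2
          + \sum_i \lambda_i \Big( \sum_t y^{(i)}_t - C^{(i)} \Big)^2
```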
Composition of Conditional Random Fields for Transfer Learning
TLDR: We perform joint decoding of separately trained sequence models, preserving uncertainty between the tasks and allowing information from the new task to affect predictions on the old task.
  • Citations: 59 · Influence: 7
A Subsequence Interleaving Model for Sequential Pattern Mining
TLDR: We present a novel subsequence interleaving model based on a probabilistic model of the sequence database, which allows us to search for the most compressing set of patterns without designing a specific encoding scheme.
  • Citations: 41 · Influence: 6
Reducing Weight Undertraining in Structured Discriminative Learning
TLDR: We introduce several new feature bagging methods, in which separate models are trained on subsets of the original features and combined using a mixture model or a product of experts. (See the sketch after this entry.)
  • Citations: 50 · Influence: 6
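A minimal runnable sketch of the product-of-experts variant described above, assuming scikit-learn logistic regressions as the per-subset models (my choice, not the paper's): each expert is trained on a random feature subset, and the combination sums their log-odds:

```python
# Hedged sketch: feature bagging with a product-of-experts combination.
# Each expert sees only a random subset of features; a product of experts
# multiplies their probabilities, i.e. sums their logits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=40, random_state=0)

experts = []
for _ in range(10):
    subset = rng.choice(X.shape[1], size=10, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(X[:, subset], y)
    experts.append((subset, clf))

# Product of experts: sum log-odds across experts, then threshold.
logits = sum(clf.decision_function(X[:, subset]) for subset, clf in experts)
poe_prediction = (logits > 0).astype(int)
print("training accuracy:", (poe_prediction == y).mean())
```

A mixture-model combination would instead average the experts' probabilities rather than summing their log-odds.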
Quasi-Newton Methods for Markov Chain Monte Carlo
TLDR: We propose MCMC samplers that make use of quasi-Newton approximations, which approximate the Hessian of the target distribution from previous samples and gradients generated by the sampler. (See the sketch after this entry.)
  • Citations: 57 · Influence: 5
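A simplified sketch of the flavor of such samplers, not the paper's algorithm: a Metropolis-adjusted Langevin sampler whose proposal is preconditioned by a crude diagonal curvature estimate. Keeping the estimate fixed (unlike a true quasi-Newton scheme, which would refine it from sampler history) keeps the Metropolis correction simple:

```python
# Hedged sketch: Langevin MCMC preconditioned by a crude, *fixed* diagonal
# curvature estimate; a fixed preconditioner preserves detailed balance
# with an ordinary Metropolis-Hastings test.
import numpy as np

rng = np.random.default_rng(1)
scales = np.array([1.0, 100.0])              # ill-conditioned Gaussian target

def log_p(x):      return -0.5 * np.sum(x**2 / scales)
def grad_log_p(x): return -x / scales

# Diagonal Hessian estimate from finite differences of the gradient;
# a quasi-Newton method would instead build this from past samples.
x0, eps = rng.normal(size=2), 1e-4
H_diag = np.array([(grad_log_p(x0 + eps * np.eye(2)[i])[i]
                    - grad_log_p(x0)[i]) / eps for i in range(2)])
precond = 1.0 / np.abs(H_diag)               # approx inverse Hessian (diag)

def proposal_mean(x, step):
    return x + 0.5 * step * precond * grad_log_p(x)

def log_q(x_to, x_from, step):               # Gaussian proposal log-density
    d = x_to - proposal_mean(x_from, step)   # (normalizer cancels in ratio)
    return -0.5 * np.sum(d**2 / (step * precond))

x, step, samples = x0, 0.5, []
for _ in range(5000):
    prop = proposal_mean(x, step) + np.sqrt(step * precond) * rng.normal(size=2)
    log_alpha = (log_p(prop) + log_q(x, prop, step)
                 - log_p(x) - log_q(prop, x, step))
    if np.log(rng.random()) < log_alpha:
        x = prop
    samples.append(x)
print("sample variances:", np.var(samples, axis=0))  # should approach scales
```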