Publications (sorted by influence)
Church: a language for generative models
TLDR: We introduce Church, a universal language for describing stochastic generative processes and conditional queries over them.
Citations: 662 · Influence: 71
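Church itself is a Scheme-like language; as a rough illustration of what "conditional queries over a stochastic generative process" means, here is a minimal Python sketch in which rejection sampling stands in for Church's query semantics (the model and names are illustrative, not from the paper):

```python
import random

# Toy generative process: draw a coin's bias, then flip it twice.
def generative_process():
    bias = random.random()                              # latent variable, Uniform[0, 1]
    flips = [random.random() < bias for _ in range(2)]
    return bias, flips

# Conditional query by rejection: rerun the process, keeping only the
# runs consistent with the observed condition on the flips.
def query(condition, n_samples=10_000):
    samples = []
    while len(samples) < n_samples:
        bias, flips = generative_process()
        if condition(flips):
            samples.append(bias)
    return samples

# Conditioning on two heads shifts the posterior mean of the bias
# from the prior mean 0.5 up to 0.75 (a Beta(3, 1) posterior).
posterior = query(lambda flips: all(flips))
print(sum(posterior) / len(posterior))
```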
Training generative neural networks via Maximum Mean Discrepancy optimization
TLDR: We propose an approximation to adversarial learning that replaces the adversary with a closed-form nonparametric two-sample test statistic based on the Maximum Mean Discrepancy.
Citations: 303 · Influence: 48
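As a sketch of the statistic named above (not the paper's code), the squared MMD between a batch of real samples and a batch of generated samples is computable in closed form from pairwise kernel evaluations; minimizing it with respect to the generator's parameters is what replaces the adversary:

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    # Pairwise squared distances, then a Gaussian (RBF) kernel.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-d2 / (2.0 * bandwidth**2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimator of MMD^2 = E k(x,x') - 2 E k(x,y) + E k(y,y').
    return (rbf_kernel(x, x, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean())

x = np.random.randn(100, 2)            # stand-in for real data
y = np.random.randn(100, 2) + 1.0      # stand-in for generator output
print(mmd2(x, y))                      # large when the two samples differ
```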
Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data
TLDR: We show that it is possible to compute nonvacuous numerical bounds on the generalization error of deep stochastic neural networks with millions of parameters, despite training sets one or more orders of magnitude smaller than the number of parameters.
Citations: 286 · Influence: 27
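For context, one standard PAC-Bayes bound of the kind such computations optimize (a sketch in our notation, under the usual i.i.d. assumptions; the paper's exact statement may differ) says that with probability at least 1 − δ over a training sample of size m, simultaneously for all posteriors Q over network weights:

```latex
\[
  \mathrm{kl}\!\left( \hat{e}(Q) \,\middle\|\, e(Q) \right)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m}{\delta}}{m - 1},
\]
% where P is a prior over weights fixed before seeing the data,
% \hat{e}(Q) and e(Q) are the empirical and true errors of the
% stochastic network, and kl is the binary KL divergence.
```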
The Mondrian Process
TLDR: We describe a novel class of distributions, called Mondrian processes, which can be interpreted as probability distributions over kd-tree data structures.
Citations: 120 · Influence: 21
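A minimal sketch of sampling such a distribution in two dimensions (illustrative, not the paper's code): axis-aligned cuts arrive with a cost that is exponentially distributed with rate equal to the cell's total side length, and recursion stops when a budget is exhausted, yielding a kd-tree of nested boxes:

```python
import random

def sample_mondrian(box, budget):
    """box = [(lo, hi), (lo, hi)]; returns a nested kd-tree of boxes."""
    lengths = [hi - lo for lo, hi in box]
    cost = random.expovariate(sum(lengths))
    if cost > budget:                      # budget exhausted: leaf cell
        return {"box": box}
    # Pick a dimension proportional to its length; cut uniformly within it.
    d = random.choices(range(len(box)), weights=lengths)[0]
    lo, hi = box[d]
    cut = random.uniform(lo, hi)
    left_box = [b if i != d else (lo, cut) for i, b in enumerate(box)]
    right_box = [b if i != d else (cut, hi) for i, b in enumerate(box)]
    return {
        "box": box, "dim": d, "cut": cut,
        "left": sample_mondrian(left_box, budget - cost),
        "right": sample_mondrian(right_box, budget - cost),
    }

tree = sample_mondrian([(0.0, 1.0), (0.0, 1.0)], budget=3.0)
```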
Enhancing Server Availability and Security Through Failure-Oblivious Computing
TLDR: We present a new technique, failure-oblivious computing, that enables servers to execute through memory errors without memory corruption.
Citations: 360 · Influence: 19
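The actual technique is a compiler transformation for C programs; as a language-agnostic toy illustration of the policy it implements, out-of-bounds writes are discarded and out-of-bounds reads return manufactured values, so execution continues without corrupting memory:

```python
# Toy sketch only: the real system inserts these checks automatically
# at compile time; this class just mimics the resulting behavior.
class FailureObliviousBuffer:
    def __init__(self, size):
        self._data = [0] * size

    def write(self, index, value):
        if 0 <= index < len(self._data):
            self._data[index] = value
        # else: the out-of-bounds write is silently discarded

    def read(self, index):
        if 0 <= index < len(self._data):
            return self._data[index]
        return 0   # manufacture a value for an out-of-bounds read

buf = FailureObliviousBuffer(4)
buf.write(10, 99)        # would corrupt memory; here it is discarded
print(buf.read(10))      # manufactured 0 instead of a crash
```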
Mondrian Forests: Efficient Online Random Forests
TLDR: We introduce Mondrian forests, a new random-forest variant that can be trained incrementally in an efficient manner.
Citations: 141 · Influence: 18
A study of the effect of JPG compression on adversarial images
TLDR: Neural network image classifiers are known to be vulnerable to adversarial images: inputs that have undergone imperceptible perturbations optimized to cause strong misclassification. We study the extent to which JPG compression removes these perturbations.
Citations: 185 · Influence: 16
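The defense under study amounts to round-tripping an input image through JPG compression before classification; a minimal sketch using Pillow (the quality setting is illustrative):

```python
from io import BytesIO
from PIL import Image

def jpg_round_trip(image: Image.Image, quality: int = 75) -> Image.Image:
    # Encode to JPG in memory, then decode: small adversarial
    # perturbations may not survive the lossy compression.
    buffer = BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer)

# Compare classifier(adversarial) against classifier(jpg_round_trip(adversarial)).
```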
Neural Network Matrix Factorization
TLDR: We replace the inner product by a multi-layer feed-forward neural network and learn by alternating between optimizing the network for fixed latent features, and optimizing the latent features for a fixed network.
Citations: 85 · Influence: 14
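A minimal sketch of the model's forward pass (layer sizes and initialization are illustrative, not the paper's settings): the predicted entry for row i and column j is a small feed-forward network applied to the concatenated latent features, in place of their inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols, d, h = 50, 40, 8, 32
U = rng.normal(size=(n_rows, d))       # row latent features
V = rng.normal(size=(n_cols, d))       # column latent features
W1 = rng.normal(size=(2 * d, h)); b1 = np.zeros(h)
W2 = rng.normal(size=(h, 1));     b2 = np.zeros(1)

def predict(i, j):
    z = np.concatenate([U[i], V[j]])   # replaces the inner product U[i] @ V[j]
    hidden = np.maximum(z @ W1 + b1, 0.0)   # ReLU hidden layer
    return (hidden @ W2 + b2)[0]

# Training alternates: fit (W1, b1, W2, b2) with U, V fixed, then fit
# U, V with the network fixed, minimizing squared error on observed entries.
print(predict(3, 7))
```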
Random function priors for exchangeable arrays with applications to graphs and relational data
TLDR: We obtain a flexible yet simple Bayesian nonparametric model by placing a Gaussian process prior on the parameter function that constitutes the model's natural parameter.
Citations: 101 · Influence: 13
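In Aldous–Hoover form, a sketch of this model class (our notation, not necessarily the paper's; sigma is a link function mapping real values to [0, 1]):

```latex
\begin{align*}
  U_1, U_2, \ldots &\overset{\text{i.i.d.}}{\sim} \mathrm{Uniform}[0, 1]
    && \text{latent node variables} \\
  W &\sim \mathcal{GP}(0, \kappa)
    && \text{Gaussian process prior on the parameter function} \\
  X_{ij} \mid W, U_i, U_j &\sim \mathrm{Bernoulli}\bigl( \sigma(W(U_i, U_j)) \bigr)
    && \text{conditionally independent edges}
\end{align*}
```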
Stabilizing the Lottery Ticket Hypothesis
TLDR: Pruning is a well-established technique for removing unnecessary structure from neural networks after training to improve the performance of inference. We show that the lottery ticket procedure can be stabilized at scale by rewinding surviving weights to their values from an early training iteration, rather than resetting them to their initial values.
Citations: 57 · Influence: 12
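A toy, runnable sketch of iterative magnitude pruning with rewinding (the "training" step is a placeholder and all constants are illustrative): train briefly, record the weights, then repeatedly train, prune the lowest-magnitude surviving weights, and rewind the survivors to the recorded values rather than to initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))
mask = np.ones_like(w)

def train(w, mask, steps=10):
    # Placeholder "training": shrink weights slightly, standing in for SGD.
    for _ in range(steps):
        w = w - 0.01 * np.sign(w)
    return w * mask

w = train(w, mask, steps=2)          # train briefly, to "iteration k"
rewind = w.copy()                    # record the rewind point

for _ in range(3):                   # pruning rounds
    w = train(w, mask)               # train to completion
    alive = np.abs(w[mask == 1])
    threshold = np.quantile(alive, 0.2)   # prune lowest-magnitude 20%
    mask = np.where((np.abs(w) <= threshold) & (mask == 1), 0.0, mask)
    w = rewind * mask                # rewind survivors, not reset to init
```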