Publications
Do Deep Generative Models Know What They Don't Know?
TLDR
The density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses from those of house numbers, and such behavior persists even when the flows are restricted to constant-volume transformations.
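The finding is easy to probe empirically: score held-out in-distribution images and images from another dataset under the same trained model and compare the likelihoods. A minimal sketch, assuming a trained density model exposing a hypothetical log_prob method:

    import numpy as np

    def mean_log_likelihood(model, images):
        # Score each image under the same trained density model (e.g. a flow).
        return float(np.mean([model.log_prob(x) for x in images]))

    # The counter-intuitive result: a model trained on one dataset (e.g. CIFAR-10)
    # can assign a *higher* mean likelihood to images from another (e.g. SVHN).
    # ll_in  = mean_log_likelihood(flow, cifar_test_images)
    # ll_ood = mean_log_likelihood(flow, svhn_images)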
Normalizing Flows for Probabilistic Modeling and Inference
TLDR
This review places special emphasis on the fundamental principles of flow design, discusses foundational topics such as expressive power and computational trade-offs, and summarizes the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
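The core mechanism the review builds on is the change-of-variables formula: a flow transforms a simple base density through an invertible map and corrects the density by the log-determinant of the Jacobian. A minimal sketch with a single element-wise affine transformation (illustrative only, not any specific flow from the review):

    import numpy as np

    def affine_flow_log_prob(x, scale_log, shift):
        # Invert the forward map x = exp(scale_log) * z + shift back to the base variable z.
        z = (x - shift) * np.exp(-scale_log)
        # Standard-normal base density evaluated at z.
        base_log_prob = -0.5 * np.sum(z**2 + np.log(2 * np.pi))
        # Change of variables: subtract log|det Jacobian| of the forward map.
        return base_log_prob - np.sum(scale_log)

    x = np.array([0.5, -1.2])
    print(affine_flow_log_prob(x, scale_log=np.zeros(2), shift=np.zeros(2)))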
Improving Document Ranking with Dual Word Embeddings
TLDR
This paper investigates the popular neural word embedding method Word2vec as a source of evidence in document ranking and proposes the Dual Embedding Space Model (DESM), which captures evidence of whether a document is about a query term.
A Dual Embedding Space Model for Document Ranking
TLDR
The proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches, and shows that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF.
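The scoring idea behind DESM can be sketched in a few lines: represent each document by the centroid of its words' OUT embeddings and score a query by the average cosine similarity between each query term's IN embedding and that centroid. A minimal sketch, assuming in_emb and out_emb are dictionaries mapping terms to vectors (hypothetical names, not the paper's code):

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def desm_score(query_terms, doc_terms, in_emb, out_emb):
        # Document representation: centroid of the OUT embeddings of its terms.
        doc_centroid = np.mean([out_emb[t] for t in doc_terms], axis=0)
        # Query-document score: mean IN-OUT cosine similarity over query terms.
        return float(np.mean([cosine(in_emb[t], doc_centroid) for t in query_terms]))

Such a score is typically used to re-rank documents already retrieved by a term-matching signal, rather than to retrieve from the full collection.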
Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality
TLDR
This work proposes a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods to determine whether or not inputs reside in the typical set.
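The test itself is simple: estimate the model's entropy from the average negative log-likelihood of training data, then flag a new batch as out-of-distribution if its average negative log-likelihood falls outside an epsilon-band around that estimate. A minimal sketch, assuming per-example log-likelihoods have already been computed (hypothetical helper names):

    import numpy as np

    def typicality_test(train_log_probs, batch_log_probs, epsilon):
        # Entropy estimate: average negative log-likelihood over the training set.
        entropy_hat = -np.mean(train_log_probs)
        # Batch statistic: average negative log-likelihood of the inputs under test.
        batch_nll = -np.mean(batch_log_probs)
        # Inputs are flagged as out-of-distribution if the batch statistic falls
        # outside the epsilon-band around the typical set.
        return abs(batch_nll - entropy_hat) > epsilon

The threshold epsilon can be calibrated on held-out in-distribution batches, for example via a bootstrap.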
Approximate Inference for Deep Latent Gaussian Mixtures
TLDR
The authors' deep latent Gaussian mixture model (DLGMM) generalizes previous work such as Factor Mixture Analysis and Deep Gaussian Mixtures to arbitrary differentiable inter-layer transformations, and the paper describes learning and inference not only for the traditional mixture model but also for Dirichlet process mixtures.
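The generative side of such a model can be sketched as: pick a mixture component, sample the latent code from that component's Gaussian, and push it through a neural decoder. A minimal ancestral-sampling sketch, assuming a generic decoder function (hypothetical, not the paper's architecture):

    import numpy as np

    def sample_dlgmm(weights, means, scales, decoder, rng=np.random.default_rng()):
        # 1. Choose a mixture component.
        k = rng.choice(len(weights), p=weights)
        # 2. Sample the latent code from that component's Gaussian.
        z = means[k] + scales[k] * rng.standard_normal(means[k].shape)
        # 3. Decode with an arbitrary differentiable transformation.
        return decoder(z)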
Hybrid Models with Deep and Invertible Features
TLDR
The hybrid model, despite the invertibility constraints, achieves similar accuracy to purely predictive models, and the generative component remains a good model of the input features despite the hybrid optimization objective.
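The hybrid objective trades off the two roles of the invertible feature extractor: its output feeds a predictive head, while the change-of-variables density of the same features scores the input. A minimal sketch of the combined loss, assuming hypothetical interfaces (a flow whose forward pass returns the features and log|det Jacobian|, and a classifier head with a log_prob method):

    def hybrid_loss(flow, classifier_head, base_log_prob, x, y, lam=1.0):
        # Invertible feature extractor: z = f(x), plus log|det df/dx|.
        z, log_det = flow.forward(x)
        # Generative term: log p(x) via the change of variables through f.
        log_px = base_log_prob(z) + log_det
        # Predictive term: log p(y | x) from a classifier on the invertible features z.
        log_py_given_x = classifier_head.log_prob(y, z)
        # Weighted hybrid objective, negated for minimization; lam scales the
        # generative component relative to the predictive one.
        return -(log_py_given_x + lam * log_px)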
Deep Generative Models with Stick-Breaking Priors
TLDR
It is experimentally demonstrated that DGMs with Dirichlet process priors learn highly discriminative latent representations that are well suited for semi-supervised settings and often outperform the popular Gaussian alternative.
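The stick-breaking construction that replaces the Gaussian prior repeatedly breaks off Beta-distributed fractions of the remaining "stick", yielding latent weights that sum to at most one and whose effective dimensionality adapts to the data. A minimal sketch of a truncated stick-breaking sample (the paper uses a Kumaraswamy reparameterization for gradient-based training; plain Beta draws are shown here for clarity):

    import numpy as np

    def stick_breaking_sample(alpha, truncation, rng=np.random.default_rng()):
        # Break off Beta(1, alpha) fractions of whatever stick length remains.
        v = rng.beta(1.0, alpha, size=truncation)
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
        pi = v * remaining
        # Weights are non-negative and sum to at most one; later weights shrink
        # quickly, giving an adaptively sized latent representation.
        return pi

    print(stick_breaking_sample(alpha=5.0, truncation=10))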
Bayesian Batch Active Learning as Sparse Subset Approximation
TLDR
A novel Bayesian batch active learning approach that mitigates the shortcomings of standard greedy procedures for large-scale regression and classification tasks; the authors derive interpretable closed-form solutions akin to existing active learning procedures for linear models and generalize to arbitrary models using random projections.
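The batch-construction idea can be caricatured as sparse approximation: represent each pool point by a feature vector (the paper uses random projections of expected log-likelihoods) and pick points whose sum best approximates the sum over the whole pool. The greedy sketch below is a deliberate simplification of the paper's Frank-Wolfe procedure, with feats standing in for those projected features (hypothetical input):

    import numpy as np

    def select_batch(feats, batch_size):
        # Greedily build a sparse approximation of the full-pool sum.
        target = feats.sum(axis=0)           # quantity the batch should approximate
        chosen, residual = [], target.copy()
        for _ in range(batch_size):
            scores = feats @ residual        # alignment of each point with what is still missing
            scores[chosen] = -np.inf         # do not pick the same point twice
            i = int(np.argmax(scores))
            chosen.append(i)
            residual -= feats[i]
        return chosen

    # usage: feats could be random projections of per-point expected log-likelihoods
    feats = np.random.randn(500, 64)
    print(select_batch(feats, batch_size=10))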
On Priors for Bayesian Neural Networks
TLDR
This dissertation aims to help the reader navigate the landscape of neural network priors: it surveys existing work on priors for neural networks, isolates key themes such as the move towards heavy-tailed priors, and describes how to give Bayesian neural networks an adaptive width by placing stick-breaking priors on their latent representations.