Efficient Bayesian active learning and matrix modelling

@phdthesis{Houlsby2014EfficientBA,
  title={Efficient Bayesian active learning and matrix modelling},
  author={Neil Houlsby},
  school={University of Cambridge},
  year={2014}
}
With the advent of the Internet and the growth of storage capabilities, large collections of unlabelled data are now available. However, collecting supervised labels can be costly. Active learning addresses this by sequentially selecting only the most useful data in light of the information collected so far. The online nature of such algorithms often necessitates efficient computations. Thus, we present a framework for information-theoretic Bayesian active learning, named Bayesian Active Learning…
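The information-theoretic criterion described in the abstract selects the point whose label would be most informative about the model parameters, i.e. the point maximizing the mutual information H[y|x, D] − E_θ[H[y|x, θ]]. A minimal sketch of that score for binary labels, assuming an ensemble of posterior samples stands in for the parameter posterior (the `ensemble_probs` input and the ensemble approximation are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

def bald_scores(ensemble_probs):
    """Mutual information between a binary label and the model parameters.

    ensemble_probs: shape (n_samples, n_points), holding p(y=1 | x, theta_s)
    for each posterior sample theta_s and each candidate point x.
    Returns one score per candidate; larger means more informative.
    """
    eps = 1e-12

    def entropy(p):
        return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

    mean_p = ensemble_probs.mean(axis=0)                     # p(y=1 | x, D)
    marginal_entropy = entropy(mean_p)                       # H[y | x, D]
    expected_entropy = entropy(ensemble_probs).mean(axis=0)  # E_theta H[y | x, theta]
    return marginal_entropy - expected_entropy               # mutual information

# Pick the next query: the candidate where the samples disagree most.
probs = np.array([[0.9, 0.5, 0.1],
                  [0.1, 0.5, 0.1]])
next_query = int(np.argmax(bald_scores(probs)))
```

Note that the score is high only where the posterior samples *disagree*: the first candidate (predictions 0.9 vs 0.1) scores highest, while a point where every sample is uncertain in the same way (0.5 vs 0.5) scores zero.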
Discriminative learning from partially annotated examples
TLDR
This thesis proves the existence of a convex classification-calibrated surrogate loss for learning from partially annotated examples, and defines the concept of a surrogate classification-calibrated partial loss, the minimization of which guarantees that learning is statistically consistent under fairly general conditions on the data-generating process.
Estimating uncertainty in deep learning for reporting confidence to clinicians in medical image segmentation and diseases detection
TLDR
This article proposes an uncertainty estimation framework, called MC-DropWeights, to approximate Bayesian inference in DL by imposing a Bernoulli distribution on the incoming or outgoing weights of the model, including neurons, and demonstrates that this method produces an equally good or better result in both quantified uncertainty estimation and quality of uncertainty estimates than approximate Bayesian neural networks in practice.
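The idea of imposing a Bernoulli distribution on weights can be illustrated with a toy Monte Carlo forward pass: sample several Bernoulli masks over a weight matrix and report the spread of the resulting predictions as uncertainty. A minimal sketch for a single linear layer (the layer shape, keep probability, and sample count are illustrative assumptions, not the article's architecture):

```python
import numpy as np

def mc_dropweights_predict(x, W, b, keep_prob=0.9, n_samples=100, rng=None):
    """Toy MC-DropWeights-style forward pass for one linear layer.

    Each Monte Carlo sample multiplies the weights elementwise by a
    Bernoulli(keep_prob) mask; the mean and standard deviation over
    samples give a prediction and an uncertainty estimate.
    """
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(n_samples):
        mask = rng.binomial(1, keep_prob, size=W.shape)
        preds.append((W * mask) @ x + b)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

x = np.ones(3)
W = np.array([[0.5, -0.2, 0.1]])
b = np.zeros(1)
mean, std = mc_dropweights_predict(x, W, b)
```

The mean prediction is shrunk toward zero by the keep probability (as with dropout), and inputs whose prediction depends on large individual weights receive correspondingly larger uncertainty.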
Estimating Uncertainty and Interpretability in Deep Learning for Coronavirus (COVID-19) Detection
TLDR
This paper investigates how drop-weights-based Bayesian Convolutional Neural Networks (BCNN) can estimate uncertainty in a Deep Learning solution to improve the diagnostic performance of the human-machine team, using a publicly available COVID-19 chest X-ray dataset, and shows that the uncertainty in prediction correlates highly with the accuracy of prediction.
Computer Vision for COVID-19 Control: A Survey
TLDR
This survey paper provides a preliminary review of the available literature on computer vision efforts against the COVID-19 pandemic, collects information about available research resources, and indicates future research directions.
COVID-19 Control by Computer Vision Approaches: A Survey
TLDR
A preliminary review of the literature on research community efforts against the COVID-19 pandemic is presented to make it possible for computer vision researchers to find existing and future research directions.

References

Showing 1–10 of 240 references
Efficient active learning with generalized linear models
TLDR
An efficient algorithm is presented for choosing the optimal query when the output labels are related to the inputs by a generalized linear model (GLM), based on a Laplace approximation of the posterior distribution of the GLM's parameters.
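The Laplace approximation mentioned above fits the MAP weights and approximates the posterior as a Gaussian whose covariance is the inverse Hessian at the mode. A minimal sketch for logistic regression, with a simplified query rule, predictive variance under the Laplace posterior, standing in for the paper's optimal-query criterion (the toy data and scoring rule are illustrative assumptions):

```python
import numpy as np

def laplace_logistic(X, y, n_iter=50, prior_prec=1.0):
    """Laplace approximation for Bayesian logistic regression.

    Finds the MAP weights by Newton's method under a Gaussian prior,
    then returns the MAP estimate and the Gaussian posterior covariance
    (inverse Hessian of the negative log posterior at the MAP).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) + prior_prec * w
        H = X.T @ np.diag(p * (1 - p)) @ X + prior_prec * np.eye(d)
        w -= np.linalg.solve(H, grad)
    p = 1.0 / (1.0 + np.exp(-X @ w))
    H = X.T @ np.diag(p * (1 - p)) @ X + prior_prec * np.eye(d)
    return w, np.linalg.inv(H)

def pick_query(candidates, cov):
    """Score each candidate by its predictive variance x' Sigma x and
    return the index of the most uncertain one."""
    scores = np.einsum('ij,jk,ik->i', candidates, cov, candidates)
    return int(np.argmax(scores))

X = np.array([[1., 0.], [0., 1.], [1., 1.], [-1., 0.]])
y = np.array([1., 0., 1., 0.])
w_map, cov = laplace_logistic(X, y)
best = pick_query(np.array([[0.1, 0.1], [5.0, 5.0]]), cov)
```

Because the posterior covariance is positive definite, candidates far from the data (here `[5, 5]`) receive large predictive variance and are queried first.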
SPARSE GAUSSIAN PROCESSES FOR LARGE-SCALE MACHINE LEARNING
TLDR
This thesis presents several novel sparse GP models that compare favorably with SPGP, both in terms of predictive performance and error bar quality, and provides two broad classes of models: Marginalized Networks (MNs) and Inter-Domain GPs (IDGPs).
Variational Bayesian learning of directed graphical models with hidden variables
A key problem in statistics and machine learning is inferring suitable structure of a model given some observed data. A Bayesian approach to model comparison makes use of the marginal likelihood of…
Fast Sparse Gaussian Process Methods: The Informative Vector Machine
TLDR
A framework for sparse Gaussian process (GP) methods is presented which uses forward selection with criteria based on information-theoretic principles, allows for Bayesian model selection, and is less complex to implement.
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR
This work describes new algorithms that take into account the variable cost of learning-algorithm experiments and can leverage the presence of multiple cores for parallel experimentation, and shows that the proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
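Acquisition functions in this line of work are typically closed-form under a Gaussian predictive distribution; expected improvement is the standard example. A minimal sketch of EI for maximization, given a surrogate's predictive mean and standard deviation (this is the generic textbook formula, not the paper's cost-aware or parallel variants):

```python
import math

def expected_improvement(mu, sigma, best):
    """Expected improvement of a candidate over the incumbent `best`,
    for maximization: EI = (mu - best) * Phi(z) + sigma * phi(z),
    with z = (mu - best) / sigma."""
    if sigma <= 0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # phi(z)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # Phi(z)
    return (mu - best) * cdf + sigma * pdf
```

EI balances exploitation (high mean) against exploration (high variance): it grows with both `mu` and `sigma`, and falls to zero where the surrogate is certain the candidate cannot beat the incumbent.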
Inferring Parameters and Structure of Latent Variable Models by Variational Bayes
  • H. Attias
  • Computer Science, Mathematics
  • UAI
  • 1999
TLDR
The Variational Bayes framework is presented, which approximates full posterior distributions over model parameters and structures, as well as latent variables, in an analytical manner without resorting to sampling methods, and can be applied to a large class of models in several domains.
Online Variational Inference for the Hierarchical Dirichlet Process
TLDR
This work proposes an online variational inference algorithm for the HDP, an algorithm that is easily applicable to massive and streaming data, and lets us analyze much larger data sets.
Active Learning
The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose…
Cold-start Active Learning with Robust Ordinal Matrix Factorization
TLDR
This work presents a new matrix factorization model for rating data, a corresponding active learning strategy to address the cold-start problem, and a computationally efficient framework for Bayesian active learning with this type of complex probabilistic model.
Probabilistic Matrix Factorization with Non-random Missing Data
TLDR
A probabilistic matrix factorization model for collaborative filtering that learns from data that is missing not at random (MNAR) to obtain improved performance over state-of-the-art methods when predicting the ratings and when modeling the data observation process.