Corpus ID: 229331793

High Dimensional Level Set Estimation with Bayesian Neural Network

@inproceedings{Ha2021HighDL,
  title={High Dimensional Level Set Estimation with Bayesian Neural Network},
  author={Huong Ha and Sunil Gupta and Santu Rana and Svetha Venkatesh},
  booktitle={AAAI},
  year={2021}
}
Level Set Estimation (LSE) is an important problem with applications in various fields such as material design, biotechnology, and machine operational testing. Existing techniques suffer from a scalability issue: they do not work well with high-dimensional inputs. This paper proposes novel methods to solve high-dimensional LSE problems using Bayesian Neural Networks. In particular, we consider two types of LSE problems: (1) the explicit LSE problem, where the threshold level…
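The abstract is truncated above, but the LSE setting it describes is simple to state: given a black-box function f and a threshold h, classify each candidate point into the superlevel set {x : f(x) > h} or the sublevel set using a Bayesian model's predictive uncertainty. The sketch below (the function name classify_level_set and the tolerance eps are our illustrative choices, not the paper's) shows how posterior samples, e.g. drawn from a Bayesian neural network, can drive that classification.

```python
import numpy as np

def classify_level_set(post_samples, h, eps=0.05):
    """Classify candidate points against threshold h using posterior samples.

    post_samples: (n_mc, n_points) Monte Carlo draws of f at each candidate,
    e.g. from a Bayesian neural network. A point joins the superlevel set
    when P(f(x) > h) >= 1 - eps, the sublevel set when P(f(x) > h) <= eps,
    and otherwise stays "unknown" (a candidate for the next query).
    """
    p_above = (post_samples > h).mean(axis=0)       # P(f(x) > h) per point
    labels = np.full(post_samples.shape[1], "unknown", dtype=object)
    labels[p_above >= 1.0 - eps] = "super"          # confidently above h
    labels[p_above <= eps] = "sub"                  # confidently below h
    return labels, p_above

# Toy usage: 1000 posterior draws at 500 candidate points, threshold 0.7.
rng = np.random.default_rng(0)
samples = rng.normal(rng.uniform(0, 1, 500), 0.1, size=(1000, 500))
labels, p_above = classify_level_set(samples, h=0.7)
```

For the implicit variant the abstract mentions, h itself would be re-estimated each round, e.g. as a fixed fraction of the current posterior estimate of max f.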


References

Showing 1-10 of 29 references.
Active Learning for Level Set Estimation
TLDR: This work proposes LSE, an algorithm that guides both sampling and classification based on GP-derived confidence bounds, and extends LSE and its theory to two more natural settings: (1) where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches.
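A rough sketch of the confidence-bound rule this summary describes (the scaling parameter beta and the 1-D toy data are our assumptions; the paper's exact rule and theory are more refined):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Fit a GP to the observations gathered so far (toy 1-D data).
rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, (10, 1))
y_train = np.sin(6 * X_train[:, 0]) + 0.1 * rng.normal(size=10)
gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2).fit(X_train, y_train)

# GP-derived confidence bounds on a candidate grid.
X_cand = np.linspace(0, 1, 200).reshape(-1, 1)
mu, sd = gp.predict(X_cand, return_std=True)
beta = 3.0                          # confidence scaling (our choice)
lo, hi = mu - beta * sd, mu + beta * sd

h = 0.5                             # explicit threshold level
above = lo > h                      # confidently in the superlevel set
below = hi < h                      # confidently in the sublevel set

# Query where the interval straddles h by the largest margin (ambiguity rule).
ambiguity = np.minimum(hi - h, h - lo)
x_next = X_cand[np.argmax(ambiguity)]
```

Points whose entire confidence interval lies above or below h are classified once and for all; sampling concentrates on the points whose intervals still straddle the threshold.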
Multiscale Gaussian Process Level Set Estimation
TLDR: This approach improves on the existing technique of bounding the information gain by the maximum information gain, yielding an algorithm with a low-complexity implementation whose computational cost is significantly smaller than that of existing algorithms for higher-dimensional search spaces.
Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation
TLDR: A new algorithm, truncated variance reduction (TruVaR), treats Bayesian optimization and level-set estimation with Gaussian processes in a unified fashion, and is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms.
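A simplified, hedged rendition of the truncated-variance idea (the actual algorithm's target-set bookkeeping and batch selection are omitted; the name truvar_acquisition and the parameter eta follow our own naming): for each candidate, score the one-step drop in total posterior variance, truncated below at eta squared, that observing it would produce.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def truvar_acquisition(gp, X_train, y_train, X_cand, eta):
    """One-step truncated-variance-reduction score for each candidate.

    GP posterior variance does not depend on the observed value, so a
    dummy y (the posterior mean) stands in for the hypothetical observation.
    """
    mu, sd = gp.predict(X_cand, return_std=True)
    total_before = np.maximum(sd ** 2, eta ** 2).sum()
    scores = np.empty(len(X_cand))
    for i, x in enumerate(X_cand):
        gp_plus = GaussianProcessRegressor(
            kernel=gp.kernel_, alpha=1e-2, optimizer=None  # keep hyperparams fixed
        ).fit(np.vstack([X_train, x]), np.append(y_train, mu[i]))
        _, sd_plus = gp_plus.predict(X_cand, return_std=True)
        scores[i] = total_before - np.maximum(sd_plus ** 2, eta ** 2).sum()
    return scores
```

The truncation at eta stops the algorithm from wasting queries on points whose uncertainty is already below the accuracy that matters for classification.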
Bayesian Experimental Design for Finding Reliable Level Set Under Input Uncertainty
TLDR: An active learning method is proposed to solve the IU-rLSE problem efficiently; its accuracy and convergence are analyzed theoretically, and its empirical performance is illustrated through numerical experiments on artificial and real data.
Active learning for level set estimation under cost-dependent input uncertainty
TLDR: This paper proposes a new algorithm for level set estimation (LSE) under cost-dependent input uncertainty with a theoretical convergence guarantee, and demonstrates its effectiveness on synthetic and real datasets.
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR: This work describes new algorithms that account for the variable cost of learning-algorithm experiments and that can leverage multiple cores for parallel experimentation, and shows that they improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
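The cost-aware idea is easy to illustrate. Below is a minimal sketch of closed-form expected improvement for minimization, plus a "per second" variant that divides by a modeled experiment cost (the function names and the exploration offset xi are our choices; the paper gives the full treatment):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, y_best, xi=0.01):
    """Closed-form EI for minimization under a Gaussian posterior."""
    sd = np.maximum(sd, 1e-12)          # guard against zero predictive std
    z = (y_best - mu - xi) / sd
    return (y_best - mu - xi) * norm.cdf(z) + sd * norm.pdf(z)

def ei_per_second(mu, sd, y_best, expected_cost):
    """Cost-aware variant: improvement per unit of predicted experiment time."""
    return expected_improvement(mu, sd, y_best) / expected_cost
```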
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR: A new theoretical framework casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
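A minimal PyTorch sketch of the resulting recipe, commonly called MC dropout (the architecture, dropout rate p, and sample count n_mc are illustrative assumptions): keep dropout active at prediction time and average many stochastic forward passes.

```python
import torch
import torch.nn as nn

class DropoutMLP(nn.Module):
    """Small MLP with dropout, kept stochastic at test time for MC dropout."""
    def __init__(self, d_in, d_hidden=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_mc=100):
    """Approximate the posterior predictive by sampling with dropout enabled."""
    model.train()   # keeps dropout stochastic; no weights are updated here
    samples = torch.stack([model(x).squeeze(-1) for _ in range(n_mc)])
    return samples.mean(0), samples.std(0)   # predictive mean and uncertainty
```

The spread of the sampled outputs is the uncertainty signal that acquisition rules like the LSE classification above can consume.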
A Practical Bayesian Framework for Backpropagation Networks
D. J. C. MacKay. Neural Computation, 1992.
TLDR: A quantitative and practical Bayesian framework is described for learning mappings in feedforward networks that automatically embodies "Occam's razor," penalizing overflexible and overcomplex models.
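In this framework, model comparison rests on the evidence, which is evaluated with a Laplace (Gaussian) approximation around the most probable weights; a standard form of that approximation, in our notation rather than the paper's, is:

```latex
\log p(\mathcal{D} \mid \mathcal{H})
\;\approx\;
\log p(\mathcal{D} \mid \mathbf{w}_{\mathrm{MP}}, \mathcal{H})
+ \log p(\mathbf{w}_{\mathrm{MP}} \mid \mathcal{H})
+ \tfrac{k}{2} \log 2\pi
- \tfrac{1}{2} \log \det \mathbf{A},
\qquad
\mathbf{A} = -\nabla\nabla \log p(\mathbf{w} \mid \mathcal{D}, \mathcal{H}) \big|_{\mathbf{w}_{\mathrm{MP}}},
```

where k is the number of weights. The terms beyond the data fit form the "Occam factor," which shrinks for overflexible models and thus implements the razor the summary mentions.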
Robust Super-Level Set Estimation using Gaussian Processes
TLDR: This paper focuses on determining as large a region as possible where a function exceeds a given threshold with high probability, and proposes maximizing the expected volume of the domain identified as above the threshold as predicted by a Gaussian process, robustified by a variance term.
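A hedged paraphrase of that objective as a one-line score over a discretized domain (gamma, cell_volume, and the function name are our assumptions, not the authors' exact formulation):

```python
import numpy as np
from scipy.stats import norm

def robust_expected_volume(mu, sd, h, gamma=1.0, cell_volume=1.0):
    """Expected superlevel volume minus a variance penalty (paraphrase).

    mu, sd: GP posterior mean/std on a grid of the domain. Each grid cell
    contributes its probability of exceeding h; the variance term
    discourages claiming cells the model is still unsure about.
    """
    p_above = norm.sf((h - mu) / np.maximum(sd, 1e-12))   # P(f(x) > h)
    return cell_volume * (p_above.sum() - gamma * sd.sum())
```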
DeepPerf: Performance Prediction for Configurable Software with Deep Sparse Neural Network
Huong Ha, Hongyu Zhang. 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), 2019.
TLDR: This paper proposes a novel approach to modeling highly configurable software systems using a deep feedforward neural network (FNN) combined with a sparsity regularization technique (e.g., L1 regularization), which predicts the performance of highly configurable software systems with binary and/or numeric configuration options at much higher accuracy than state-of-the-art approaches.
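A minimal sketch of the core ingredient, an L1-penalized feedforward regressor in PyTorch (layer widths, the penalty weight lam, and the training-step shape are illustrative assumptions; DeepPerf's actual architecture and hyperparameter search differ):

```python
import torch
import torch.nn as nn

# Feedforward performance model: 20 configuration options -> 1 metric.
model = nn.Sequential(
    nn.Linear(20, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-3                                   # L1 strength; tuned in practice

def training_step(x, y):
    opt.zero_grad()
    mse = nn.functional.mse_loss(model(x).squeeze(-1), y)
    l1 = sum(w.abs().sum() for w in model.parameters())  # sparsity penalty
    loss = mse + lam * l1
    loss.backward()
    opt.step()
    return loss.item()
```

The L1 term drives many weights toward zero, which is what lets the network generalize from the small configuration samples typical of this setting.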