Semi-supervised deep learning for high-dimensional uncertainty quantification

@article{Wang2020SemisupervisedDL,
  title   = {Semi-supervised deep learning for high-dimensional uncertainty quantification},
  author  = {Zequn Wang and Mingyang Li},
  journal = {arXiv preprint arXiv:2006.01010},
  year    = {2020}
}
Conventional uncertainty quantification methods usually lack the capability to deal with high-dimensional problems due to the curse of dimensionality. This paper presents a semi-supervised learning framework for dimension reduction and reliability analysis. An autoencoder is first adopted to map the high-dimensional input space into a low-dimensional latent space that contains a distinguishable failure surface. A deep feedforward neural network (DFN) is then utilized to learn the mapping…
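The two-stage idea in the abstract (unsupervised dimension reduction, then a supervised surrogate in the latent space, then Monte Carlo reliability analysis) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a linear encoder (truncated SVD, i.e. PCA) stands in for the autoencoder, a 1-nearest-neighbour classifier stands in for the DFN, and the performance function, dimensions, and sample sizes are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 50-dimensional inputs whose limit state actually
# depends on only the first two (high-variance) coordinates.
d, n, k = 50, 500, 2
X = rng.standard_normal((n, d))
X[:, 0] *= 3.0                          # make the informative directions
X[:, 1] *= 2.0                          # dominate the variance
g = X[:, 0] + 0.5 * X[:, 1] - 1.0       # hidden performance function (toy)
fail = (g > 0).astype(int)              # 1 = failure (toy convention)

# Stage 1 -- unsupervised dimension reduction (PCA stand-in for the
# autoencoder): project onto the top-k principal directions.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
encode = lambda A: (A - mu) @ Vt[:k].T
Z = encode(X)

# Stage 2 -- supervised surrogate in the latent space (1-NN stand-in
# for the deep feedforward network).
def predict(z):
    return fail[np.argmin(np.linalg.norm(Z - z, axis=1))]

# Reliability analysis: classify fresh Monte Carlo samples through the
# encoder and the latent-space surrogate.
X_mc = rng.standard_normal((2000, d))
X_mc[:, 0] *= 3.0
X_mc[:, 1] *= 2.0
pf = np.mean([predict(z) for z in encode(X_mc)])
print(f"estimated failure probability: {pf:.3f}")
```

The point of the sketch is the division of labour: the limit state only needs to be distinguishable in the k-dimensional latent space, so the surrogate never has to fight the full 50-dimensional curse of dimensionality.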

References

Showing 1–10 of 41 references
Bayesian Deep Convolutional Encoder-Decoder Networks for Surrogate Modeling and Uncertainty Quantification
This approach achieves state-of-the-art predictive accuracy and uncertainty quantification compared with other Bayesian neural network approaches, as well as with Gaussian processes and ensemble methods, even when the training data set is relatively small.
Learning Deep CNN Denoiser Prior for Image Restoration
Experimental results demonstrate that the learned set of denoisers not only achieves promising Gaussian denoising results but can also be used as a prior to deliver good performance on various low-level vision applications.
High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning
A hybrid model in which an unsupervised DBN is trained to extract generic underlying features and a one-class SVM is trained on the features learned by the DBN; it delivers accuracy comparable to a deep autoencoder while being scalable and computationally efficient.
Rare-event probability estimation with adaptive support vector regression surrogates
The key idea is to iteratively construct surrogates which quickly explore the safe domain and focus on the limit-state surface in its final stage by minimizing an estimation of the leave-one-out error with the cross-entropy method.
Accelerated subset simulation with neural networks for reliability analysis
It is demonstrated that training a sufficiently accurate NN metamodel in the context of subset simulation leads to more robust estimates of the probability of failure, in terms of both the mean and the variance of the estimator.
AK-MCS: An active learning reliability method combining Kriging and Monte Carlo Simulation
An iterative approach based on Monte Carlo simulation and a Kriging metamodel that assesses structural reliability more efficiently; AK-MCS is shown to yield a very accurate probability of failure while requiring only a small number of calls to the performance function.
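The active-learning loop this reference describes can be sketched with a small self-contained Kriging model: evaluate the performance function on a few points, then repeatedly add the Monte Carlo point whose predicted sign is most uncertain (smallest U = |mu|/sigma). Everything below is a hedged illustration, not the AK-MCS reference implementation: the toy performance function, the fixed kernel length-scale, the initial design size, the iteration budget, and the U >= 2 stopping threshold are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    """Toy performance function; failure when g(x) < 0 (illustrative)."""
    return x[:, 0] ** 2 + x[:, 1] + 1.5

def kern(A, B, ls=1.5):
    """Squared-exponential covariance with a fixed length-scale."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def krige(Xtr, ytr, Xq):
    """Simple Kriging predictor: posterior mean and std at query points."""
    ym = ytr.mean()
    K = kern(Xtr, Xtr) + 1e-8 * np.eye(len(Xtr))   # small nugget
    Kinv = np.linalg.inv(K)
    Ks = kern(Xq, Xtr)
    mu = ym + Ks @ Kinv @ (ytr - ym)
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Monte Carlo population and a small random initial design.
X_mc = rng.standard_normal((4000, 2))
idx = rng.choice(len(X_mc), size=12, replace=False)
X_tr, y_tr = X_mc[idx], g(X_mc[idx])

# Active learning: enrich the design where the sign of the prediction is
# most uncertain, until U >= 2 everywhere or the budget is spent.
for _ in range(40):
    mu, sigma = krige(X_tr, y_tr, X_mc)
    U = np.abs(mu) / sigma
    if U.min() >= 2.0:
        break
    best = np.argmin(U)
    X_tr = np.vstack([X_tr, X_mc[best]])
    y_tr = np.append(y_tr, g(X_mc[best:best + 1]))

mu, _ = krige(X_tr, y_tr, X_mc)
pf = np.mean(mu < 0)                    # surrogate-based failure estimate
print(f"calls to g: {len(y_tr)}, estimated failure probability: {pf:.4f}")
```

The efficiency claim of such methods rests on the last line: the 4000-point Monte Carlo population is classified by the surrogate, so the expensive performance function is only called a few dozen times, mostly near the limit state.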
Alternative Kriging-HDMR optimization method with expected improvement sampling strategy
The purpose of this study is to overcome the error in the standard-deviation estimate derived from the expected improvement (EI) criterion. Compared with other popular methods, a quantitative…
Dynamic reliability analysis using the extended support vector regression (X-SVR)
A new machine-learning-based metamodel, the extended support vector regression (X-SVR), is proposed for the reliability analysis of dynamic systems, utilizing first-passage theory to approximate the relationship between system inputs and outputs.
Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks
A gradient descent algorithm is derived that makes it possible to learn high-dimensional linear filters operating in sparsely populated feature spaces, building on the permutohedral lattice construction for efficient filtering.
LIF: A new Kriging based learning function and its application to structural reliability analysis
Results show that LIF and the new method proposed in this research are very efficient when dealing with nonlinear performance functions, small failure probabilities, complicated limit states, and high-dimensional engineering problems.