# Uncertainty Propagation in Deep Neural Network Using Active Subspace

    @article{Ji2019UncertaintyPI,
      title   = {Uncertainty Propagation in Deep Neural Network Using Active Subspace},
      author  = {Weiqi Ji and Zhuyin Ren and Chung K. Law},
      journal = {ArXiv},
      year    = {2019},
      volume  = {abs/1903.03989}
    }

The inputs to a deep neural network (DNN) drawn from real-world data usually come with uncertainties. Yet it is challenging to propagate the uncertainty in the input features to the DNN predictions at low computational cost. This work employs a gradient-based subspace method and a response surface technique to accelerate uncertainty propagation in DNNs. Specifically, the active subspace method is employed to identify the most important subspace in the input features using the gradient of the DNN…
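The core of the active subspace step described above is an eigendecomposition of the covariance of output gradients with respect to the inputs. A minimal NumPy sketch follows, with a toy differentiable surrogate standing in for the trained DNN; the function `f`, the direction `w`, and the sample sizes are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy stand-in for a trained DNN: f(x) varies mostly along one direction w.
rng = np.random.default_rng(0)
w = np.array([1.0, 0.5, 0.0, 0.0])                        # dominant input direction
f = lambda X: np.tanh(X @ w)                              # surrogate "network" output
grad_f = lambda X: (1 - np.tanh(X @ w) ** 2)[:, None] * w # analytic input gradient

# 1. Sample inputs from the uncertain input distribution.
X = rng.normal(size=(2000, 4))

# 2. Form the gradient covariance matrix C = E[grad_f grad_f^T].
G = grad_f(X)                                             # (N, d) gradient samples
C = G.T @ G / len(X)

# 3. Eigendecompose; a large eigenvalue gap reveals the active subspace.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 4. Project inputs onto the top eigenvector(s); a cheap response surface
#    fit in this low-dimensional coordinate then propagates uncertainty fast.
y_active = X @ eigvecs[:, :1]
```

Here the gradients are all parallel to `w`, so the leading eigenvector recovers `w` (up to sign) and a one-dimensional response surface suffices; for a real DNN the gradients would come from backpropagation.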

#### One Citation

Potential, challenges and future directions for deep learning in prognostics and health management applications

- Computer Science, Engineering
- Eng. Appl. Artif. Intell.
- 2020

A thorough evaluation of the current developments, drivers, challenges, potential solutions and future research needs in the field of deep learning applied to Prognostics and Health Management applications is provided.

#### References

Showing 1–10 of 40 references.

Analytic Expressions for Probabilistic Moments of PL-DNN with Gaussian Input

- Computer Science
- 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018

This paper derives exact analytic expressions for the first and second moments of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to general Gaussian input and shows how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks.

Explaining and Harnessing Adversarial Examples

- Computer Science, Mathematics
- ICLR
- 2015

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.

Uncertainty propagation through deep neural networks

- Computer Science
- INTERSPEECH
- 2015

The propagation of observation uncertainties through the layers of a DNN-based acoustic model is studied and the expected value of the acoustic score distribution is used for decoding, which is shown to further improve the ASR accuracy on the CHiME database, relative to a highly optimized DNN baseline.

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

- Mathematics, Computer Science
- ICML
- 2016

A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
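In practice, the framework above amounts to keeping dropout active at prediction time and reading the spread of repeated stochastic forward passes as a model-uncertainty estimate. A minimal NumPy sketch of this "MC dropout" idea follows; the network weights, input, and dropout rate are random placeholders, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder weights for a tiny two-layer network.
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) < (1 - p)   # dropout stays active
    h = h * mask / (1 - p)                 # inverted-dropout scaling
    return h @ W2

x = np.array([[0.3, -1.2, 0.7]])
samples = np.array([forward(x)[0, 0] for _ in range(500)])
mean, std = samples.mean(), samples.std()  # predictive mean and uncertainty
```

The sample mean approximates the predictive mean and the sample standard deviation serves as the epistemic-uncertainty estimate, at the cost of a few hundred extra forward passes.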

Densely Connected Convolutional Networks

- Computer Science
- 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017

The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.

Intriguing properties of neural networks

- Computer Science
- ICLR
- 2014

It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the higher layers of neural networks.

Deep Residual Learning for Image Recognition

- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

Learning Spatiotemporal Features with 3D Convolutional Networks

- Computer Science
- 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015

The learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks.

ImageNet classification with deep convolutional neural networks

- Computer Science
- Commun. ACM
- 2012

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.

Shared low-dimensional subspaces for propagating kinetic uncertainty to multiple outputs

- Computer Science
- 2018

A new method is introduced that can simultaneously approximate the marginal probability density functions of multiple outputs using a single low-dimensional shared subspace that can accurately reproduce the probability of ignition failure and the probability density of ignition crank angle conditioned on successful ignition.