Corpus ID: 73729142

Uncertainty Propagation in Deep Neural Network Using Active Subspace

@article{Ji2019UncertaintyPI,
  title={Uncertainty Propagation in Deep Neural Network Using Active Subspace},
  author={Weiqi Ji and Zhuyin Ren and Chung K. Law},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.03989}
}
The inputs to a deep neural network (DNN) from real-world data usually come with uncertainties. Yet, it is challenging to propagate the uncertainty in the input features to the DNN predictions at a low computational cost. This work employs a gradient-based subspace method and response surface techniques to accelerate uncertainty propagation in DNNs. Specifically, the active subspace method is employed to identify the most important subspace in the input features using the gradient of the DNN…
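As a rough illustration of that recipe, the sketch below builds a one-dimensional active subspace from Monte Carlo gradient samples of a toy DNN and fits a quadratic response surface over it. The model, the Gaussian input distribution, and the subspace dimension are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch of gradient-based active subspace construction for a DNN.
import torch
import numpy as np

torch.manual_seed(0)

d = 10                                   # input dimension (assumed)
model = torch.nn.Sequential(             # stand-in DNN f: R^d -> R
    torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

# 1. Sample inputs from the (assumed standard Gaussian) input uncertainty.
n = 2000
x = torch.randn(n, d, requires_grad=True)

# 2. Per-sample gradients of the scalar prediction w.r.t. the inputs.
y = model(x).sum()                       # sum() yields per-sample gradient rows
g = torch.autograd.grad(y, x)[0].detach().numpy()    # shape (n, d)

# 3. Active subspace: top eigenvectors of C = E[grad f grad f^T].
C = g.T @ g / n
eigval, eigvec = np.linalg.eigh(C)
W1 = eigvec[:, ::-1][:, :1]              # top eigenvector -> 1-D active variable

# 4. Cheap response surface over the active variable.
xs = x.detach().numpy()
z = xs @ W1                              # project inputs onto the subspace
f = model(x).detach().numpy().ravel()
coef = np.polyfit(z.ravel(), f, deg=2)   # quadratic surrogate f(x) ~ p(W1^T x)

# 5. Propagate input uncertainty through the surrogate at negligible cost.
z_mc = (np.random.randn(100_000, d) @ W1).ravel()
f_mc = np.polyval(coef, z_mc)
print("surrogate mean/std:", f_mc.mean(), f_mc.std())
```

Once the surrogate is fit, each additional uncertainty sample costs a polynomial evaluation rather than a full DNN forward pass, which is the source of the claimed speedup.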
1 Citation
Potential, challenges and future directions for deep learning in prognostics and health management applications
TLDR: A thorough evaluation of the current developments, drivers, challenges, potential solutions, and future research needs in the field of deep learning applied to prognostics and health management applications is provided.

References

Showing 1-10 of 40 references
Analytic Expressions for Probabilistic Moments of PL-DNN with Gaussian Input
TLDR: This paper derives exact analytic expressions for the first and second moments of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to general Gaussian input and shows how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks.
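For intuition, the sketch below works out the first-moment piece of such an expression: the closed-form mean of a ReLU applied to a Gaussian pre-activation, propagated through an Affine-ReLU-Affine stack and checked against Monte Carlo. The second moments in the cited paper involve bivariate Gaussian terms omitted here, and all shapes and weights are illustrative assumptions.

```python
# Hedged sketch: closed-form *first* moment of an Affine-ReLU-Affine network
# with Gaussian input (a worked instance, not the cited paper's code).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, h, o = 4, 8, 2                                            # assumed sizes
A, b = rng.standard_normal((h, d)), rng.standard_normal(h)   # first affine map
C, c = rng.standard_normal((o, h)), rng.standard_normal(o)   # second affine map
mu, Sigma = rng.standard_normal(d), np.eye(d)                # Gaussian input

# Pre-activation a = A x + b is Gaussian with mean m and covariance S.
m = A @ mu + b
s = np.sqrt(np.diag(A @ Sigma @ A.T))                        # per-unit std dev

# E[ReLU(a_i)] = m_i * Phi(m_i / s_i) + s_i * phi(m_i / s_i)
relu_mean = m * norm.cdf(m / s) + s * norm.pdf(m / s)
analytic = C @ relu_mean + c                                 # mean of the output

# Monte Carlo check of the closed form.
x = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = (np.maximum(x @ A.T + b, 0) @ C.T + c).mean(axis=0)
print(np.allclose(analytic, mc, atol=0.05))
```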
Explaining and Harnessing Adversarial Examples
TLDR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
Uncertainty propagation through deep neural networks
TLDR: The propagation of observation uncertainties through the layers of a DNN-based acoustic model is studied, and the expected value of the acoustic score distribution is used for decoding, which is shown to further improve ASR accuracy on the CHiME database relative to a highly optimized DNN baseline.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR: A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
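A minimal sketch of the resulting Monte Carlo dropout procedure: keep dropout stochastic at test time and read a predictive mean and an epistemic-uncertainty estimate off repeated forward passes. Model size, dropout rate, and sample count here are arbitrary assumptions.

```python
# Hedged sketch of Monte Carlo dropout for predictive uncertainty.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.1),
    torch.nn.Linear(64, 1))

model.train()                      # keep dropout stochastic during inference
x = torch.randn(1, 10)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

# Predictive mean and spread across the stochastic forward passes.
print("mean:", samples.mean().item(), "std:", samples.std().item())
```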
Densely Connected Convolutional Networks
TLDR: The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Intriguing properties of neural networks
TLDR: It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Deep Residual Learning for Image Recognition
TLDR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
Learning Spatiotemporal Features with 3D Convolutional Networks
TLDR: The learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks.
ImageNet classification with deep convolutional neural networks
TLDR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Shared low-dimensional subspaces for propagating kinetic uncertainty to multiple outputs
TLDR: A new method is introduced that can simultaneously approximate the marginal probability density functions of multiple outputs using a single low-dimensional shared subspace, and that can accurately reproduce the probability of ignition failure and the probability density of ignition crank angle conditioned on successful ignition.