Corpus ID: 211987285

Transformation Importance with Applications to Cosmology

@article{Singh2020TransformationIW,
  title={Transformation Importance with Applications to Cosmology},
  author={Chandan Singh and Wooseok Ha and F. Lanusse and Vanessa Boehm and Jia Liu and Bin Yu},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.01926}
}
Machine learning lies at the heart of new possibilities for scientific discovery, knowledge generation, and artificial intelligence. Realizing its potential benefits to these fields requires going beyond predictive accuracy and focusing on interpretability. In particular, many scientific problems require interpretations in a domain-specific interpretable feature space (e.g. the frequency domain), whereas attributions to the raw features (e.g. the pixel space) may be unintelligible or even misleading. To…
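The abstract truncates here, but the core idea, attributing a model's output to coefficients of a transformed input rather than to raw pixels, can be sketched concretely. The following is a minimal illustration, not the paper's exact algorithm: the input is reparameterized through a Fourier transform and a gradient-times-input rule is applied to the frequency coefficients. The network `model` and the choice of attribution rule are assumptions.

```python
# Minimal sketch of attributing importance in a transformed (frequency) space.
# Assumptions: `model` maps a real image batch to a scalar-per-example output
# (e.g. a regressed cosmological parameter); gradient-times-input is used as
# the local attribution rule, though another rule could be substituted.
import torch

def frequency_attribution(model, x):
    """Attribute a model's output to the Fourier coefficients of input x.

    x: real input image, shape (1, 1, H, W).
    Returns an (H, W) map of per-coefficient importance magnitudes.
    """
    # Reparameterize the input in the transformed space: s = FFT(x), x = IFFT(s).
    s = torch.fft.fft2(x).detach().requires_grad_(True)
    x_rec = torch.fft.ifft2(s).real    # map back to pixel space
    out = model(x_rec).sum()           # scalar output to explain
    out.backward()
    # Gradient-times-input in the transformed space, reduced to magnitudes.
    return (s.grad.conj() * s).abs().squeeze()
```

Band-level importances (e.g. per frequency annulus) can then be read off by summing this map over the corresponding coefficients.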

Citations

Adaptive wavelet distillation from neural networks through interpretations
TLDR: Adaptive wavelet distillation (AWD) is proposed, a method which aims to distill information from a trained neural network into a wavelet transform and yields a scientifically interpretable and concise model whose predictive performance is better than that of state-of-the-art neural networks.
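As a heavily simplified stand-in for the distillation idea (not the AWD algorithm itself, which adapts the wavelet), one can fit a sparse linear surrogate on fixed wavelet coefficients so that it mimics a trained network's predictions; the nonzero coefficients then indicate which scales and locations the surrogate relies on. `net_predict` and the 'db3' wavelet are illustrative assumptions.

```python
# Simplified distillation surrogate over fixed wavelet features (not AWD itself).
import numpy as np
import pywt
from sklearn.linear_model import Lasso

def wavelet_features(signals, wavelet="db3", level=3):
    """Flatten the multilevel DWT coefficients of each 1-D signal into one row.
    Assumes all signals have equal length."""
    return np.stack(
        [np.concatenate(pywt.wavedec(s, wavelet, level=level)) for s in signals]
    )

def distill(net_predict, signals, alpha=0.01):
    """Fit a sparse linear surrogate mapping wavelet coefficients to the
    teacher network's outputs; the L1 penalty keeps the model concise."""
    X = wavelet_features(signals)
    y = net_predict(signals)        # teacher predictions to mimic
    return Lasso(alpha=alpha).fit(X, y)
```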
Interpreting and improving deep-learning models with reality checks
Recent deep-learning models have achieved impressive predictive performance by learning complex functions of many variables, often at the cost of interpretability. This chapter covers recent work…
Matched sample selection with GANs for mitigating attribute confounding
TLDR: This work proposes a matching approach that selects a subset of images from the full dataset with balanced attribute distributions across protected attributes; it demonstrates the approach in the context of gender bias in multiple open-source facial-recognition classifiers and finds that bias persists after removing key confounders via matching.
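A toy version of the matching step may help make this concrete. The paper's approach uses GANs; the greedy nearest-neighbor rule over attribute vectors below is an illustrative simplification, and `attrs_a`/`attrs_b` are assumed per-group attribute matrices.

```python
# Toy attribute matching: pair each sample in group A with the closest unused
# sample in group B by non-protected attribute distance, so that the selected
# subset has balanced attribute distributions across the protected attribute.
import numpy as np

def matched_subset(attrs_a, attrs_b):
    """attrs_a, attrs_b: (n, d) attribute matrices for the two protected groups.
    Returns matched index pairs (i in A, j in B)."""
    available = np.ones(len(attrs_b), dtype=bool)
    pairs = []
    for i, a in enumerate(attrs_a):
        dists = np.linalg.norm(attrs_b - a, axis=1)
        dists[~available] = np.inf   # each B sample is matched at most once
        j = int(np.argmin(dists))
        if np.isfinite(dists[j]):
            pairs.append((i, j))
            available[j] = False
    return pairs
```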
Human-interpretable model explainability on high-dimensional data
TLDR: This work introduces a framework for human-interpretable explainability on high-dimensional data, consisting of two modules that adapt the Shapley paradigm for model-agnostic explainability to operate on latent features derived from a model's input features.
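One way to make the latent-Shapley idea concrete is a Monte Carlo permutation estimate over latent dimensions. This sketch is an assumption about the mechanics, not the paper's implementation; `encode`, `decode`, `model`, and `baseline_z` are assumed callables and values.

```python
# Monte Carlo Shapley values over latent features: switch each latent
# dimension from a baseline code to the actual code in random orders and
# average its marginal contribution to the model output.
import numpy as np

def latent_shapley(model, encode, decode, x, baseline_z, n_perm=100, seed=0):
    rng = np.random.default_rng(seed)
    z = encode(x)                    # latent representation of x
    phi = np.zeros_like(z)
    for _ in range(n_perm):
        z_cur = baseline_z.copy()
        prev = model(decode(z_cur))
        for i in rng.permutation(len(z)):
            z_cur[i] = z[i]          # reveal latent feature i
            cur = model(decode(z_cur))
            phi[i] += cur - prev     # marginal contribution of feature i
            prev = cur
    return phi / n_perm
```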

References

Showing 1-10 of 31 references
Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs
The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs…
Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of…
Constraining neutrino mass with tomographic weak lensing peak counts
Massive cosmic neutrinos change the structure formation history by suppressing perturbations on small scales. Weak lensing data from galaxy surveys probe the structure evolution and thereby can be…
Cosmological constraints with deep learning from KiDS-450 weak lensing maps
Convolutional neural networks (CNNs) have recently been demonstrated on synthetic data to improve upon the precision of cosmological inference. In particular, they have the potential to yield more…
Definitions, methods, and applications in interpretable machine learning
TLDR: This work defines interpretability in the context of machine learning, introduces the predictive, descriptive, relevant (PDR) framework for discussing interpretations, and proposes three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy.
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
TLDR: Disentangled Attribution Curves (DAC), a method that provides interpretations of tree-ensemble methods in the form of (multivariate) feature-importance curves, is introduced and validated on real data by showing that the curves can be used to increase the accuracy of logistic regression while maintaining interpretability.
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
TLDR: This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
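Grad-CAM itself is compact enough to sketch. A minimal PyTorch version follows; picking which convolutional layer to hook (`conv_layer`) is left to the user and is the only assumption beyond a standard classifier interface.

```python
# Minimal Grad-CAM: weight the chosen conv layer's activation maps by the
# spatial mean of the class-score gradients, sum over channels, then ReLU.
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, x, class_idx):
    acts, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        score = model(x)[0, class_idx]   # class score for one input image
        model.zero_grad()
        score.backward()
    finally:
        h1.remove(); h2.remove()
    a, g = acts[0], grads[0]                    # both (1, C, H, W)
    weights = g.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    cam = F.relu((weights * a).sum(dim=1))      # weighted channel sum
    return cam / (cam.max() + 1e-8)             # normalized (1, H, W) heatmap
```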
Hierarchical interpretations for neural network predictions
TLDR: This work introduces the use of hierarchical interpretations to explain DNN predictions through the proposed method, agglomerative contextual decomposition (ACD), and demonstrates that ACD enables users both to identify the more accurate of two DNNs and to better trust a DNN's outputs.
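The ACD algorithm builds on contextual decomposition; as a greatly simplified stand-in that conveys the hierarchical flavor, one can score contiguous token groups by occlusion and greedily merge the adjacent pair with the largest interaction gain. The occlusion scoring and the `predict` interface here are assumptions, not CD itself.

```python
# Simplified hierarchical interpretation via occlusion (not ACD's contextual
# decomposition): bottom-up, merge the adjacent pair of token groups whose
# joint importance most exceeds the sum of its parts.
import numpy as np

def occlusion_score(predict, tokens, group, mask="<unk>"):
    """Drop in the prediction when the group's tokens are masked out."""
    masked = [mask if i in group else t for i, t in enumerate(tokens)]
    return predict(tokens) - predict(masked)

def build_hierarchy(predict, tokens):
    groups = [frozenset([i]) for i in range(len(tokens))]
    levels = [list(groups)]
    while len(groups) > 1:
        gains = [
            occlusion_score(predict, tokens, groups[k] | groups[k + 1])
            - occlusion_score(predict, tokens, groups[k])
            - occlusion_score(predict, tokens, groups[k + 1])
            for k in range(len(groups) - 1)
        ]
        k = int(np.argmax(gains))    # strongest adjacent interaction
        groups = groups[:k] + [groups[k] | groups[k + 1]] + groups[k + 2:]
        levels.append(list(groups))
    return levels                    # coarser groupings at later levels
```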
Interpretable Machine Learning
Interpretable machine learning has become a popular research direction as deep neural networks (DNNs) have become more powerful and their applications more mainstream, yet DNNs remain difficult to…
Weak lensing cosmology with convolutional neural networks on noisy data
Weak gravitational lensing is one of the most promising cosmological probes of the late universe. Several large ongoing (DES, KiDS, HSC) and planned (LSST, EUCLID, WFIRST) astronomical surveys…