# Transformation Importance with Applications to Cosmology

```bibtex
@article{Singh2020TransformationIW,
  title   = {Transformation Importance with Applications to Cosmology},
  author  = {Chandan Singh and Wooseok Ha and F. Lanusse and Vanessa Boehm and Jia Liu and Bin Yu},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2003.01926}
}
```

Machine learning lies at the heart of new possibilities for scientific discovery, knowledge generation, and artificial intelligence. Its potential benefits to these fields require going beyond predictive accuracy and focusing on interpretability. In particular, many scientific problems require interpretations in a domain-specific interpretable feature space (e.g. the frequency domain), whereas attributions to the raw features (e.g. the pixel space) may be unintelligible or even misleading. To…
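The abstract's core idea, attributing a model's prediction to coefficients of a transformed feature space (e.g. the frequency domain) rather than to raw inputs, can be sketched as follows. This is a toy numpy illustration, not the paper's method: the linear model `f`, the choice of the real FFT, and gradient-times-input attribution are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
w = rng.normal(size=n)          # toy linear "model": f(x) = w @ x
x = rng.normal(size=n)          # one raw input (e.g. a pixel vector)

def f(v):
    return w @ v

# Reparametrize the input in the frequency domain: x = irfft(c).
# Attributions are then computed with respect to c, not x.
c = np.fft.rfft(x)

# Gradient of f with respect to each complex coefficient c_k,
# estimated by central finite differences on its real and imaginary parts.
eps = 1e-6
grad = np.zeros_like(c)
for k in range(c.size):
    for part in (1.0, 1j):
        dc = np.zeros_like(c)
        dc[k] = part * eps
        hi = f(np.fft.irfft(c + dc, n=n))
        lo = f(np.fft.irfft(c - dc, n=n))
        grad[k] += part * (hi - lo) / (2 * eps)

# Gradient-times-input importance per frequency band; since both f and
# the inverse FFT are linear, these contributions sum to f(x) exactly.
importance = (grad.conj() * c).real
print(importance)
```

For a real model one would replace the finite differences with automatic differentiation, but the reparametrization step (attribute to transform coefficients, reconstruct the input through the inverse transform) is the same.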

#### 6 Citations

Adaptive wavelet distillation from neural networks through interpretations

- Mathematics, Computer Science
- ArXiv
- 2021

Adaptive wavelet distillation (AWD) is proposed, a method that distills information from a trained neural network into a wavelet transform, yielding a scientifically interpretable and concise model whose predictive performance exceeds that of state-of-the-art neural networks.

Interpreting and improving deep-learning models with reality checks

- Computer Science, Mathematics
- ArXiv
- 2021

Recent deep-learning models have achieved impressive predictive performance by learning complex functions of many variables, often at the cost of interpretability. This chapter covers recent work…

Matched sample selection with GANs for mitigating attribute confounding

- Computer Science, Mathematics
- ArXiv
- 2021

This work proposes a matching approach that selects a subset of images from the full dataset with balanced attribute distributions across protected attributes, and demonstrates the work in the context of gender bias in multiple open-source facial-recognition classifiers and finds that bias persists after removing key confounders via matching.

Human-interpretable model explainability on high-dimensional data

- 2020

The importance of explainability in machine learning continues to grow, as both neural-network architectures and the data they model become increasingly complex. Unique challenges arise when a…

Human-interpretable model explainability on high-dimensional data

- Computer Science, Mathematics
- ArXiv
- 2020

This work introduces a framework for human-interpretable explainability on high-dimensional data, consisting of two modules, which adapt the Shapley paradigm for model-agnostic explainability to operate on latent features of a model's input features.
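The Shapley paradigm this entry adapts can be illustrated with a brute-force computation over a handful of latent features. This is a generic sketch, not the cited framework: the linear model `w`, the latent vector `z`, and the zero baseline for "missing" features are all made up for illustration.

```python
from itertools import combinations
from math import factorial

import numpy as np

# Toy model over 3 latent features; absent features are replaced by 0.
w = np.array([2.0, -1.0, 0.5])
z = np.array([1.0, 1.0, 1.0])   # latent representation of one input

def value(S):
    """Model output when only the features indexed by S are present."""
    m = np.zeros_like(z)
    m[list(S)] = z[list(S)]
    return w @ m

# Exact Shapley values: average each feature's marginal contribution
# over all subsets of the remaining features, with the usual weights.
n = len(z)
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for r in range(n):
        for S in combinations(others, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            phi[i] += weight * (value(S + (i,)) - value(S))

print(phi)  # for a linear model with a zero baseline, phi equals w * z
```

The brute-force loop is exponential in the number of features, which is exactly why such methods operate on a small set of latent features rather than on raw high-dimensional inputs.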

Interpretations are useful: penalizing explanations to align neural networks with prior knowledge

- Computer Science, Mathematics
- ICML
- 2020

For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of…

#### References

Showing 1–10 of 31 references

Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs

- Computer Science, Mathematics
- ICLR
- 2018

The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs…

Interpretations are useful: penalizing explanations to align neural networks with prior knowledge

- Computer Science, Mathematics
- ICML
- 2020

For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of…

Constraining neutrino mass with tomographic weak lensing peak counts

- Physics
- 2019

Massive cosmic neutrinos change the structure formation history by suppressing perturbations on small scales. Weak lensing data from galaxy surveys probe the structure evolution and thereby can be…

Cosmological constraints with deep learning from KiDS-450 weak lensing maps

- Physics
- 2019

Convolutional neural networks (CNNs) have recently been demonstrated on synthetic data to improve upon the precision of cosmological inference. In particular, they have the potential to yield more…

Definitions, methods, and applications in interpretable machine learning

- Computer Science, Medicine
- Proceedings of the National Academy of Sciences
- 2019

This work defines interpretability in the context of machine learning, introduces the predictive, descriptive, relevant (PDR) framework for discussing interpretations, and proposes three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy.

Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees

- Computer Science, Mathematics
- ArXiv
- 2019

Disentangled Attribution Curves (DAC), a method to provide interpretations of tree ensemble methods in the form of (multivariate) feature importance curves, is introduced and validated on real data by showing that the curves can be used to increase the accuracy of logistic regression while maintaining interpretability.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

- Computer Science
- International Journal of Computer Vision
- 2019

This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.

Hierarchical interpretations for neural network predictions

- Computer Science, Mathematics
- ICLR
- 2019

This work introduces the use of hierarchical interpretations to explain DNN predictions through the proposed method, agglomerative contextual decomposition (ACD), and demonstrates that ACD enables users both to identify the more accurate of two DNNs and to better trust a DNN's outputs.

Interpretable Machine Learning

- 2019

Interpretable machine learning has become a popular research direction as deep neural networks (DNNs) have become more powerful and their applications more mainstream, yet DNNs remain difficult to…

Weak lensing cosmology with convolutional neural networks on noisy data

- Physics
- 2019

Weak gravitational lensing is one of the most promising cosmological probes of the late universe. Several large ongoing (DES, KiDS, HSC) and planned (LSST, EUCLID, WFIRST) astronomical surveys…