Transformation Importance with Applications to Cosmology

@article{Singh2020TransformationIW,
  title={Transformation Importance with Applications to Cosmology},
  author={Chandan Singh and Wooseok Ha and F. Lanusse and Vanessa Boehm and Jia Liu and Bin Yu},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.01926}
}
Machine learning lies at the heart of new possibilities for scientific discovery, knowledge generation, and artificial intelligence. Its potential benefits to these fields require going beyond predictive accuracy and focusing on interpretability. In particular, many scientific problems require interpretations in a domain-specific interpretable feature space (e.g. the frequency domain), whereas attributions to the raw features (e.g. the pixel space) may be unintelligible or even misleading. To address this challenge, we propose TRIM (TRansformation IMportance), a novel approach which attributes importances to features in a transformed space and can be applied post-hoc to a fully trained model.
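
The core idea lends itself to a short sketch: reparameterize the model input as the inverse transform of a transformed representation, then run any standard local attribution method in that transformed space. The code below is a rough illustration under stated assumptions, not the paper's pipeline: it uses a discrete Fourier transform as the interpretable feature space and a simple gradient-times-input score in place of the contextual decomposition the authors pair with TRIM; the toy model and the `trim_saliency` helper are hypothetical names introduced here for illustration.

```python
# A minimal sketch of transformation importance: attribute a model's
# prediction to Fourier coefficients of the input rather than raw pixels.
# The attribution method (gradient x input) is a stand-in assumption;
# TRIM itself can be combined with any local interpretation method.
import torch


def trim_saliency(model, x):
    """Attribute model output to the Fourier coefficients of input x.

    model: any callable mapping a real 1-D signal to a scalar prediction.
    x:     1-D real tensor (the raw-space input).
    """
    # Transformed features: treat the frequency coefficients as the
    # variables we differentiate with respect to.
    s = torch.fft.rfft(x).detach().requires_grad_(True)
    # Reparameterize: the model only ever sees the inverse transform.
    x_rec = torch.fft.irfft(s, n=x.shape[-1])
    out = model(x_rec)
    out.backward()
    # Gradient x input score per frequency; magnitude since coefficients
    # are complex.
    return (s.grad * s).abs()


if __name__ == "__main__":
    torch.manual_seed(0)
    net = torch.nn.Sequential(
        torch.nn.Linear(64, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
    )
    model = lambda x: net(x).squeeze()
    x = torch.randn(64)
    print(trim_saliency(model, x))  # one importance score per frequency
```

The same reparameterization trick works for any invertible (or approximately invertible) transform, e.g. a wavelet decomposition, since the attribution machinery never needs to know how the transformed space was chosen.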

Citations

Adaptive wavelet distillation from neural networks through interpretations
Matched sample selection with GANs for mitigating attribute confounding
Human-interpretable model explainability on high-dimensional data (2020)
