Corpus ID: 232307852

Data Cleansing for Deep Neural Networks with Storage-efficient Approximation of Influence Functions

@article{Suzuki2021DataCF,
  title={Data Cleansing for Deep Neural Networks with Storage-efficient Approximation of Influence Functions},
  author={Kenji Suzuki and Yoshiyuki Kobayashi and Takuya Narihira},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.11807}
}
Identifying the influence of training data for data cleansing can improve the accuracy of deep learning. An approach based on stochastic gradient descent (SGD), called SGD-influence, was proposed to calculate these influence scores, but its calculation costs are high: the model parameters must be temporarily stored during the training phase so that influence scores can be computed in the inference phase. Building closely on this previous method, we propose a method to reduce the cache files needed to store…
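To make the notion of "influence of training data" concrete, a brute-force leave-one-out estimate on a tiny least-squares model can serve as an illustration. This is a hypothetical sketch of the general idea, not the SGD-influence algorithm or the storage-efficient approximation described in the paper:

```python
# Brute-force leave-one-out influence on a 1-D least-squares model.
# Hypothetical illustration of "influence of a training point on validation
# loss" -- NOT the SGD-influence algorithm from the paper.

def fit_w(xs, ys):
    """Closed-form least squares for y ~ w * x (no intercept)."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

def val_loss(w, x_val, y_val):
    return (w * x_val - y_val) ** 2

def loo_influence(xs, ys, x_val, y_val):
    """Influence of each training point: the change in validation loss
    when that point is removed and the model is refit from scratch."""
    base = val_loss(fit_w(xs, ys), x_val, y_val)
    scores = []
    for i in range(len(xs)):
        xs_i = xs[:i] + xs[i + 1:]
        ys_i = ys[:i] + ys[i + 1:]
        scores.append(val_loss(fit_w(xs_i, ys_i), x_val, y_val) - base)
    return scores

# Clean points follow y = 2x; the last point is a deliberate outlier.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 20.0]
scores = loo_influence(xs, ys, x_val=1.0, y_val=2.0)
# Removing the outlier lowers validation loss the most (most negative
# score), flagging it as a candidate for data cleansing.
```

SGD-influence avoids this O(n) retraining by retracing the recorded SGD steps instead; the paper's contribution is reducing the storage those recorded steps require.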
1 Citation


Understanding Instance-based Interpretability of Variational Auto-Encoders

This paper investigates influence functions, a popular instance-based interpretation method, for a class of deep generative models called variational auto-encoders (VAEs). It formally frames the counter-factual question answered by influence functions in this setting and, through theoretical analysis, examines what they reveal about the impact of training samples on classical unsupervised learning methods.

References

SHOWING 1-8 OF 8 REFERENCES

Data Cleansing for Models Trained with SGD

This paper proposes an algorithm that can suggest influential instances without using any domain knowledge, and infers the influential instances by retracing the steps of the SGD while incorporating intermediate models computed in each step.

Understanding Black-box Predictions via Influence Functions

This paper uses influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction.
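The classic influence function this reference builds on can be stated compactly. This is the standard formulation from robust statistics as popularized for deep models, not an equation quoted from the abstract above: for a model with parameters $\hat{\theta}$ minimizing the empirical risk over training points $z_1, \dots, z_n$, the influence of upweighting a training point $z$ on the loss at a test point $z_{\mathrm{test}}$ is

```latex
% Influence of upweighting training point z on the loss at z_test,
% where H is the Hessian of the empirical risk at the optimum:
\mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}})
  = -\,\nabla_\theta L(z_{\mathrm{test}}, \hat{\theta})^{\top}\,
     H_{\hat{\theta}}^{-1}\,
     \nabla_\theta L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat{\theta}).
```

SGD-influence, discussed in the main paper above, replaces the Hessian inversion with a retrace of the recorded SGD trajectory, which is what creates the storage cost the paper targets.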

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

This work proposes a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of an input image.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.

Characterizations of an Empirical Influence Function for Detecting Influential Cases in Regression

Traditionally, most of the effort in fitting full rank linear regression models has centered on the study of the presence, strength and form of relationships between the measured variables. As is now…

Neural Network Libraries: A Deep Learning Framework Designed from Engineers' Perspectives

This paper introduces Neural Network Libraries, a deep learning framework designed from engineers' perspectives with usability and compatibility as its core design principles, and elaborates on each of these principles and their merits.

Gradient-based learning applied to document recognition

This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task; convolutional neural networks are shown to outperform all other techniques.