Corpus ID: 246442081

Datamodels: Predicting Predictions from Training Data

@article{Ilyas2022DatamodelsPP,
  title={Datamodels: Predicting Predictions from Training Data},
  author={Andrew Ilyas and Sung Min Park and Logan Engstrom and Guillaume Leclerc and Aleksander Madry},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.00622}
}
We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data. For any fixed “target” example x, training set S, and learning algorithm, a datamodel is a parameterized function 2^S → ℝ that, for any subset S′ ⊂ S (using only information about which examples of S are contained in S′), predicts the outcome of training a model on S′ and evaluating it on x. Despite the potential complexity of the underlying process being approximated (e.g…
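In the paper's instantiation, the datamodel for a target example x is a sparse linear function of the subset-membership indicator vector, fit by regressing the outputs (correct-class margins) of many models, each trained on a random subset S′ ⊂ S, on the binary masks encoding those subsets. The following is a minimal sketch of that recipe under stated assumptions: the train_and_eval simulator stands in for an actual training run, and the toy sizes, subsampling fraction, and Lasso penalty are illustrative choices, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_train = 1_000        # |S| (toy size; the paper works at CIFAR/ImageNet scale)
n_models = 2_000       # number of subsets S' (and retrained models) used for fitting
subsample_frac = 0.5   # each S' keeps roughly this fraction of S

# Toy stand-in for "train a model on S' and evaluate the margin on x".
# In the real pipeline this is a full training run; here we simulate a sparse
# ground-truth per-example effect plus noise so the sketch runs end to end.
true_effect = rng.normal(0, 1, n_train) * (rng.random(n_train) < 0.02)
def train_and_eval(mask):
    return float(mask @ true_effect + rng.normal(0, 0.1))

masks = (rng.random((n_models, n_train)) < subsample_frac).astype(np.float64)
margins = np.array([train_and_eval(m) for m in masks])

# The datamodel: a sparse linear surrogate from subset-membership indicators
# (which examples of S are in S') to the model's output on the target x.
datamodel = Lasso(alpha=1e-3).fit(masks, margins)
theta = datamodel.coef_  # one weight per training example
print("most influential training indices:", np.argsort(-np.abs(theta))[:5])
```

The fitted weights assign each training example a signed score indicating how its inclusion tends to move the model's output on x, which is what makes the surrogate usable for downstream analyses such as data attribution.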
Citations

Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments
TLDR: A new, principled algorithm for estimating the contribution of training data points to the behavior of a deep learning model, such as a specific prediction it makes, improving upon the best prior Shapley value estimators.
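For context on the Shapley-value baselines that the work above improves upon, here is a generic permutation-sampling Monte Carlo estimator of data Shapley values. It is not the randomized-experiments estimator proposed in that paper, and the toy dataset, utility function, and sample counts are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy task: value each training point by its average marginal contribution
# to the test accuracy of a model trained on subsets of the data.
X_train = rng.normal(size=(40, 2)); y_train = (X_train[:, 0] > 0).astype(int)
X_test = rng.normal(size=(200, 2)); y_test = (X_test[:, 0] > 0).astype(int)

def utility(idx):
    """Test accuracy of a classifier trained on the training subset `idx`."""
    if len(idx) < 2 or len(set(y_train[idx])) < 2:
        return 0.5  # degenerate subset: fall back to chance-level accuracy
    clf = LogisticRegression().fit(X_train[idx], y_train[idx])
    return clf.score(X_test, y_test)

n, n_perms = len(y_train), 50
shapley = np.zeros(n)
for _ in range(n_perms):
    perm = rng.permutation(n)
    prev = utility([])
    for k, i in enumerate(perm):
        cur = utility(perm[: k + 1].tolist())  # add points one at a time
        shapley[i] += cur - prev               # marginal contribution of point i
        prev = cur
shapley /= n_perms
print("highest-value training points:", np.argsort(-shapley)[:5])
```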
Exploring Transformer Backbones for Heterogeneous Treatment Effect Estimation
TLDR: This work develops a framework called TransTEE, in which attention layers govern interactions among treatments and covariates to exploit structural similarities of potential outcomes (POs) for confounding control; TransTEE can serve as a general-purpose treatment effect estimator that outperforms competitive baselines on a variety of challenging treatment effect estimation (TEE) problems.
Interpolating Compressed Parameter Subspaces
TLDR: The utility of CPS is demonstrated for single and multiple test-time distribution settings, with improved mappings between the two spaces, higher accuracy, improved robustness across perturbation types, reduced catastrophic forgetting on Split-CIFAR10/100, strong capacity for multi-task solutions and unseen/distant tasks, and storage-efficient inference (ensembling, hypernetworks).
Distilling Model Failures as Directions in Latent Space
TLDR: This work presents a scalable method for automatically distilling a model's failure modes: it harnesses linear classifiers to identify consistent error patterns and induces a natural representation of these failure modes as directions within the feature space.
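A minimal sketch of the core idea in the entry above, under assumed inputs: given latent-space features for validation examples and a flag for whether the model erred on each, a linear classifier separating errors from correct predictions yields a candidate "failure direction". The synthetic features and error flags stand in for real embeddings and model errors, and the paper's actual pipeline differs in its embedding choice and validation details.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins: latent features of validation examples from one class, plus a
# 0/1 flag for whether the model got each example wrong. In a real pipeline
# these would come from a pretrained embedding and the model's actual errors.
feats = rng.normal(size=(500, 64))
hidden_direction = rng.normal(size=64)
is_error = (feats @ hidden_direction + rng.normal(0, 1, 500) > 1.5).astype(int)

# A linear classifier that separates errors from correct predictions; its
# (normalized) weight vector is a candidate "failure direction" in latent space.
svm = LinearSVC(C=0.1, max_iter=10_000).fit(feats, is_error)
failure_direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

# Examples scoring highest along this direction surface the failure mode for
# inspection (or for curating additional training data).
scores = feats @ failure_direction
print("most failure-aligned examples:", np.argsort(-scores)[:10])
```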
Data Errors: Symptoms, Causes and Origins
TLDR: A vision for automating data disposal ("disposal by design") that takes processing, regulatory, and storage constraints into account is presented, along with three concrete examples that address aspects of this vision.
The Privacy Onion Effect: Memorization is Relative
TLDR: An Onion Effect of memorization is demonstrated and analysed: removing the “layer” of outlier points that are most vulnerable to a privacy attack exposes a new layer of previously-safe points to the same attack.
Can Backdoor Attacks Survive Time-Varying Models?
TLDR: The results show that one-shot backdoor attacks do not survive past a few model updates, even when attackers aggressively increase trigger size and poison ratio, and indicate that the larger the distribution shift between old and new training data, the faster backdoors are forgotten.

References

Showing 1-10 of 96 references
A Unified Approach to Interpreting Model Predictions
TLDR: A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
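A minimal usage sketch of model-agnostic Kernel SHAP via the shap package (assuming it is installed); the model, data, background-sample size, and nsamples below are illustrative choices, not a recommendation from the paper.

```python
import numpy as np
import shap  # assumes the `shap` package is installed
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Model-agnostic Kernel SHAP: additive Shapley attributions of each feature to
# the predicted probability, measured against a background sample of the data.
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1],
                                 shap.sample(X, 50))
shap_values = explainer.shap_values(X[:5], nsamples=200)
print(np.round(shap_values, 3))  # one attribution per feature, per explained row
```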
Selection Via Proxy: Efficient Data Selection For Deep Learning
TLDR: This work shows that the computational efficiency of data selection in deep learning can be significantly improved by using a much smaller proxy model to perform data selection for tasks that will eventually require a large target model (e.g., selecting data points to label for active learning).
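A hedged sketch of the selection-via-proxy recipe described above: train a cheap proxy model, score a large candidate pool by proxy uncertainty, and hand only the selected points to the expensive target model. The pool, the logistic-regression proxy, and the margin-based uncertainty score are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Candidate pool (e.g., points we could label, or a large set we want to prune).
# The labels are used here only to simulate the small labeled seed set.
X_pool = rng.normal(size=(5_000, 20))
y_pool = (X_pool[:, :3].sum(axis=1) > 0).astype(int)

# 1) Train a cheap proxy model on a small labeled seed set.
seed = rng.choice(len(X_pool), size=200, replace=False)
proxy = LogisticRegression(max_iter=1_000).fit(X_pool[seed], y_pool[seed])

# 2) Score the pool by proxy uncertainty (smallest margin = most informative).
proba = proxy.predict_proba(X_pool)[:, 1]
margin = np.abs(proba - 0.5)
selected = np.argsort(margin)[:1_000]

# 3) Only `selected` would be labeled and/or used to train the expensive
#    target model; the proxy itself is discarded after selection.
print("selected", len(selected), "of", len(X_pool), "candidates")
```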
Distributional Generalization: A New Kind of Generalization
We introduce a new notion of generalization, Distributional Generalization, which roughly states that outputs of a classifier at train and test time are close *as distributions*, as opposed to close in just their average error.
Understanding Black-box Predictions via Influence Functions
TLDR: This paper uses influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction.
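A small, self-contained sketch of the influence-function computation for an L2-regularized logistic regression, where the Hessian can be formed and inverted exactly; the data, regularization strength, and single synthetic test point are illustrative assumptions. For deep networks, the paper above relies on Hessian-vector-product approximations rather than an explicit inverse.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, lam = 500, 5, 1e-2
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Fit an L2-regularized logistic regression; C is chosen so the regularization
# matches `lam` in the mean-loss parameterization used for the Hessian below.
clf = LogisticRegression(C=1.0 / (lam * n), fit_intercept=False).fit(X, y)
w = clf.coef_.ravel()

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
p = sigmoid(X @ w)

# Per-example gradients of the (unregularized) log-loss, and the Hessian of
# the regularized empirical risk, both evaluated at the fitted parameters w.
grads = (p - y)[:, None] * X
H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)
H_inv = np.linalg.inv(H)

# Influence of up-weighting training point z_i on the loss at a test point:
# I_up,loss(z_i, z_test) = -grad(z_test)^T H^{-1} grad(z_i)   (Koh & Liang style).
x_test = rng.normal(size=d)
y_test = 1
grad_test = (sigmoid(x_test @ w) - y_test) * x_test
influence = -(grad_test @ H_inv @ grads.T)  # one score per training point

# Positive scores: up-weighting that training point would increase the test loss.
print("training points most harmful to this test point:", np.argsort(-influence)[:5])
```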
An Empirical Comparison of Instance Attribution Methods for NLP
TLDR: It is found that simple retrieval methods yield training instances that differ from those identified via gradient-based methods (such as influence functions), but that nonetheless exhibit desirable characteristics similar to more complex attribution methods.
Underspecification Presents Challenges for Credibility in Modern Machine Learning
TLDR: This work shows the need to explicitly account for underspecification in modeling pipelines intended for real-world deployment in any domain, and demonstrates that this problem appears in a wide variety of practical ML pipelines.
Influence Functions in Deep Learning Are Fragile
TLDR: It is suggested that influence functions in deep learning are generally fragile, calling for improved influence estimation methods that mitigate these issues in non-convex setups.
Deep learning: a statistical viewpoint
TLDR: This article surveys recent progress in statistical learning theory, providing examples that illustrate these principles in simpler settings and focusing specifically on the linear regime for neural networks, where the network can be approximated by a linear model.
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
TLDR: The experiments demonstrate the significant benefits of memorization for generalization on several standard benchmarks and provide quantitative and visually compelling evidence for the theory put forth in Feldman (2019), which proposes a theoretical explanation for this phenomenon.
An Empirical Study of Example Forgetting during Deep Neural Network Learning
TLDR: It is found that certain examples are forgotten with high frequency, and some not at all; a data set’s (un)forgettable examples generalize across neural architectures; and a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.
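A per-epoch approximation of the forgetting-event bookkeeping described above, using an SGD-trained linear classifier as a stand-in for a neural network: an example is "forgotten" when it flips from correctly classified after one epoch to misclassified after the next. The synthetic data and the linear model are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=2_000) > 0).astype(int)

clf = SGDClassifier(random_state=0)
n_epochs = 20
prev_correct = np.zeros(len(y), dtype=bool)
forgetting_events = np.zeros(len(y), dtype=int)

for epoch in range(n_epochs):
    order = rng.permutation(len(y))
    if epoch == 0:
        # The first call to partial_fit must declare the full set of classes.
        clf.partial_fit(X[order], y[order], classes=np.array([0, 1]))
    else:
        clf.partial_fit(X[order], y[order])
    correct = clf.predict(X) == y
    # Forgetting event: correct after the previous epoch, wrong after this one.
    forgetting_events += (prev_correct & ~correct).astype(int)
    prev_correct = correct

print("most-forgotten examples:", np.argsort(-forgetting_events)[:10])
print("unforgettable (learned and never forgotten):",
      int((prev_correct & (forgetting_events == 0)).sum()))
```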
...