# DeepMoM: Robust Deep Learning With Median-of-Means

```bibtex
@article{Huang2022DeepMoMRD,
  title   = {DeepMoM: Robust Deep Learning With Median-of-Means},
  author  = {Shih-Ting Huang and Johannes Lederer},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2105.14035}
}
```
• Published 28 May 2021 · Computer Science · ArXiv
Data used in deep learning is notoriously problematic. For example, data are usually combined from diverse sources, rarely cleaned and vetted thoroughly, and sometimes corrupted on purpose. Intentional corruption that targets the weak spots of algorithms has been studied extensively under the label of “adversarial attacks.” In contrast, the arguably much more common case of corruption that reflects the limited quality of data has been studied much less. Such “random” corruptions are due to…
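The median-of-means principle named in the title is simple to state: split the samples into disjoint blocks, average within each block, and report the median of the block means, so that a few grossly corrupted samples can spoil only a few blocks. A minimal numpy sketch of the estimator (the underlying principle only, not the paper's actual DeepMoM training procedure):

```python
import numpy as np

def median_of_means(values, n_blocks, seed=0):
    """Median-of-means estimate of the mean of `values`.

    The samples are shuffled, split into `n_blocks` disjoint blocks,
    averaged within each block, and the median of the block means is
    returned.  A few gross outliers can contaminate at most a few
    blocks, which the median then ignores.
    """
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(np.asarray(values, dtype=float))
    block_means = [block.mean() for block in np.array_split(shuffled, n_blocks)]
    return float(np.median(block_means))

# 97 clean values plus 3 gross corruptions:
values = [1.0] * 97 + [1000.0] * 3
print(np.mean(values))                       # the plain mean is dragged to 30.97
print(median_of_means(values, n_blocks=15))  # the median-of-means stays at 1.0
```

With 15 blocks, the 3 corrupted samples can contaminate at most 3 block means, and the median over the remaining clean blocks is unaffected.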

## References

Showing the first 10 of 69 references.

### Towards Deep Learning Models Resistant to Adversarial Attacks

ICLR, 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
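The first-order adversary in this robust-optimization view is typically instantiated with projected gradient descent (PGD) on the input. A minimal sketch of the inner maximization under an $\ell_\infty$ constraint, using a toy linear loss rather than a trained network (the gradient function and step sizes here are illustrative assumptions):

```python
import numpy as np

def pgd_linf(grad_fn, x, eps, alpha, n_steps):
    """Inner PGD maximisation (sketch): repeatedly step in the sign of the
    loss gradient and project back onto the l-infinity ball of radius
    `eps` around the clean input `x`."""
    x_adv = x.copy()
    for _ in range(n_steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))   # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)          # projection
    return x_adv

# Toy loss L(x) = w . x, whose gradient w.r.t. x is simply w:
w = np.array([1.0, -2.0, 0.5])
x_adv = pgd_linf(lambda z: w, x=np.zeros(3), eps=0.1, alpha=0.05, n_steps=10)
print(x_adv)   # every coordinate saturates the eps-ball at +/- 0.1
```

Adversarial training then minimizes the network loss on these worst-case inputs instead of the clean ones.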

### Provable defenses against adversarial examples via the convex outer adversarial polytope

ICML, 2018
This work proposes a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations; robustness is certified via a linear-programming relaxation whose dual can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.

### Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers

NeurIPS, 2019
It is demonstrated through extensive experimentation that this method consistently outperforms all existing provably $\ell_2$-robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state of the art for provable $\ell_2$-defenses.

### Towards Robust Deep Neural Networks

ArXiv, 2018
Experimental results indicate that the proposed method outperforms state-of-the-art sensitivity-based learning approaches with regard to robustness to adversarial attacks, and that the introduced framework achieves competitive overall performance relative to methods that address robustness directly.

### One Pixel Attack for Fooling Deep Neural Networks

IEEE Transactions on Evolutionary Computation, 2019
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
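Differential evolution itself is a small population-based black-box search; in the one-pixel attack each candidate encodes a pixel position and colour, and the objective is the classifier's confidence in the true class. A minimal DE/rand/1 sketch with greedy selection (mutation only, no crossover, and a smooth toy objective standing in for a network's confidence):

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, n_iter=200, F=0.5, seed=0):
    """Minimal DE/rand/1 (sketch): mutate three distinct population members
    into a trial vector and keep it whenever it beats the current member.
    Only function evaluations are needed -- no gradients (black-box)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    scores = np.array([objective(p) for p in pop])
    for _ in range(n_iter):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            trial = np.clip(a + F * (b - c), lo, hi)   # mutation + bound handling
            trial_score = objective(trial)
            if trial_score < scores[i]:                # greedy one-to-one selection
                pop[i], scores[i] = trial, trial_score
    best = scores.argmin()
    return pop[best], scores[best]

# Toy objective standing in for "confidence in the true class";
# the attack would instead encode (x, y, r, g, b) per candidate.
best, score = differential_evolution(lambda p: float(np.sum((p - 3.0) ** 2)),
                                     bounds=np.array([[0.0, 10.0]] * 3))
```

Because only function values are compared, the same loop works when the objective is a remote model's output probability, which is what makes the attack a black-box one.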

### Probabilistic End-To-End Noise Correction for Learning With Noisy Labels

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
This paper proposes an end-to-end framework called PENCIL that updates both the network parameters and the label estimates, treated as label distributions; the framework is more general and robust than existing methods and is easy to apply.

### Unsupervised label noise modeling and loss correction

ICML, 2019
A two-component mixture model over per-sample loss values is suggested as an unsupervised generative model during training; it allows online estimation of the probability that a sample is mislabelled and correction of the loss by relying on the network prediction.
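The mechanism can be sketched by fitting a two-component mixture to the per-sample losses with EM; the Gaussian components, initialization, and toy data below are illustrative simplifications of the unsupervised model described above:

```python
import numpy as np

def noisy_sample_posterior(losses, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to per-sample losses with EM
    and return, for each sample, the posterior probability of the higher-mean
    component, i.e. an estimate of P(mislabelled | loss)."""
    x = np.asarray(losses, dtype=float)
    mu = np.array([x.min(), x.max()])            # initialise at the extremes
    sigma = np.full(2, x.std() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of the two components for each sample
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return resp[:, np.argmax(mu)]                # posterior of the "noisy" component

# Clean samples incur small losses, mislabelled ones large losses:
losses = np.concatenate([np.full(90, 0.2), np.full(10, 3.0)])
p_noisy = noisy_sample_posterior(losses)         # near 0 for clean, near 1 for noisy
```

The resulting posteriors can then down-weight or correct the loss of likely-mislabelled samples during training.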

### MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels

ICML, 2018
Experimental results demonstrate that the proposed technique of learning an auxiliary neural network, called MentorNet, to supervise the training of the base deep network, called StudentNet, can significantly improve the generalization performance of deep networks trained on corrupted training data.

### Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

IEEE Symposium on Security and Privacy (SP), 2016
The study shows that defensive distillation can reduce the success rate of adversarial-sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by defensive distillation when training DNNs.

### Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
It is proved that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise, and it is shown how the class-dependent noise-transition probabilities can be estimated, adapting a recent noise-estimation technique to the multi-class setting and providing an end-to-end framework.
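The backward loss correction can be sketched in a few lines, assuming the noise-transition matrix $T$ (with $T_{ij}$ the probability that true class $i$ is observed as class $j$) is known or has been estimated; this is an illustrative fragment, not the authors' full estimation pipeline:

```python
import numpy as np

def backward_corrected_losses(per_class_losses, T):
    """Backward correction (sketch): applying T^{-1} to the vector of losses
    under each possible label yields loss estimates whose expectation over
    the label-noise process matches the loss under the clean labels."""
    return np.linalg.solve(T, per_class_losses)

# Two classes whose labels are flipped with probability 0.2:
T = np.array([[0.8, 0.2],
              [0.2, 0.8]])
p = np.array([0.9, 0.1])           # model's predicted class probabilities
losses = -np.log(p)                # cross-entropy loss under each possible label
corrected = backward_corrected_losses(losses, T)
# Unbiasedness check: averaging the corrected losses over the noise
# process recovers the clean losses, i.e. T @ corrected == losses.
```

During training, the loss actually minimized for a sample with noisy label $\tilde y$ is the $\tilde y$-th entry of the corrected vector.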