# Are Labels Required for Improving Adversarial Robustness?

@inproceedings{Uesato2019AreLR, title={Are Labels Required for Improving Adversarial Robustness?}, author={Jonathan Uesato and Jean-Baptiste Alayrac and Po-Sen Huang and Robert Stanforth and Alhussein Fawzi and Pushmeet Kohli}, booktitle={NeurIPS}, year={2019} }

Recent work has uncovered the interesting (and somewhat surprising) finding that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. [...] On standard datasets like CIFAR-10, a simple Unsupervised Adversarial Training (UAT) approach using unlabeled data improves robust accuracy by 21.7% over using 4K supervised examples alone, and captures over 95% of the improvement from the same number of labeled…
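The UAT recipe sketched in the abstract — pseudo-label the unlabeled data with a model trained on the few available labels, then adversarially train a student on those pseudo-labels — can be illustrated in a few lines. This is a hypothetical minimal sketch on a 2-D logistic model with a one-step FGSM attack; the names (`fgsm`, `uat_step`, `teacher_w`) and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    # One-step L-inf attack on the logistic loss: x + eps * sign(dLoss/dx).
    grad_x = (sigmoid(x @ w) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def uat_step(w_student, x_unlabeled, teacher_w, eps=0.1, lr=0.5):
    # Hard pseudo-labels from a fixed teacher trained on the labeled subset.
    pseudo_y = (sigmoid(x_unlabeled @ teacher_w) > 0.5).astype(float)
    # Attack the *student*, then take a gradient step on the adversarial batch.
    x_adv = fgsm(x_unlabeled, pseudo_y, w_student, eps)
    grad_w = x_adv.T @ (sigmoid(x_adv @ w_student) - pseudo_y) / len(x_adv)
    return w_student - lr * grad_w
```

Iterating `uat_step` over unlabeled batches pushes the student toward a boundary that stays correct inside the eps-ball around the pseudo-labeled points, which mirrors the mechanism the abstract credits for the robust-accuracy gain.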

#### 160 Citations

Adversarially Robust Generalization Just Requires More Unlabeled Data

- Computer Science, Mathematics
- ArXiv
- 2019

It is proved that for a specific Gaussian mixture problem illustrated by [35], adversarially robust generalization can be almost as easy as the standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided.

Unlabeled Data Improves Adversarial Robustness

- Computer Science, Mathematics
- NeurIPS
- 2019

It is proved that unlabeled data bridges the complexity gap between standard and robust classification: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy.

Robustness to Adversarial Perturbations in Learning from Incomplete Data

- Computer Science, Mathematics
- NeurIPS
- 2019

A generalization theory is developed for Semi-Supervised Learning and Distributionally Robust Learning based on a number of novel complexity measures, such as an adversarial extension of Rademacher complexity and its semi-supervised analogue.

ARMOURED: Adversarially Robust Models

- 2020

Adversarial attacks pose a major challenge for modern deep neural networks. Recent advancements show that adversarially robust generalization requires a huge amount of labeled data for training. If…

Overfitting in adversarially robust deep learning

- Computer Science, Mathematics
- ICML
- 2020

It is found that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training across multiple datasets (SVHN, CIFAR-10, CIFAR-100, and ImageNet) and perturbation models.

Self-Supervised Adversarial Robustness for the Low-Label, High-Data Regime

- 2021

Recent work discovered that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. Perhaps more…

Robust Pre-Training by Adversarial Contrastive Learning

- Computer Science
- NeurIPS
- 2020

This work improves robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations, and shows that ACL pre-training can improve semi-supervised adversarial training, even when only a few labeled examples are available.

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?

- Computer Science, Mathematics
- ArXiv
- 2019

This paper argues that the quality of pseudo-labels is the bottleneck of adversarial learning with unlabeled data, and proposes robust co-training (RCT), which trains two deep networks and encourages them to diverge by exploiting each peer's adversarial examples.

Improving Robustness using Generated Data

- Computer Science, Mathematics
- ArXiv
- 2021

It is demonstrated that it is possible to significantly reduce the robust-accuracy gap to models trained with additional real data, and even the addition of non-realistic random data (generated by Gaussian sampling) can improve robustness.

A large amount of attacking methods on generating adversarial examples have been introduced in recent years (Carlini & Wagner, 2017a

- 2019

Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train…

#### References

Showing 1–10 of 58 references

Adversarially Robust Generalization Just Requires More Unlabeled Data

- Computer Science, Mathematics
- ArXiv
- 2019

It is proved that for a specific Gaussian mixture problem illustrated by [35], adversarially robust generalization can be almost as easy as the standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided.

Unlabeled Data Improves Adversarial Robustness

- Computer Science, Mathematics
- NeurIPS
- 2019

It is proved that unlabeled data bridges the complexity gap between standard and robust classification: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy.

Robustness to Adversarial Perturbations in Learning from Incomplete Data

- Computer Science, Mathematics
- NeurIPS
- 2019

A generalization theory is developed for Semi-Supervised Learning and Distributionally Robust Learning based on a number of novel complexity measures, such as an adversarial extension of Rademacher complexity and its semi-supervised analogue.

Rademacher Complexity for Adversarially Robust Generalization

- Computer Science, Mathematics
- ICML
- 2019

For binary linear classifiers, it is shown that the adversarial Rademacher complexity is never smaller than its natural counterpart, and it has an unavoidable dimension dependence, unless the weight vector has bounded $\ell_1$ norm.

Adversarially Robust Generalization Requires More Data

- Computer Science, Mathematics
- NeurIPS
- 2018

It is shown that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning.

Improved generalization bounds for robust learning

- Computer Science, Mathematics
- ALT
- 2019

A model of robust learning is considered in an adversarial environment where the learner gets uncorrupted training data, with access to possible corruptions that may be applied by the adversary during testing, in order to build a robust classifier that would be tested on future adversarial examples.

Scaling provable adversarial defenses

- Computer Science, Mathematics
- NeurIPS
- 2018

This paper presents a technique for extending these training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further improve robust error through cascade models.

Ensemble Adversarial Training: Attacks and Defenses

- Computer Science, Mathematics
- ICLR
- 2018

This work finds that adversarial training remains vulnerable to black-box attacks, where perturbations computed on undefended models are transferred, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.

Adversarial Machine Learning at Scale

- Computer Science, Mathematics
- ICLR
- 2017

This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and reports the resolution of a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.

Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning

- Computer Science, Mathematics
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2019

A new regularization method is proposed based on virtual adversarial loss, a measure of local smoothness of the conditional label distribution given the input, which achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
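Since several entries above build on VAT, a minimal sketch of its label-free loss may help: find the perturbation direction that most changes the model's predictive distribution (approximated here by one finite-difference power-iteration step), then penalize the KL divergence that perturbation induces. Everything below (`vat_loss`, the binary logistic model, the analytic gradient `(q - p) * w`) is an illustrative assumption, not the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kl_bernoulli(p, q, tiny=1e-7):
    # KL divergence between Bernoulli(p) and Bernoulli(q), clipped for stability.
    p, q = np.clip(p, tiny, 1 - tiny), np.clip(q, tiny, 1 - tiny)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def vat_loss(x, w, eps=0.5, xi=1e-6, seed=0):
    # Virtual adversarial loss: KL(p(y|x) || p(y|x + r_adv)); no labels are used.
    p = sigmoid(x @ w)
    d = np.random.default_rng(seed).normal(size=x.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    # One power-iteration step: gradient of the KL w.r.t. r, evaluated at r = xi*d.
    # For a linear logit model this gradient is (q - p) * w analytically.
    q = sigmoid((x + xi * d) @ w)
    g = (q - p)[:, None] * w[None, :]
    r_adv = eps * g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)
    return kl_bernoulli(p, sigmoid((x + r_adv) @ w)).mean()
```

Adding this term to a supervised loss smooths the predictive distribution around every input, labeled or not, which is what lets VAT-style regularizers exploit unlabeled data.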