
Transfer of adversarial robustness between perturbation types

OpenAI · May 3, 2019 · Publication



Abstract

We study the transfer of adversarial robustness of deep neural networks between different perturbation types. While most work on adversarial examples has focused on L_∞- and L_2-bounded perturbations, these do not capture all types of perturbations available to an adversary. The present work evaluates 32 attacks of 5 different types against models adversarially trained on a 100-class subset of ImageNet. Our empirical results suggest that evaluating on a wide range of perturbation sizes is necessary to understand whether adversarial robustness transfers between perturbation types. We further demonstrate that robustness against one perturbation type may not always imply, and may sometimes hurt, robustness against other perturbation types. In light of these results, we recommend evaluation of adversarial defenses take place on a diverse range of perturbation types and sizes.
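The L_∞ and L_2 bounds mentioned above constrain a perturbation in different ways: an L_∞ bound caps the change to each individual coordinate, while an L_2 bound caps the overall Euclidean size of the change. As a minimal illustration (not the paper's implementation), the sketch below projects a perturbation vector onto each kind of ball; the function names and example values are hypothetical.

```python
import math

def project_linf(delta, eps):
    # L_inf ball of radius eps: clip each coordinate to [-eps, eps]
    return [max(-eps, min(eps, d)) for d in delta]

def project_l2(delta, eps):
    # L_2 ball of radius eps: rescale the whole vector if its norm exceeds eps
    norm = math.sqrt(sum(d * d for d in delta))
    if norm > eps:
        return [d * eps / norm for d in delta]
    return list(delta)

# Hypothetical perturbation: one coordinate change is large, the rest are small
delta = [0.3, -0.2, 0.05, 0.4]
d_inf = project_linf(delta, 0.1)  # every coordinate now lies in [-0.1, 0.1]
d_l2 = project_l2(delta, 0.1)     # whole vector rescaled to Euclidean norm 0.1
```

Note the qualitative difference: the L_∞ projection distorts the perturbation's direction (large coordinates are clipped independently), while the L_2 projection preserves direction and only shrinks magnitude. Attacks bounded by one norm can therefore look quite unlike attacks bounded by the other, which is why robustness to one need not transfer to the other.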

  • Ethics & Safety




Source

OpenAI News - openai.com

View the original publication