
Adversarial attacks on neural network policies


February 8, 2017


Abstract

Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at this http URL.
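The crafting techniques the abstract refers to include gradient-based methods in the family of the fast gradient sign method (FGSM). The sketch below illustrates the white-box case only; it is an illustrative reconstruction, not the authors' implementation, and the `policy` module, its action-logit output, and the `epsilon` budget are assumed names for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(policy, obs, epsilon=0.01):
    """White-box FGSM-style perturbation of a policy's raw input.

    Assumes `policy` is a torch.nn.Module mapping a batch of float
    observation tensors to action logits. The perturbation is bounded
    in L-infinity norm by `epsilon`, small enough to leave the input
    effectively unchanged to a human observer.
    """
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    # Treat the policy's currently preferred action as the label, then
    # step the input in the direction that increases the loss on that
    # action, pushing the policy away from what it would otherwise do.
    preferred = logits.argmax(dim=-1).detach()
    loss = F.cross_entropy(logits, preferred)
    loss.backward()
    # A deployed attack would also clamp the result back into the valid
    # observation range (e.g. [0, 1] for normalized pixels).
    return (obs + epsilon * obs.grad.sign()).detach()
```

In the black-box settings the abstract mentions, the adversary has no gradients through the victim policy; perturbations crafted this way against a separately trained substitute policy can still transfer to the target, which is what makes the attack practical without white-box access.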

  • Ethics & Safety

Authors

Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, Pieter Abbeel


Source

OpenAI News - openai.com
