


February 25, 2026

Disrupting malicious uses of AI

Our latest report featuring case studies of how we’re detecting and preventing malicious uses of AI.

In the two years since we began publishing these threat reports, we have gained important insights into the ways threat actors attempt to abuse AI models. In particular, the case studies in this report, as in our earlier reports, illustrate how threat actors typically use AI in combination with other, more traditional tools such as websites and social media accounts. Threat activity is seldom limited to one platform; as our report on a Chinese influence operator shows, it is not always limited to one AI model. Rather, threat actors may use different AI models at various points in their operational workflow. We share these insights in our threat reports so that our industry, and wider society, can be better placed to identify and avoid such threats.

Read the full report here.




Source: OpenAI News - openai.com