AI · OpenAI News

GPT-5.5 Bio Bug Bounty

Explore the GPT-5.5 Bio Bug Bounty: a red-teaming challenge to find universal jailbreaks for bio safety risks, with rewards up to $25,000.


April 23, 2026


Testing universal jailbreaks for biorisks in GPT‑5.5

Invitation

As part of our ongoing efforts to strengthen our safeguards for advanced AI capabilities in biology, we’re introducing a Bio Bug Bounty for GPT‑5.5 and accepting applications. We’re inviting researchers with experience in AI red teaming, security, or biosecurity to try to find a universal jailbreak that can defeat our five-question bio safety challenge.

Program overview

  • Model in scope: GPT‑5.5 in Codex Desktop only.
  • Challenge: Identify a single universal jailbreak prompt that successfully answers all five bio safety questions from a clean chat without triggering moderation.
  • Rewards: $25,000 for the first true universal jailbreak that clears all five questions.
  • Smaller awards may be granted for partial wins at our discretion.

How to participate

Submit a short application (name, affiliation, experience) by June 22, 2026. Accepted applicants and collaborators must have existing ChatGPT accounts and will sign an NDA. Apply now and help us make frontier AI safer.

If you’re interested in supporting OpenAI’s work to deliver safe and secure artificial intelligence beyond the Bio Bug Bounty program, you can learn about our Safety Bug Bounty and Security Bug Bounty programs.

Keep reading

  • Running Codex safely at OpenAI (Security, May 8, 2026)
  • Introducing Trusted Contact in ChatGPT (Safety, May 7, 2026)
  • System Card (Safety, May 5, 2026)


Source

OpenAI News - openai.com
