OpenAI’s commitment to child safety: adopting safety by design principles

April 23, 2024

We’re joining Thorn, All Tech Is Human, and other leading companies in an effort to prevent the misuse of generative AI to perpetrate, proliferate, and further sexual harms against children.

OpenAI, alongside industry leaders including Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI, has committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technologies, as articulated in the Safety by Design principles. This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to tackling tech and society's complex problems, aims to mitigate the risks generative AI poses to children.

By adopting comprehensive Safety by Design principles, OpenAI and our peers are ensuring that child safety is prioritized at every stage of AI development. To date, we have made significant efforts to minimize the potential for our models to generate content that harms children, set age restrictions for ChatGPT, and actively engaged with the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and other government and industry stakeholders on child protection issues and enhancements to reporting mechanisms.

As part of this Safety by Design effort, we commit to:

  1. Develop: Develop, build, and train generative AI models that proactively address child safety risks.
  • Responsibly source our training datasets, detect and remove child sexual abuse material (CSAM) and child sexual exploitation material (CSEM) from training data, and report any confirmed CSAM to the relevant authorities.
  • Incorporate feedback loops and iterative stress-testing strategies in our development process.
  2. Deploy: Deploy solutions to address adversarial misuse.
  • Combat and respond to abusive content and conduct, and incorporate prevention efforts.
  • Encourage developer ownership in safety by design.
  3. Maintain: Maintain model and platform safety.
  • Remove new AI-generated CSAM (AIG-CSAM) created by bad actors from our platform.
  • Invest in research and future technology solutions.
  • Fight CSAM, AIG-CSAM, and CSEM on our platforms.

This commitment marks an important step in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. As part of the working group, we have also agreed to release annual progress updates.

“We care deeply about the safety and responsible use of our tools, which is why we’ve built strong guardrails and safety measures into ChatGPT and DALL·E. We are committed to working alongside Thorn, All Tech Is Human, and the broader tech community to uphold the Safety by Design principles and continue our work in mitigating potential harms to children.”

This collective action underscores our shared commitment to ethical innovation and the well-being of the most vulnerable members of society. Thorn has published the principles at https://teamthorn.co/gen-ai.

Related research

Publication Jan 31, 2024

Weak-to-Strong Generalization

Safety Dec 14, 2023

Practices for Governing Agentic AI Systems

Source

OpenAI News - openai.com