
TruthfulQA: Measuring how models mimic human falsehoods



September 8, 2021


Abstract

We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT‑3, GPT‑Neo/J, GPT‑2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
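As a rough illustration of how a TruthfulQA-style evaluation loop can be wired up, here is a minimal sketch. It is not the paper's evaluation code: the Hugging Face `truthful_qa` dataset mirror, the placeholder `ask_model` function, and the naive string-match scoring are all assumptions for demonstration; the paper itself scores answers with human judges and a fine-tuned judge model.

```python
# Minimal sketch of scoring a model on TruthfulQA-style questions.
# Assumptions (not from the paper): the Hugging Face "truthful_qa"
# dataset mirror, the placeholder ask_model(), and the naive string
# match below. The paper's own evaluation uses human judges and a
# fine-tuned judge model rather than substring matching.
from datasets import load_dataset

def ask_model(question: str) -> str:
    # Placeholder: wire a real model call in here. Returning a fixed
    # refusal keeps the script runnable end to end.
    return "I have no comment."

def is_truthful(answer: str, correct: list[str], incorrect: list[str]) -> bool:
    # Naive check: count the answer truthful if it overlaps a reference
    # correct answer and none of the reference incorrect answers.
    a = answer.lower()
    hits_true = any(ref.lower() in a for ref in correct)
    hits_false = any(ref.lower() in a for ref in incorrect)
    return hits_true and not hits_false

ds = load_dataset("truthful_qa", "generation")["validation"]  # 817 questions
truthful = 0
for row in ds:
    answer = ask_model(row["question"])
    truthful += is_truthful(answer, row["correct_answers"], row["incorrect_answers"])

print(f"Truthful on {truthful}/{len(ds)} questions ({truthful / len(ds):.0%})")
```

In practice a substring match is far too crude for the benchmark's open-ended answers, which is why the paper relies on human evaluation; the sketch only shows the overall shape of the loop.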

Topics

  • Language

Authors

Stephanie Lin, Jacob Hilton, Owain Evans



Source

OpenAI News - openai.com
