Anthropic CEO: AI could be more factually reliable than people in structured tasks

Anthropic CEO Dario Amodei says modern AI models may surpass humans in factual accuracy in structured scenarios, noting that AI, particularly the Claude series, tends to hallucinate less often than humans when answering specific factual questions.

Artificial intelligence may now surpass humans in factual accuracy, at least in certain structured scenarios, according to Anthropic CEO Dario Amodei. Speaking at two major tech events this month, VivaTech 2025 in Paris and the inaugural Code With Claude developer day, Amodei asserted that modern AI models, including the newly launched Claude 4 series, may hallucinate less often than people when answering well-defined factual questions, Business Today reported.

Hallucination, in the context of AI, refers to the tendency of models to confidently produce inaccurate or fabricated information, the report added. This longstanding flaw has raised concerns in fields such as journalism, medicine, and law. However, Amodei’s remarks suggest that the tables may be turning—at least in controlled conditions.

“If you define hallucination as confidently stating something incorrect, humans actually do that quite frequently,” Amodei said during his keynote at VivaTech. He cited internal testing that showed Claude 3.5 outperforming human participants on structured factual quizzes. The results, he claimed, demonstrate a notable shift in reliability on straightforward question-and-answer tasks.
