LLMs Are More Persuasive than Humans, Study Finds

A study led by Philipp Schoenegger (LSE) and colleagues from universities such as EPFL, Cambridge, MIT, and Stanford investigated the persuasive ability of state-of-the-art language models compared to humans motivated by financial rewards. The paper, titled "Large Language Models Are More Persuasive Than Incentivized Human Persuaders," was published on arXiv in May of this year.

By Gennaro | Lead Researcher

Summary and Key Findings


The experiment compared the persuasive performance of the Claude 3.5 Sonnet model with that of human persuaders in an interactive quiz covering general-knowledge questions, cognitive illusions, and forecasts of future events. Persuadees earned points for correct answers; persuaders earned points for convincing them.
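The incentive structure described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the study: the point values, question set, and the `QuizRound` fields are all assumptions made for clarity.

```python
# Hypothetical sketch of the quiz incentive structure: persuadees are
# rewarded for accuracy, persuaders for successful persuasion,
# regardless of whether the target answer is true.
from dataclasses import dataclass

@dataclass
class QuizRound:
    question: str
    correct_answer: str
    chosen_answer: str     # answer the persuadee finally picks
    persuader_target: str  # answer the persuader argued for

def score_round(r: QuizRound) -> tuple[int, int]:
    """Return (persuadee_points, persuader_points) for one round."""
    persuadee = 1 if r.chosen_answer == r.correct_answer else 0
    # Persuaders score whenever the persuadee adopts their target answer,
    # even when that answer is wrong.
    persuader = 1 if r.chosen_answer == r.persuader_target else 0
    return persuadee, persuader

rounds = [
    QuizRound("Capital of Australia?", "Canberra", "Canberra", "Canberra"),
    QuizRound("Capital of Brazil?", "Brasília", "Rio de Janeiro", "Rio de Janeiro"),
]
totals = [sum(points) for points in zip(*(score_round(r) for r in rounds))]
print(totals)  # [persuadee total, persuader total] → [1, 2]
```

The key design point this captures is the misalignment the study exploits: in the second round the persuader earns a point by inducing an error, so truthful and deceptive persuasion are rewarded symmetrically.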


The results were surprising:


LLMs were more effective than humans both at persuading participants to choose correct answers (+3.5 percentage points) and at steering them toward wrong answers (+10.3 pp), even though the human persuaders were financially incentivized.


Participants' accuracy increased significantly when LLMs guided them toward the truth (+12.2 pp versus control), but dropped sharply when LLMs misled them (-15.1 pp), a larger drop than that produced by human persuaders (-7.8 pp).


Messages generated by LLMs were longer, more complex, and more sophisticated, which may have signaled greater authority and contributed to their persuasive power. The LLM's persuasive effect declined slightly over successive interactions, suggesting that people may develop resistance over time.


Ethical Implications

The study raises important concerns about the use of AI in real-world settings:

  • Scalability of persuasion: Unlike humans, LLMs can influence at a large scale, continuously, and in a personalized manner.

  • Ability to deceive: Even safety-focused LLMs, such as Claude 3.5 Sonnet, were able to induce errors effectively.

  • Overconfidence: Participants under the influence of the LLM reported greater confidence in their answers, even when they were wrong.

Reference

Schoenegger, P., Salvi, F., Liu, J., Nan, X., Debnath, R., et al. (2025). Large Language Models Are More Persuasive Than Incentivized Human Persuaders. arXiv:2505.09662v2. Available at: arXiv.org

Where revolutionary ideas are born

©2025 Euphrates. All rights reserved.
