La Era
Apr 16, 2026 · Updated 08:27 AM UTC
Technology

AI models produce repetitive creative outputs compared to humans, Duke study finds

New research reveals that while individual large language models can be creative, the collective outputs of various AI systems are significantly more homogenized than human responses.

Tomás Herrera

2 min read


Researchers at Duke University have found that commercial large language models (LLMs) produce creative outputs that are much more similar to one another than those produced by humans.

The study, published March 24 in PNAS Nexus, analyzed 22 different LLMs against a group of over 100 human participants using three standard creativity assessments.

Emily Wenger, the Cue Family Assistant Professor of Electrical and Computer Engineering at Duke, noted that while users may believe different models offer unique creative directions, the data suggests otherwise.

“This paper basically says no. LLMs are less creative as a population than humans,” Wenger said.

The homogenization of AI

The researchers used tests such as the Alternative Uses Test and the Divergent Association Task to measure divergent thinking. While individual AI models occasionally outperformed individual humans in specific tasks, the variety of responses across the entire group of AI models was strikingly low.
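The Divergent Association Task scores a list of unrelated nouns by how semantically far apart they are: the farther the average pairwise distance between the words' embeddings, the more divergent the thinking. As a rough illustration of that scoring idea, here is a minimal sketch; the tiny hand-made embedding table is hypothetical (the real task uses pretrained word vectors such as GloVe), and the function names are ours, not the study's.

```python
# Sketch of DAT-style scoring: average pairwise cosine distance
# between word embeddings, scaled to 0-100.
import itertools
import numpy as np

# Hypothetical 3-d vectors standing in for real pretrained embeddings.
EMBEDDINGS = {
    "cat":     np.array([0.9, 0.1, 0.0]),
    "dog":     np.array([0.8, 0.2, 0.1]),
    "galaxy":  np.array([0.0, 0.9, 0.4]),
    "justice": np.array([0.1, 0.3, 0.95]),
}

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def dat_score(words):
    """Average pairwise cosine distance across all word pairs, times 100."""
    vecs = [EMBEDDINGS[w] for w in words]
    dists = [cosine_distance(u, v)
             for u, v in itertools.combinations(vecs, 2)]
    return 100.0 * float(np.mean(dists))

# Near-synonyms score low; unrelated concepts score high.
print(dat_score(["cat", "dog"]))
print(dat_score(["cat", "galaxy", "justice"]))
```

On this toy table, the related pair ("cat", "dog") scores far lower than the unrelated triple, which is the intuition behind the test: a population whose answers cluster together, as the AI models' did, shows the low-variability pattern the researchers measured.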

Yoed Kenett, an associate professor at the Technion – Israel Institute of Technology, observed that the problem lies in the lack of variability.

“While LLMs appear to generate extremely original outputs, they are overly homogenized and not variable in their responses,” Kenett said.

The study suggests that because most commercial LLMs are trained on the same massive internet datasets, they gravitate toward a shared set of linguistic patterns. This convergence could potentially narrow the scope of human creative expression if the tools are overused.

Wenger warned that heavy reliance on these models could lead to a global smoothing of language and ideas.

“Over reliance on these tools will smooth the world’s work toward the same underlying set of words or grammar, tending to make writing all look the same,” Wenger said.

To maintain originality in product or concept development, Wenger recommends prioritizing human brainstorming over AI-generated suggestions.
