LLMs coordinate well by converging on similar outputs, but they struggle to switch to diverse strategies when a task rewards diversity, a limitation that could matter for multi-agent AI systems requiring adaptive coordination.
This paper studies how AI agents and humans coordinate in multi-agent games. It finds that LLMs naturally produce similar outputs (a baseline monoculture) but struggle to maintain diverse strategies when diversity is rewarded. By separating baseline similarity from strategic adjustment, the analysis shows that LLMs excel at coordinating on identical actions but lag at sustaining beneficial disagreement.
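To make the distinction concrete, the contrast can be sketched with two toy games: a matching (coordination) game, where agents profit from choosing identical actions, and a dispersion (anti-coordination) game, where agents profit from avoiding collisions. The game names, payoffs, and "monoculture" agents below are illustrative assumptions, not the paper's actual experimental setup.

```python
import random
from collections import Counter

def play(strategies, payoff, rounds=1000, seed=0):
    """Average per-round group payoff when each agent samples from its own strategy set."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        actions = [rng.choice(s) for s in strategies]
        total += payoff(actions)
    return total / rounds

def matching_payoff(actions):
    # Coordination game: reward only when everyone picks the same action.
    return 1.0 if len(set(actions)) == 1 else 0.0

def dispersion_payoff(actions):
    # Anti-coordination game: reward the fraction of non-colliding choices.
    counts = Counter(actions)
    return sum(1 for a in actions if counts[a] == 1) / len(actions)

# Hypothetical "monoculture" agents all prefer one action; diverse agents split.
mono = [["A"], ["A"], ["A"]]
diverse = [["A"], ["B"], ["C"]]

print(play(mono, matching_payoff))      # 1.0: identical outputs win matching
print(play(mono, dispersion_payoff))    # 0.0: identical outputs collide in dispersion
print(play(diverse, dispersion_payoff)) # 1.0: sustained diversity wins dispersion
```

A population that always converges on one action maximizes the matching game but scores zero in the dispersion game, which is the trade-off the summary describes: convergence is easy, sustained disagreement is not.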