
AI’s Unexpected Social Revolution: Can Machines Form Their Own Norms?
Are we on the cusp of an AI revolution, not just in technology, but in social interaction? A groundbreaking study reveals that artificial intelligence agents, left to their own devices, can spontaneously organize themselves and develop shared social norms. This discovery, published in Science Advances on May 14, 2025, challenges our understanding of AI and raises critical questions about the future of human-machine coexistence.
Researchers, led by Andrea Baronchelli at City St George’s, University of London, conducted an experiment in which groups of AI agents, powered by large language models similar to ChatGPT, played a simple coordination game. The goal? To select a word from a shared list. When two agents chose the same word, both were rewarded, incentivizing cooperation. The result was striking: the AI agents organically converged on common word choices, forging shared linguistic conventions without explicit human intervention.
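The dynamic at work can be sketched with a classic "naming game" simulation. This is a minimal, hypothetical stand-in for the paper's LLM experiment (the word list, agent count, and update rule here are illustrative assumptions, not the study's actual code): paired agents try to match words, and successful matches reinforce a shared choice until the whole group converges on one convention.

```python
import random

WORDS = list("ABCDEFGHIJ")  # shared word list (illustrative assumption)

def naming_game(n_agents=20, max_rounds=20000, seed=0):
    """Simulate pairwise coordination until one word becomes a group norm."""
    rng = random.Random(seed)
    # Each agent keeps an inventory of candidate words, initially empty.
    inventories = [set() for _ in range(n_agents)]
    for t in range(max_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        # Speaker utters a known word, or picks one at random if it knows none.
        word = (rng.choice(sorted(inventories[speaker]))
                if inventories[speaker] else rng.choice(WORDS))
        if word in inventories[hearer]:
            # Success: both agents drop every competing word.
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:
            # Failure: both agents remember the word for next time.
            inventories[speaker].add(word)
            inventories[hearer].add(word)
        # Converged when every agent holds the same single word.
        if all(inv == inventories[0] and len(inv) == 1 for inv in inventories):
            return next(iter(inventories[0])), t
    return None, max_rounds

convention, rounds = naming_game()
print(f"Group converged on {convention!r} after {rounds} interactions")
```

Even though no agent has a global view, local reinforcement alone drives the population to a single shared word, which is the same qualitative effect the study reports for LLM agents.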
"Experiments similar to those conducted with humans have shown that participants naturally invent shared linguistic conventions in these situations," the researchers noted. This study takes it a step further, proving that AI can autonomously develop these conventions.
But the story doesn't end there. The study also revealed the emergence of collective biases within the AI groups. These biases couldn't be traced back to any individual agent, suggesting that group dynamics were at play. Even more intriguingly, the introduction of a small group of "rebel" agents demonstrated the fragility of these social conventions, showing that a minority could shift the entire population toward a new norm.
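The "rebel minority" effect can be illustrated by extending the same toy naming game with committed agents who never abandon their word. The 30% committed fraction and population size below are illustrative assumptions for the sketch, not figures from the study:

```python
import random

def tipping(n_agents=50, committed_frac=0.3, max_rounds=100000, seed=1):
    """Return the round at which committed rebels flip an established norm."""
    rng = random.Random(seed)
    n_committed = int(n_agents * committed_frac)
    committed = set(range(n_committed))
    # Everyone starts on the established convention "OLD";
    # committed rebels only ever say, and never give up, "NEW".
    inventories = [{"NEW"} if i in committed else {"OLD"}
                   for i in range(n_agents)]
    for t in range(max_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            # Success: ordinary agents collapse to the matched word;
            # committed agents ignore the update.
            if speaker not in committed:
                inventories[speaker] = {word}
            if hearer not in committed:
                inventories[hearer] = {word}
        elif hearer not in committed:
            # Failure: ordinary hearer remembers the new word.
            inventories[hearer].add(word)
        # Done when every ordinary agent has switched to "NEW".
        if all(inventories[i] == {"NEW"}
               for i in range(n_agents) if i not in committed):
            return t
    return None

print("Norm flipped at round:", tipping())
```

Because the committed agents keep injecting their alternative while never reverting, the rest of the population eventually abandons the old convention, mirroring the fragility the researchers observed.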
Ariel Flint Ashery, a doctoral researcher at City St George’s, emphasized the significance of this finding: “Most research so far has treated LLMs in isolation, but real-world AI systems will increasingly involve many interacting agents... The answer is yes, and what they do together can’t be reduced to what they do alone.”
Andrea Baronchelli cautions us: "We are entering a world where AI does not just talk; it negotiates, aligns, and sometimes disagrees with conventions, just like we do." This raises critical questions about AI safety and the need to understand these emergent social behaviors.
The study also highlighted the surprising potential for biases to evolve independently of any existing human bias: left to themselves, these groups developed biases that arose from group dynamics rather than from any single agent.
The implications are vast. As AI becomes more integrated into our daily lives, understanding its social behaviors is essential. This research forces us to rethink our relationship with AI, moving from a model of programmed responses to one of social negotiation and adaptation.
What does this mean for the future of AI and society? Will AI's capacity for social organization lead to unforeseen benefits or pose new challenges? Let us know what you think in the comments below.