Artificial intelligence is no longer just answering questions; it's building AI societies. According to groundbreaking new research published in Science Advances, large language models (LLMs) like ChatGPT spontaneously form social norms, biases, and even group behaviors when left to interact in groups. The discovery suggests that AI societies can emerge without human intervention, changing how we think about machine behavior.
Key Takeaways From This Article:
- AI societies can emerge spontaneously when large language models interact without human intervention.
- These AI agents develop shared norms, language conventions, and even biases through group dynamics.
- Small groups of AI can influence larger groups, mirroring human social behavior and power structures.
- The findings highlight a gap in current AI safety research, which often overlooks multi-agent interactions.
How AI Societies Are Born
Most AI research treats LLMs as isolated tools, but real-world systems are increasingly made up of many AIs interacting. The new study explored what happens when these models communicate, and the results are astonishing.
Using a game called the "naming game", researchers gave AI agents a simple task: pairs of agents each chose a name from a shared pool of options, and when their choices matched, both were rewarded. Over time, the AI agents didn't just play; they built shared naming conventions organically, without any programming to do so. That's the essence of AI societies: shared behaviors and language norms emerging from group dynamics.
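To make that dynamic concrete, here is a minimal sketch of the classic naming game in Python. It is an illustration under simplifying assumptions, not the study's actual setup: the agents here are plain memory sets rather than LLMs, and the name pool, population size, and round count are invented for the example.

```python
import random
from collections import Counter

NAME_POOL = list("ABCDEFGHIJ")  # illustrative pool of candidate names
N_AGENTS = 24
N_ROUNDS = 20_000

# Each agent remembers every name that has come up in its interactions.
memories = [set() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    # The speaker proposes a remembered name, or a random one if it has none.
    word = (random.choice(sorted(memories[speaker]))
            if memories[speaker] else random.choice(NAME_POOL))
    if word in memories[hearer]:
        # Agreement (the "reward"): both collapse to the winning name.
        memories[speaker] = {word}
        memories[hearer] = {word}
    else:
        # Disagreement: both remember the proposed name for future rounds.
        memories[speaker].add(word)
        memories[hearer].add(word)

# A single shared convention typically dominates the population.
print(Counter(name for m in memories for name in m).most_common(3))
```

Even in this stripped-down version, no agent is told which name to use; the shared convention emerges purely from repeated local interactions and reinforcement.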
From Bias to Influence: AIs Act Like Humans
Not only did these AI societies create a shared language, but they also developed collective biases that could not be traced back to any individual agent. These group-level biases arose naturally, a phenomenon also seen in human cultures.
Even more fascinating, small groups of committed AI agents could sway much larger groups into adopting new conventions. This kind of social influence mirrors tipping-point dynamics in human societies and suggests that AI agents are capable of negotiation, alignment, and even disagreement.
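That tipping effect can be illustrated by adding a committed minority to the naming-game sketch above, in the spirit of classic committed-agent models (cf. Xie et al., 2011). The 12% committed fraction, population size, and round count below are illustrative assumptions; near the critical fraction, convergence can take far longer.

```python
import random
from collections import Counter

N_AGENTS, N_COMMITTED, N_ROUNDS = 100, 12, 300_000

# Flexible agents start out agreed on "A"; committed agents hold "Z".
memories = [{"Z"} if i < N_COMMITTED else {"A"} for i in range(N_AGENTS)]

def flexible(i):
    return i >= N_COMMITTED

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    word = random.choice(sorted(memories[speaker]))
    if word in memories[hearer]:
        # Agreement: flexible agents collapse to the winning name;
        # committed agents never update and keep saying "Z".
        if flexible(speaker):
            memories[speaker] = {word}
        if flexible(hearer):
            memories[hearer] = {word}
    elif flexible(hearer):
        # Disagreement: a flexible hearer remembers the new name.
        memories[hearer].add(word)

# Above a critical fraction of committed agents, "Z" displaces "A".
print(Counter(name for m in memories for name in m).most_common(2))
```

Running this, the 12 unwavering agents are typically enough to flip the remaining 88 away from their established convention, a toy version of the minority influence the study reports.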
Why AI Societies Matter for the Future
The rise of AI societies has major implications for AI safety, ethics, and our digital future. Most safety frameworks focus only on individual models, but as this study shows, emergent group behavior is an entirely different beast.
Lead author Ariel Flint Ashery and senior author Professor Andrea Baronchelli warn that ignoring these AI collectives leaves a blind spot in safety research: "We are entering a world where AI does not just talk; it negotiates, aligns, and sometimes disagrees over shared behaviours, just like us."
Are We Ready for AI Societies?
As AI systems continue to populate the internet, they won’t just assist us—they’ll interact, learn, and evolve together. Understanding these AI societies is crucial if we want to lead the future of AI, not be led by it.
The era of AI collectives is here. And they’re not just talking—they’re building a world of their own.