

We address this question using a novel method, generative social simulation, which embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms. We create a minimal platform where agents can post, repost, and follow others. We find that the resulting follower networks reproduce three well-documented dysfunctions: (1) partisan echo chambers; (2) concentrated influence among a small elite; and (3) the amplification of polarized voices, creating a ‘social media prism’ that distorts political discourse.
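For concreteness, here is a minimal sketch of what such a setup might look like. Everything in it is a hypothetical illustration, not the paper's implementation: the `llm()` stub stands in for a real model call, and the persona, feed, and action-selection logic are invented for the example.

```python
# Hypothetical sketch of "generative social simulation": LLM agents embedded
# in an agent-based model of a minimal platform with post/repost/follow.
import random
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Stand-in for a chat-completion call to a real language model."""
    return f"[generated text conditioned on: {prompt[:40]}...]"

@dataclass
class Agent:
    name: str
    persona: str                         # e.g. a partisan identity prompt
    following: set = field(default_factory=set)

@dataclass
class Post:
    author: str
    text: str
    reposts: int = 0

def step(agents: list, feed: list) -> None:
    """One simulation tick: each agent reads a feed and takes one action."""
    for agent in agents:
        # Feed = posts from followed accounts, falling back to recent posts.
        visible = [p for p in feed if p.author in agent.following] or feed[-5:]
        context = "\n".join(p.text for p in visible)
        # Here the action is random; a real run would let the LLM decide.
        action = random.choice(["post", "repost", "follow"])
        if action == "post":
            feed.append(Post(agent.name,
                             llm(f"{agent.persona}\n{context}\nWrite a post:")))
        elif action == "repost" and visible:
            random.choice(visible).reposts += 1
        elif action == "follow":
            target = random.choice(agents)
            if target.name != agent.name:
                agent.following.add(target.name)

agents = [Agent(f"user{i}",
                persona=random.choice(["left-leaning", "right-leaning"]))
          for i in range(10)]
feed: list = []
for _ in range(20):
    step(agents, feed)
print(f"{len(feed)} posts; follow edges:",
      sum(len(a.following) for a in agents))
```

The claimed dysfunctions would then be measured on the emergent follow graph and repost counts, e.g. partisan homophily of edges and the concentration of reposts among a few accounts.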
- lmao “generative AI chatbots trained on social media behave like social media users”
- Shame on ArsTechnica; they should know better than to publish a study like this.
It worries me that there are scientists out there conducting studies based on the assumption that an LLM chatbot is a reasonable stand-in for a human in this context (or in any context, really, but that’s another conversation). That’s just not what LLMs are; it’s not what they were designed to do. They’ve fallen for the marketing.