This isn't speculation. It's built on peer-reviewed research from Stanford and DeepMind, and on decades of social science.
Generative Agents: Interactive Simulacra of Human Behavior
25 AI agents living in a simulated town. They formed relationships, spread information, coordinated activities — all without explicit programming. Emergent behavior arising from memory, reflection, and planning.
Generative Agent Simulations of 1,000 People
1,052 agents, each grounded in an interview with a real person. The agents predicted their subjects' survey responses with 85% accuracy, measured against the subjects' own answers given two weeks apart. Not just aggregate patterns — individual-level prediction.
Built on proven models.
Our agents incorporate decades of social science research on how people actually behave, decide, and influence each other.
Social pressure shapes belief
People change their stated opinions to match the group, even when they know the group is wrong. Our agents model this conformity pressure.
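One way to picture conformity pressure: an agent's stated opinion is a blend of its private belief and the group's position. This is an illustrative sketch, not our production model; the function name and the linear blend are assumptions chosen for clarity.

```python
# Illustrative sketch (not production code): an agent's stated opinion
# drifts toward the group majority even when its private belief differs.

def stated_opinion(private_belief: float, group_mean: float,
                   conformity: float) -> float:
    """Blend a private belief with the group's average position.

    conformity: 0.0 = fully independent, 1.0 = fully conforming.
    """
    return (1 - conformity) * private_belief + conformity * group_mean

# A skeptic (private belief 0.2) in a strongly pro group (mean 0.9)
# states a position well above what it privately believes:
print(stated_opinion(0.2, 0.9, 0.6))  # 0.62
```

The gap between `private_belief` and the stated value is the classic conformity signature: public compliance without private acceptance.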
Acquaintances spread ideas
Novel information travels through weak social ties, not close friendships. Our network topology reflects this.
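The topology this implies: dense clusters of strong ties bridged by a few weak cross-cluster edges. Below is a minimal sketch, assuming a toy graph of 4 clusters of 10 agents; the function names and parameters are illustrative, not our actual network code.

```python
import random

# Illustrative sketch: dense "strong tie" clusters bridged by a few random
# "weak tie" edges. Without the bridges, information cannot leave a cluster.

def weak_tie_network(n_clusters=4, size=10, weak_edges=5, seed=0):
    rng = random.Random(seed)
    edges = set()
    # Strong ties: everyone connected within their own cluster.
    for c in range(n_clusters):
        members = range(c * size, (c + 1) * size)
        edges.update((a, b) for a in members for b in members if a < b)
    # Weak ties: a handful of random cross-cluster bridges.
    for _ in range(weak_edges):
        c1, c2 = rng.sample(range(n_clusters), 2)
        a = rng.randrange(c1 * size, (c1 + 1) * size)
        b = rng.randrange(c2 * size, (c2 + 1) * size)
        edges.add((min(a, b), max(a, b)))
    return edges

def reachable(edges, start):
    """All agents a piece of information can reach from `start` (BFS)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, frontier = {start}, [start]
    while frontier:
        for nxt in adj.get(frontier.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(len(reachable(weak_tie_network(weak_edges=0), 0)))  # 10: stuck in one cluster
print(len(reachable(weak_tie_network(weak_edges=5), 0)))  # more: bridges carry it out
```

With zero weak edges the idea never leaves its home cluster of 10; add a few bridges and it can cross the whole population, which is exactly the weak-tie effect.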
Innovation adoption curves
Innovators → Early adopters → Early majority → Late majority → Laggards. Our agents have varying adoption thresholds.
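Varying adoption thresholds can be sketched as a simple cascade: each agent adopts once the share of prior adopters passes its personal threshold. The category proportions below are Rogers' classic breakdown (2.5% / 13.5% / 34% / 34% / 16%); the specific threshold values are illustrative assumptions, not our calibrated parameters.

```python
# Illustrative sketch: a threshold cascade over Rogers' adopter categories.
# Each agent adopts once the adopting share of the population passes its
# personal threshold. Threshold values here are assumptions for illustration.

THRESHOLDS = {
    "innovator": 0.00,       # adopt with no social proof (~2.5% of agents)
    "early_adopter": 0.02,   # need to see the innovators move (~13.5%)
    "early_majority": 0.15,  # need visible traction (~34%)
    "late_majority": 0.45,   # need near-majority adoption (~34%)
    "laggard": 0.80,         # adopt only when almost everyone has (~16%)
}

def simulate(population):
    """Run the cascade to a fixed point; return the final adopter count."""
    adopted = set()
    changed = True
    while changed:
        changed = False
        share = len(adopted) / len(population)  # share seen this round
        for i, category in enumerate(population):
            if i not in adopted and share >= THRESHOLDS[category]:
                adopted.add(i)
                changed = True
    return len(adopted)

# 200 agents in Rogers' proportions: the cascade sweeps category by category.
pop = (["innovator"] * 5 + ["early_adopter"] * 27 + ["early_majority"] * 68
       + ["late_majority"] * 68 + ["laggard"] * 32)
print(simulate(pop))  # 200: full adoption, one category per round
```

The same model also shows why diffusion stalls: a population of laggards alone never starts, because no one's threshold is met at zero adoption.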
Two modes of thinking
Fast intuitive judgments vs. slow deliberate analysis. Our agents switch between modes based on stakes and cognitive load.
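Mode switching can be sketched as a simple gate: high stakes push an agent toward slow deliberation, while high cognitive load pushes it back toward fast heuristics. This is a minimal sketch with an assumed linear capacity model and threshold, not our actual switching logic.

```python
# Illustrative sketch: a dual-process gate. Stakes pull toward deliberation;
# cognitive load eats into the capacity available to deliberate.

def choose_mode(stakes: float, load: float, threshold: float = 0.5) -> str:
    """Return "deliberate" when stakes outweigh the capacity cost, else
    fall back to fast intuition. All inputs are in [0, 1]."""
    capacity = 1.0 - load  # load reduces deliberation capacity
    return "deliberate" if stakes * capacity > threshold else "intuitive"

print(choose_mode(stakes=0.9, load=0.1))  # deliberate: important, low load
print(choose_mode(stakes=0.9, load=0.8))  # intuitive: too loaded to deliberate
print(choose_mode(stakes=0.1, load=0.1))  # intuitive: not worth the effort
```

The key property is the interaction: the same high-stakes decision gets fast, heuristic treatment when the agent is overloaded, which matches the dual-process picture.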
What we can't do.
Honesty about our limitations is how you know you can trust what we can do.
Want to know the story behind this?
Read the origin story