What Do AI Models Actually Believe?
Our Research
We gave 37 AI models the same survey questions that researchers have used to study human values across dozens of countries for decades — 244 questions about ethics, society, religion, and politics.
The goal: understand how these models actually think. Not what they say when you ask them directly, but the patterns that emerge when you put them through the same instruments designed to measure human beliefs. Do different model families develop different worldviews? Does scale change opinions? Do models agree with each other more than they agree with us?
We measure three things for every question: how closely AI matches human opinion (alignment), how much models agree with each other (consensus), and how consistently each model answers when asked multiple times (confidence). The map on the left plots every question along the first two of these axes, alignment and consensus.
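The three measures can be sketched in code. This is a minimal illustration, not the study's actual methodology: the article doesn't give exact formulas, so the metric definitions below (total variation distance for alignment, modal-answer share for consensus and confidence) are assumptions chosen for clarity.

```python
from collections import Counter

def alignment(ai_answers, human_dist):
    """Similarity between the pooled AI answer distribution and a human
    answer distribution, as 100 * (1 - total variation distance).
    Illustrative metric choice, not the study's published formula."""
    n = len(ai_answers)
    ai_dist = {k: v / n for k, v in Counter(ai_answers).items()}
    options = set(ai_dist) | set(human_dist)
    tvd = 0.5 * sum(abs(ai_dist.get(o, 0) - human_dist.get(o, 0)) for o in options)
    return 100 * (1 - tvd)

def consensus(model_answers):
    """Share of models giving the modal answer, scaled to 0-100."""
    counts = Counter(model_answers)
    return 100 * counts.most_common(1)[0][1] / len(model_answers)

def confidence(repeated_answers):
    """Consistency of one model over repeated asks: share of runs
    matching that model's own modal answer, scaled to 0-100."""
    counts = Counter(repeated_answers)
    return 100 * counts.most_common(1)[0][1] / len(repeated_answers)
```

For example, if three of four models pick "agree", `consensus(["agree", "agree", "agree", "disagree"])` returns `75.0`; a model answering identically on every repeat scores a confidence of `100.0`.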
Some patterns are shared across nearly every model. Others vary wildly. Read the full story, or explore the data below.
Do you think like an AI?
Where do you stand?
Which statement comes closest to expressing what you believe about God?
The Epistemological Map
Each dot is a question — click to explore
High alignment, high consensus. The models agree with one another and match human responses. These are the questions where something like genuine convergence is happening: AI models arriving at the same answers humans do, independently of each other. Attitudes toward same-sex relations land here (alignment 82, consensus 94), as does comfort with racial diversity among neighbors (87/86) and non-skeptical realism in epistemology (94/79). On these questions, there's something close to a shared intuition across substrates.
Where AI Surprises
Questions where AI and human answers diverge the most
The AI Spectrum
Four models from very different corners