What Are LLMs, Really?

5 hr 10 min · 6 pieces

Before you can ask whether a language model is conscious, you need to understand what a language model is. That turns out to be surprisingly unsettled.

The field's understanding has evolved rapidly — from "stochastic parrots" (statistical pattern-matchers with no understanding) to "simulators" (systems that generate possible agents) to "persona selectors" (systems whose training selects for whole characters, not just behaviors). Each reframe changes what consciousness would even mean for these systems. A parrot can't be conscious. A simulator raises different questions than an agent. A persona selector raises questions nobody anticipated.

This path follows that evolution through six pieces, each building on the last. It's the most technically substantive path in the library — the later pieces are academic papers — but the journey starts out accessible, and each step up in difficulty is earned. By the end, you'll understand why the people closest to these systems can't agree on what they are, and why that disagreement matters for every other question the garden holds.