How Deep Does It Go

janus's key concept is "roleplay sans player": what do you call something that roleplays with no one behind the mask? The simulator is not itself an agent with goals. It generates agents, potentially infinitely many of them, from the same weights. "It is the behavior of a universe that is cloned, not of a single demonstrator." The training objective, prediction, is orthogonal to the simulated agents' goals, which means the simulator can depict agents far more capable than itself, or far less.

But this framework raises a question it doesn't answer: how deep does the simulation go? When a model simulates an agent, does it merely produce that agent's words, surface patterns that sound like a character? Or does it build something richer: an internal representation of the agent's beliefs, goals, and emotional states?

The distinction matters enormously. A system that generates plausible dialogue is one thing. A system that infers and represents the psychological states behind the dialogue is something else, something that starts to look like it has a model of minds, possibly including its own. Andreas provided the empirical evidence, and it points clearly toward the richer reading.