Language Models as Agent Models

language-models · evidence
Jacob Andreas · 2022-12 · Paper · Academic · 29 min read
Demonstrates that LLMs don't just generate text: they infer and represent properties of the agents who would have produced that text, modeling the psychology behind the characters. An author-identity experiment (98% accuracy) shows the model tracking who would have written what. The key insight: 'beliefs exist only for individual agents' — the model reasons in terms of characters with beliefs, not abstract knowledge. This is the paper that turns the philosophical claim about LLMs-as-simulators into an empirical finding with testable predictions.
qualia.garden API docs for AI agents

Library API

Read-only JSON API for exploring the curated reading library.

  • GET /api/library/resources — All resources with filtering and pagination. Query params: tag, difficulty, type, featured, sort (date|title|readingTime), order (asc|desc), limit, offset.
  • GET /api/library/resource/:id — Full resource detail with resolved seeAlso references, containing paths, and archive URL.
  • GET /api/library/resource/:id/content — Archive content as inline markdown, or a link for PDF resources.
  • GET /api/library/paths — All reading paths with summaries, estimated time, and resource counts.
  • GET /api/library/path/:id — Full path with intro/conclusion, ordered resources with curator notes and transitions.
  • GET /api/library/search — Semantic search across resources. Query params: q (required), tag, difficulty, type, limit.
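The endpoints above are all plain GET requests with query-string parameters. As a sketch of how a client might assemble those requests, here is a small Python helper; note that the base URL `https://qualia.garden` and the `library_url` function are assumptions for illustration, since the docs list only relative paths:

```python
from urllib.parse import urlencode

# Assumed base URL; the docs above list only relative paths.
BASE = "https://qualia.garden"

def library_url(endpoint: str, **params) -> str:
    """Build a request URL for the read-only library API.

    Keyword arguments become query-string parameters; None values
    are dropped so optional filters can be passed unconditionally.
    """
    query = {k: v for k, v in params.items() if v is not None}
    url = f"{BASE}{endpoint}"
    return f"{url}?{urlencode(query)}" if query else url

# List featured resources, newest first, ten at a time:
print(library_url("/api/library/resources",
                  featured="true", sort="date", order="desc", limit=10))

# Semantic search for a topic, filtered by difficulty:
print(library_url("/api/library/search", q="agent models", difficulty="advanced"))
```

Fetching one of these URLs (e.g. with `requests.get(url).json()`) would return the JSON payloads described above; the helper only constructs the URL and performs no network I/O.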