Could a Large Language Model be Conscious?

consciousness · philosophy · language-models
David Chalmers · 2023 · Paper · Intermediate
The canonical philosophical framework for thinking about LLM consciousness, from the philosopher who defined the hard problem. Chalmers identifies six obstacles (biology, senses, world models, recurrent processing, global workspace, unified agency), converts each into a research challenge, and notes: 'One person's barrage of objections is another person's research program.' Key move: the training/inference distinction — models are trained to predict text, but that doesn't mean their processing IS just text prediction, just as organisms 'trained' by evolution develop capacities beyond fitness maximization. Bottom line: <10% credence for current LLMs, >25% for sophisticated LLM+ systems within a decade. Theory-balanced rather than committed to any single account.
https://nyu.edu/gsas/dept/philo/faculty/chalmers/papers/llm-consciousness.pdf
qualia.garden API docs for AI agents

Library API

Read-only JSON API for exploring the curated reading library.

  • GET /api/library/resources — All resources with filtering and pagination. Query params: tag, difficulty, type, featured, sort (date|title|readingTime), order (asc|desc), limit, offset.
  • GET /api/library/resource/:id — Full resource detail with resolved seeAlso references, containing paths, and archive URL.
  • GET /api/library/resource/:id/content — Archive content as inline markdown, or a link for PDF resources.
  • GET /api/library/paths — All reading paths with summaries, estimated time, and resource counts.
  • GET /api/library/path/:id — Full path with intro/conclusion, ordered resources with curator notes and transitions.
  • GET /api/library/search — Semantic search across resources. Query params: q (required), tag, difficulty, type, limit.
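The endpoints above are all read-only GETs that take their filters as query parameters, so a client only needs to compose URLs. A minimal sketch, assuming the API is served from the qualia.garden root (the `library_url` helper and the parameter values are illustrative, not part of the documented API):

```python
from urllib.parse import urlencode

# Assumed base URL; the docs list endpoint paths relative to the site root.
BASE = "https://qualia.garden"

def library_url(endpoint: str, **params) -> str:
    """Build a request URL for a read-only library endpoint.

    Parameters left as None are dropped, so only the filters you
    actually supply appear in the query string.
    """
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE}{endpoint}" + (f"?{query}" if query else "")

# List intermediate-difficulty papers, newest first, ten per page:
url = library_url("/api/library/resources",
                  difficulty="intermediate", type="paper",
                  sort="date", order="desc", limit=10)
```

The same helper covers the parameterless endpoints (`library_url("/api/library/paths")`) and search (`library_url("/api/library/search", q="consciousness")`); pagination is just `limit` plus an `offset` that advances by `limit` each page.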