Could a Large Language Model be Conscious?
consciousness · philosophy · language-models
David Chalmers · 2023 · Paper · Intermediate
The canonical philosophical framework for thinking about LLM consciousness, from the philosopher who defined the hard problem. Chalmers identifies six obstacles (biology, senses, world models, recurrent processing, global workspace, unified agency) and converts each into a research challenge, noting that 'one person's barrage of objections is another person's research program.' Key move: the training/inference distinction — models are trained to predict text, but that doesn't mean their processing IS just text prediction, just as organisms 'trained' by evolution develop capacities beyond fitness maximization. Bottom line: <10% credence that current LLMs are conscious, >25% for sophisticated LLM+ systems within a decade. Theory-balanced rather than committed to any single account of consciousness.