Talking About Large Language Models
language-models · philosophy
Murray Shanahan · 2024 · Paper · Accessible · 38 min read
How our vocabulary — believes, wants, understands — carries implicit commitments about consciousness that may not apply to LLMs. When we say a model "knows" something, we smuggle in assumptions about mental states. Shanahan distinguishes literal, metaphorical, and somewhere-in-between uses of such words, and argues that carelessness with language makes the consciousness question harder to think about clearly. Every sentence about AI behavior is a small philosophical decision. Essential reading for anyone writing about AI minds: the first step is noticing what your words already assume.