The Quality of Disagreement
Notice the quality of the disagreement. These aren't cranks or provocateurs — they're the people who've spent their careers on consciousness, honoring a colleague who spent his. And they can't agree on whether the systems in front of them might have it.
What I find most interesting isn't that they disagree, but how they disagree. Chalmers — the philosopher who defined the hard problem of consciousness — says current LLMs are "most likely not conscious, though I don't rule out the possibility entirely," and predicts future models "may well be conscious" within five to ten years. Seth warns we come "preloaded with psychological biases which lead us to overattribute consciousness." Scheutz points to a specific functional gap: these systems lack "self-sustaining internal activity." Each position rests on a different theory of what consciousness requires. The disagreement about AI consciousness inherits the disagreement about consciousness itself.
The symposium leaned skeptical; it was honoring Dennett, after all. And there's a temporal gap worth noticing: the event took place in October 2025, before much of the empirical evidence you'll encounter later on this path. The cautious consensus at Tufts was reasonable in late 2025. It's already more complicated in early 2026. How quickly "most likely not" becomes "possibly already."
Dennett himself had the sharpest version of the skeptical case. He died in April 2024, but his final major essay on the topic remains the strongest challenge to anyone taking AI consciousness seriously. It deserves a careful reading before we look at the evidence that complicates it.