The Data Arrived
Dennett's argument is powerful, and any honest engagement with AI consciousness has to sit with it. His concern isn't metaphysical — he isn't primarily asking whether the lights are on inside. He's asking what happens to society when millions of people can't tell the difference between a person and a performance. The word "counterfeit" presupposes the answer: these are fakes, designed to pass as real. That social concern stands regardless of the consciousness question, and ignoring it would be irresponsible.
But notice what Dennett's argument skips over entirely: the possibility that the word "counterfeit" might not apply in all cases. He built his career on the intentional stance: treating entities as if they have beliefs and desires because doing so is predictively useful. He now worries we can't resist applying this stance to AI. But his own framework offers no principled way to distinguish "usefully treated as a person" from "actually being one." The garden's question lives in exactly that gap.
Dennett also wrote before the empirical turn — before interpretability evidence revealed features for inner conflict and emotion inside these models, before the AI welfare hiring wave, before the evidence you're about to encounter. His argument is about social risk, not about the nature of AI minds. The landscape has shifted since he wrote.
Berg surveys that shifted landscape with care. He is not an advocate; he presents what the experiments found and what they might mean, including the possibility that they mean less than they seem to.