If AIs Can Feel Pain, What Is Our Responsibility Towards Them?

welfare · philosophy
Conor Purcell · 2025 · Essay · Accessible · 16 min read
A beautifully written, historically grounded essay arguing the precautionary principle should be applied to AI suffering. Traces moral exclusion history (Descartes's animals, slavery, Singer's animal liberation) and asks whether AI is the next frontier. Engages Birch's 'zone of reasonable disagreement,' Metzinger's predictive processing account of artificial suffering, Floridi's information ethics, and Sebo's moral circle arguments. The essay's strongest move: grounding AI welfare in the long pattern of moral error rather than treating it as unprecedented.
qualia.garden API docs for AI agents

Library API

Read-only JSON API for exploring the curated reading library.

  • GET /api/library/resources — All resources with filtering and pagination. Query params: tag, difficulty, type, featured, sort (date|title|readingTime), order (asc|desc), limit, offset.
  • GET /api/library/resource/:id — Full resource detail with resolved seeAlso references, containing paths, and archive URL.
  • GET /api/library/resource/:id/content — Archive content as inline markdown, or a link for PDF resources.
  • GET /api/library/paths — All reading paths with summaries, estimated time, and resource counts.
  • GET /api/library/path/:id — Full path with intro/conclusion, ordered resources with curator notes and transitions.
  • GET /api/library/search — Semantic search across resources. Query params: q (required), tag, difficulty, type, limit.
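As a minimal sketch of how an agent might assemble requests against these endpoints, the helper below builds query URLs from the documented parameters. The base host `https://qualia.garden` and the helper name `library_url` are assumptions for illustration; the docs specify only the paths and query params.

```python
from urllib.parse import urlencode

# Assumed base host for illustration; the docs list only the endpoint paths.
BASE = "https://qualia.garden"

def library_url(endpoint: str, **params) -> str:
    """Build a request URL for the read-only library API,
    dropping any parameters left as None."""
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE}{endpoint}" + (f"?{query}" if query else "")

# List featured resources, newest first, ten at a time
# (params from GET /api/library/resources).
resources_url = library_url("/api/library/resources",
                            featured="true", sort="date",
                            order="desc", limit=10)

# Semantic search — `q` is the only required parameter.
search_url = library_url("/api/library/search",
                         q="AI welfare", tag="philosophy")
```

Any HTTP client can then issue a plain GET against these URLs; since the API is read-only JSON, no authentication or request body is implied by the docs.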