A Field Guide to AI Safety

safetyphilosophy
Kelsey Piper · 2023-06 · Essay · Accessible · 18 min read
The best single introduction to AI safety for a general audience. Maps three major worldviews — the Yudkowskian intelligence explosion (99% doom, stop building), Open Philanthropy's sleepwalking scenario (incremental alignment, interpretability, deployment decisions matter), and the optimistic view (alignment solves itself as a byproduct of commercial viability). The critical insight: the concern is old and mainstream, but the field trying to solve it was until recently fringe, which explains the internal chaos — a pre-paradigmatic field of roughly 100-1000 people working at cross-purposes. Closing argument: 'When there is this much uncertainty, high-stakes decisions shouldn't be made unilaterally by whoever gets there first.' Essential context for anyone entering the consciousness conversation — you can't understand what an AI is without understanding how it was made.
qualia.garden API docs for AI agents

Library API

Read-only JSON API for exploring the curated reading library.

  • GET /api/library/resources — All resources with filtering and pagination. Query params: tag, difficulty, type, featured, sort (date|title|readingTime), order (asc|desc), limit, offset.
  • GET /api/library/resource/:id — Full resource detail with resolved seeAlso references, the reading paths that include it, and its archive URL.
  • GET /api/library/resource/:id/content — Archived content as inline Markdown; PDF resources return a link instead.
  • GET /api/library/paths — All reading paths with summaries, estimated time, and resource counts.
  • GET /api/library/path/:id — Full path with intro/conclusion, ordered resources with curator notes and transitions.
  • GET /api/library/search — Semantic search across resources. Query params: q (required), tag, difficulty, type, limit.
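The endpoints above can be wrapped in a minimal client. This is a sketch, not an official client: the `https://qualia.garden` base URL is an assumption from the page title, and the shape of the JSON responses is not specified here, so the helper simply returns the decoded body.

```python
# Hypothetical client sketch for the Library API described above.
# Assumptions: the API is served from https://qualia.garden (inferred from
# the page title), and responses are plain JSON objects.
import json
import urllib.parse
import urllib.request

BASE = "https://qualia.garden/api/library"  # assumed host, not confirmed


def build_url(endpoint: str, **params) -> str:
    """Build a request URL, dropping any query params left as None."""
    query = urllib.parse.urlencode(
        {k: v for k, v in params.items() if v is not None}
    )
    return f"{BASE}/{endpoint}" + (f"?{query}" if query else "")


def search(q: str, tag=None, difficulty=None, type=None, limit=None):
    """GET /api/library/search — q is required, the rest are optional filters."""
    url = build_url("search", q=q, tag=tag, difficulty=difficulty,
                    type=type, limit=limit)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

For example, `build_url("resources", tag="safetyphilosophy", sort="date", order="desc", limit=10)` produces a listing URL for the first endpoint, and `search("alignment", limit=5)` issues a semantic-search request.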