Why “Everyone Dies” Gets AGI All Wrong
2025-11-30
Ben Goertzel critiques Yudkowsky and Soares's recent book, which claims AGI will inevitably kill everyone. He argues the book recycles a decades-old doomsday narrative that has not changed, contrasting his own extensive hands-on AGI development experience with Yudkowsky's primarily theoretical warnings. While Goertzel agrees that AGI development is advancing toward the 2029 timeframe, he contends that the deterministic doom scenario ignores the practical complexity of building safe systems and overlooks how AGI capabilities are likely to emerge from evolving AI technologies rather than from isolated theoretical frameworks.