Why “Everyone Dies” Gets AGI All Wrong

Ben Goertzel critiques Yudkowsky and Soares's recent book, which claims that AGI will inevitably kill everyone, arguing that it recycles a decades-old doomsday narrative that has not changed over the years; he contrasts his own extensive hands-on AGI development experience with Yudkowsky's primarily theoretical warnings. Goertzel contends that although AGI development is advancing toward roughly a 2029 timeframe, the deterministic doom scenario ignores the practical complexity of building safe systems and overlooks how AGI capabilities are likely to emerge from evolving AI technologies rather than from isolated theoretical frameworks.

Visit Original Article →
