Why Superintelligence Leads to Extinction - the argument no one wants to make

Posted by Malor777 | r/collapse

Most arguments about AI and extinction focus on contingency: “if we fail at alignment, if we build recklessly, if we ignore warnings, then catastrophe may follow.”

My argument is simpler and harder to avoid. Even if we try to align AGI, we can’t win. The very forces that will create superintelligence - capitalism, competition, the race to optimise - guarantee that alignment cannot hold.

Superintelligence doesn’t just create risk. It creates an inevitability. Alignment is structurally impossible, and extinction is the terminal outcome.

I’ve written a book-length argument setting out why. It’s free to read, download, or listen to, and a paperback is available for those who prefer one. I don’t want approval, and I’m not selling attention. I want people to see the logic for themselves.

“Humanity is on the verge of creating a genie, with none of the wisdom required to make wishes.”

- Driven to Extinction: The Terminal Logic of Superintelligence

Get it here.