Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, usually understood as the risk of human extinction. We argue that suffering risks (s-risks), in which an adverse outcome would bring about severe suffering on an astronomical scale, are comparable to existential risks in both severity and probability. Just as with existential risks, s-risks can be caused as well as reduced by superintelligent AI.


How Feasible Is the Rapid Development of Artificial Superintelligence?

Two crucial questions in discussions about the risks of artificial superintelligence are: 1) how much more powerful could an AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, this article reviews the literature on human expertise and intelligence and discusses its relevance for AI.

