Our mission is to identify cooperative and effective strategies to reduce involuntary suffering. We believe that in a complex world where the long-run consequences of our actions are highly uncertain, such an undertaking requires foundational research. Currently, our research focuses on reducing risks of dystopian futures in the context of emerging technologies. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible.

Gains from Trade through Compromise
When agents with differing values compete, they can often find it mutually advantageous to compromise rather than continue zero-sum conflict. Potential ways of encouraging cooperation include promoting democracy, tolerance, and (moral) trade. Because a future without compromise could be many times worse than a future with it, advancing compromise seems an important undertaking.





Superintelligence as a Cause or Cure for Risks of Astronomical Suffering
Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, usually understood as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about suffering on an astronomical scale, are comparable in both severity and probability. Just as with existential risks, s-risks can be caused as well as reduced by superintelligent AI.





Suffering-Focused AI Safety: In Favor of “Fail-Safe” Measures
AI outcomes where something goes wrong may differ enormously in the amounts of suffering they contain. An approach that tries to avert the worst of those outcomes seems especially promising because it is currently more neglected than classical AI safety efforts, which aim for a highly specific "best-case" outcome.





Multiverse-wide Cooperation via Correlated Decision Making
Some decision theorists argue that when playing a prisoner's dilemma-type game against a sufficiently similar opponent, we should cooperate to make it more likely that our opponent also cooperates. This idea, which Hofstadter calls superrationality, has strong implications when combined with the insight from modern physics that we live in a large universe or multiverse of some sort.
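
To make the superrational reasoning concrete, here is a minimal sketch (not taken from the post): it computes expected payoffs in a one-shot prisoner's dilemma under the assumption that the counterpart copies our move with probability p and plays the opposite move otherwise. The payoff matrix and the correlation model are illustrative assumptions.

```python
# Illustrative sketch, not from the post: expected payoffs in a one-shot
# prisoner's dilemma when the counterpart's choice correlates with ours.
# The payoff matrix and correlation model are assumptions for illustration.

# Payoffs to "me" for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def expected_payoff(my_move: str, p: float) -> float:
    """Expected payoff if the counterpart copies my move with probability p
    and plays the opposite move with probability 1 - p."""
    opposite = "D" if my_move == "C" else "C"
    return p * PAYOFF[(my_move, my_move)] + (1 - p) * PAYOFF[(my_move, opposite)]

for p in (0.0, 0.5, 0.8, 1.0):
    ev_c, ev_d = expected_payoff("C", p), expected_payoff("D", p)
    better = "cooperate" if ev_c > ev_d else "defect"
    print(f"p = {p:.1f}: EV(C) = {ev_c:.2f}, EV(D) = {ev_d:.2f} -> {better}")
```

With these toy payoffs, cooperation has the higher expected value once p exceeds 5/7 (about 0.71): sufficient similarity between the decision makers flips which action looks best.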




Risk factors for s-risks
Traditional disaster risk prevention has a concept of risk factors. These factors are not risks in and of themselves, but they increase either the probability or the magnitude of a risk. For instance, inadequate governance structures do not cause a specific disaster, but if a disaster strikes, they may impede an effective response, thus increasing the damage. Rather than considering individual scenarios of how s-risks could occur, which tends to be highly speculative, this post looks at risk factors for s-risks: factors that would make them more likely or more severe.
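
As a toy illustration of the distinction (my framing, not from the post): if expected harm is probability times magnitude, a risk factor is anything that raises either term without itself being a source of harm. All numbers below are made up.

```python
# Toy illustration (not from the post): a risk factor raises the probability
# or the magnitude of a risk without being a risk in itself.

def expected_harm(probability: float, magnitude: float) -> float:
    return probability * magnitude

baseline = expected_harm(0.01, 100)         # 1.0
weak_governance = expected_harm(0.01, 300)  # 3.0: same probability, but an
                                            # impeded response raises the damage
print(baseline, weak_governance)
```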

Challenges to implementing surrogate goals
Surrogate goals might be one of the most promising approaches to reduce (the disvalue resulting from) threats. The idea is to add to one’s current goals a surrogate goal that one did not initially care about, hoping that any potential threats will target this surrogate goal rather than what one initially cared about. In this post, I will outline two key obstacles to a successful implementation of surrogate goals.
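
A minimal sketch of the mechanism (my own toy model, not from the post): an agent declares a surrogate goal, a threatener aims at whatever the agent visibly values most, and an executed threat then damages only the surrogate. The classes, values, and targeting rule are all illustrative assumptions.

```python
# Toy model of surrogate goals (my own sketch, not from the post). The agent,
# the threatener's targeting rule, and all numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Agent:
    true_value: float = 100.0       # disvalue if the original goal is thwarted
    surrogate_value: float = 100.0  # declared disvalue of the surrogate goal
    has_surrogate: bool = False

def threat_target(agent: Agent) -> str:
    """The threatener aims at whatever the agent visibly cares about.
    A credible surrogate goal valued at least as highly as the original
    goal draws the threat onto itself."""
    if agent.has_surrogate and agent.surrogate_value >= agent.true_value:
        return "surrogate"
    return "true goal"

def harm_if_executed(agent: Agent) -> float:
    """Harm to the original goal if the threat is carried out."""
    return 0.0 if threat_target(agent) == "surrogate" else agent.true_value

print(harm_if_executed(Agent(has_surrogate=False)))  # 100.0: real harm
print(harm_if_executed(Agent(has_surrogate=True)))   # 0.0: threat hits surrogate
```

The sketch takes the surrogate's credibility for granted; making that assumption hold in practice is presumably where the implementation obstacles arise.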

A framework for thinking about AI timescales
To steer the development of powerful AI in beneficial directions, we need an accurate understanding of how the transition to a world with powerful AI systems will unfold. A key question is how long such a transition (or “takeoff”) will take.