Our mission is to identify cooperative and effective strategies to reduce involuntary suffering. We believe that in a complex world where the long-run consequences of our actions are highly uncertain, such an undertaking requires foundational research. Currently, our research focuses on reducing risks of dystopian futures in the context of emerging technologies.
Multiverse-wide Cooperation via Correlated Decision Making
Some decision theorists argue that when playing a prisoner's dilemma-type game against a sufficiently similar opponent, we should cooperate to make it more likely that our opponent also cooperates. This idea, which Hofstadter calls superrationality, has strong implications when combined with the insight from modern physics that we live in a large universe or multiverse of some sort.
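The underlying logic can be sketched with a toy calculation. In the sketch below, the payoff numbers and the "mirroring probability" are illustrative assumptions, not taken from the article: if my opponent's choice is correlated with mine, cooperating can have a higher expected payoff than defecting once the correlation is strong enough.

```python
# Toy model: a one-shot prisoner's dilemma where the opponent mirrors
# our action with probability p_mirror. Payoff values are assumptions.
R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment (row player)

def expected_payoff(action: str, p_mirror: float) -> float:
    """Expected payoff when the opponent copies our action with probability p_mirror."""
    if action == "cooperate":
        return p_mirror * R + (1 - p_mirror) * S
    return p_mirror * P + (1 - p_mirror) * T

# Setting the two expected payoffs equal and solving for p_mirror gives the
# correlation above which cooperating dominates:
threshold = (T - S) / ((T - S) + (R - P))  # = 5/7 for these payoffs
```

With zero correlation this reduces to the ordinary dilemma, where defection dominates; with perfect correlation, cooperating is clearly better. The interesting region is in between.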
Risks of Astronomical Future Suffering
Space colonization would likely increase rather than decrease total suffering. Because many people nonetheless care about humanity's spread into the cosmos, we should reduce risks of astronomical future suffering without opposing others' spacefaring dreams. In general, we recommend focusing on making sure that an intergalactic future will be good if it happens, rather than on making sure there will be such a future.
Suffering-Focused AI Safety: Why “Fail-Safe” Measures Might be Our Top Intervention
AI outcomes where something goes wrong may differ enormously in the amounts of suffering they contain. An approach that tries to avert the worst of those outcomes seems especially promising because it is currently more neglected than classical AI safety efforts, which aim for a highly specific "best-case" outcome.
Gains from Trade through Compromise
When agents of differing values compete, they may often find it mutually advantageous to compromise rather than continue to engage in zero-sum conflicts. Potential ways of encouraging cooperation include promoting democracy, tolerance, and (moral) trade. Because a future without compromise could be many times worse than a future with it, advancing compromise seems an important undertaking.
A reply to Thomas Metzinger’s BAAN thought experiment
This is a reply to Metzinger’s essay on Benevolent Artificial Anti-natalism (BAAN), which appeared on EDGE.org (7.8.2017). Metzinger invites us to consider a hypothetical scenario where smarter-than-human artificial intelligence (AI) is built with the goal of assisting us with ethical deliberation. Being superior to us in its understanding of how our own minds function, the…
Uncertainty smooths out differences in impact
Suppose you investigated two interventions, A and B, and came up with estimates of how much impact each will have. Your best guess is that A will spare a billion sentient beings from suffering, while B will spare "only" a thousand. Should you actually believe that A is many orders of magnitude more effective than B?
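One way to make this intuition precise is Bayesian shrinkage: the noisier an estimate, the more it gets pulled toward the prior, which compresses apparent orders-of-magnitude gaps. A minimal sketch, using a normal prior and normal noise on log10(impact) with entirely hypothetical variance numbers:

```python
# Normal-normal shrinkage on log10(impact). The prior and noise variances
# below are hypothetical choices for illustration, not from the article.
def shrink_log_estimate(log_estimate: float,
                        prior_mean: float = 0.0,
                        prior_var: float = 4.0,
                        noise_var: float = 9.0) -> float:
    """Posterior mean of log-impact: a weighted average of estimate and prior."""
    w = prior_var / (prior_var + noise_var)  # weight placed on the estimate
    return prior_mean + w * (log_estimate - prior_mean)

# A's naive estimate: 10^9 beings spared; B's: 10^3.
gap_before = 9 - 3                                        # six orders of magnitude
gap_after = shrink_log_estimate(9) - shrink_log_estimate(3)  # far smaller
```

Because both estimates are shrunk toward the same prior by the same factor, the posterior gap between A and B is much smaller than the naive gap, even though A still comes out ahead.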
Arguments for and against moral advocacy
This post analyses key strategic questions on moral advocacy, such as: What does moral advocacy look like in practice? Which values should we spread, and how? How effective is moral advocacy compared to other interventions such as directly influencing new technologies? What are the most important arguments for and against focusing on moral advocacy?