The Foundational Research Institute (FRI) conducts research on how to best reduce the suffering of sentient beings in the long-term future. We publish essays and academic articles, and advise individuals and policymakers. Our focus is on exploring effective, robust and cooperative strategies to avoid risks of dystopian futures and working toward a future guided by careful ethical reflection. Our scope ranges from foundational questions about ethics, consciousness and game theory to policy implications for global cooperation or AI safety.
Reflectiveness, values and technology
The term “dystopian futures” evokes associations with cruel leadership and totalitarian regimes. But dystopian situations can arise without evil intent. It may suffice that people’s good intentions are not strong enough, or that they are not sufficiently backed by foresight and reflectiveness. Especially in combination with novel, game-changing technologies, this dynamic can prove disastrous.
For example, our attitudes towards non-human animals are much better now than they were in medieval times or in the early modern era when it was not uncommon for animals to be tortured for public amusement. Our values have improved greatly, and yet we harm vastly more animals through our practices than ever before. As insights in fields like veterinary medicine, nutrition or agricultural chemistry enabled more efficient farming methods, the economies of supply and demand simply took over.
Technology, on the one hand, allows us to reduce (sometimes even eliminate) tremendous amounts of suffering – e.g. with antibiotics, or through cultured meat, which might make animal agriculture obsolete. On the other hand, technological progress has enabled moral catastrophes like factory farming, firebombing, and concentration camps. One risk of technological progress is the potential for misuse; but perhaps more importantly, technological progress generally raises the moral stakes for humanity: the ramifications of suboptimal values or insufficient (societal) reflectiveness become ever more worrisome the greater our technological abilities. We can picture an outcome analogous to factory farming, but scaled up with space-faring technology and potentially superhuman artificial intelligence.
At FRI, our goal is to identify the best interventions to reduce such risks of astronomical future suffering (s-risks).
Dealing with uncertainty
Unfortunately, even ensuring that an intervention reduces suffering rather than increases it is far from trivial. Difficulties arise because we cannot focus our analysis on the intended consequences alone; we also have to factor in side effects and the ways things could go wrong. A complete analysis must consider the consequences of our decisions not just a few years down the line, but all the way into the distant future. Because this is a highly ambitious endeavor, prioritization research should seek to identify interventions that are robust across a broad range of possibilities and scenarios.
Our current research focus is on the implications of smarter-than-human artificial intelligence. We believe that no other technology has the potential to cause comparably large and lasting effects on the shape of the future. First, in the pursuit of whatever goals it is equipped with, an artificial superintelligence could invent all kinds of new technologies on its own, including technologies for rapid space colonization. Second, such a superintelligence would be far superior to human societies in goal-preservation and self-preservation, which suggests that by influencing its goal function, we might be able to predictably affect the future for thousands, perhaps even millions of years to come.
One of the most promising interventions we have identified so far is working on safety mechanisms for AI development, especially those aimed at preventing dystopian outcomes involving astronomical amounts of suffering. Other promising interventions might be found in the areas of international cooperation and differential intellectual progress.
FRI’s primary ethical focus is the reduction of involuntary suffering (Suffering-Focused Ethics, SFE). This includes human suffering, but also the suffering of non-human animals and of artificial minds in the future. In accordance with a diverse range of value systems, we believe that suffering – especially torture-level suffering, to whose existence and continuation the victim would under no circumstances consent – cannot be outweighed (or cannot easily be outweighed) by large amounts of happiness. While this leads us to prioritize the reduction of suffering, we also value happiness, flourishing, and the fulfillment of people’s life goals. Within a framework of commonsensical value pluralism and a strong focus on cooperation, our goal is to ensure that the future contains as little involuntary suffering as possible. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible.
Partners and affiliations
By itself, research has no effect on the world: people have to act differently on the basis of relevant findings for strategic research to have an impact. We therefore work in close collaboration with the Effective Altruism Foundation (EAF), which is building a movement of people concerned with effective suffering reduction. EAF’s project selection, fundraising efforts, and charity recommendations are informed by FRI’s research. We exchange ideas and research with others in the effective altruism community who are focused on improving the long-term trajectory of our civilization.
FRI is an independent research organization: Our funding is exclusively provided by individual donors interested in supporting our research. We avoid pressures intrinsic to governmental grant regulations and academia and are free to choose research topics based on their relevance to our mission.
We are aware that research comes with opportunity costs: if we robustly conclude that our understanding of the optimal paths to impact is unlikely to change with additional investigation, we will terminate research at FRI and direct all remaining funds toward the implementation of our recommendations.
How to get involved
We believe that there is still a lot of strategic research to do, involving many known and likely also unknown “crucial considerations” whose discovery could radically change the interventions we recommend. Funding our research is a highly valuable way to help make progress.
- Our Mission
- The Case for Suffering-Focused Ethics
- Reducing Risks of Astronomical Suffering: A Neglected Priority
- Altruists Should Prioritize Artificial Intelligence