Open Research Questions

This page is out-of-date. You can read about our current priority areas here. Our research agenda on cooperation, conflict, and transformative artificial intelligence can be found here.


There are a number of crucial considerations for reducing suffering in humanity's future. This page presents a ranked list of topics that the Center on Long-Term Risk considers important to investigate. Let us know if you'd like to help research these topics. Some are most appropriately addressed by reviewing existing literature and summarizing it on Wikipedia. Other topics require novel exploration.

Top questions

Suffering from controlled vs. uncontrolled artificial intelligence

  • Priority: 10/10
  • Output format: Mostly novel research

It's likely that artificial intelligence (AI) in some form will hold the reins of power over Earth's future within the coming centuries, barring economic or societal collapse in the interim. Depending on the dynamics of how AI is developed and how unpredictable AI behaviors are, humans may keep their hands on the steering wheel of how AI is shaped, or AI might take a direction of its own due to economically outcompeting humans, oversights by its programmers, or other factors.

Organizations like the Machine Intelligence Research Institute and the Future of Humanity Institute have explored the implications of various AI takeoff scenarios for human flourishing, but less attention has been given to the implications of various types of AIs for future suffering. It's plausible that some AI trajectories will cause significantly more suffering than others, but which ones?

Brian Tomasik has sketched some of his guesses about ways in which different types of AIs would cause suffering, but this is just a start. We need a thorough research program on this topic. Some relevant questions include:

  • How convergent are various types of computations for any type of advanced AI civilization? What fraction of its light cone would an AI devote to learning and other instrumental computations vs. what fraction would go toward creating the structures that it intrinsically values?
  • What kinds of computations are likely to be run only by human-controlled AI? By uncontrolled AI?
  • Would uncontrolled AIs use animal-like robots or lower-level nanotechnology to accomplish most of their engineering tasks? Would low-level nanotech suffer less than robots? What does this imply about the extent of instrumental suffering given uncontrolled AI?
  • Would AIs run lots of simulations for scientific purposes? How many computing resources would they require to achieve what level of accuracy?
  • Develop a taxonomy for AI types that's broader than just the distinction between controlled vs. uncontrolled. For example, human-controlled AI could mean AI where decisions are made democratically, in an authoritarian fashion, or by economic competition. Uncontrolled AI could include maximizers of something, minimizers, societies of many AI agents, and so on. Each of these more detailed AI scenarios may involve different levels of expected suffering.

We should also explore whether there are particular forms of AI-safety research that are especially well targeted at reducing suffering. For instance, are there ways we can ensure that even if AIs fail to achieve human goals, they at least "fail safe" and don't cause astronomical amounts of suffering? And even if suffering reducers don't support AI safety wholesale (which, as mentioned, seems unlikely), are there particular components of AI safety that they would support and should promote further?

Suffering-focused ethics

  • Priority: 8/10

Many views in ethics and value theory see preventing suffering as particularly important. Such views include negative utilitarianism as well as other positions in population ethics, axiology, and normative ethics. New research in this vein, or presentations of such views to a general audience, can build on the works we list in our bibliography. Below are examples of more specific topics.

  • Overview of suffering-focused views.
    • Create a bibliography of suffering-focused views. Priority 8/10 (more).
    • Improve this Wikipedia article on negative utilitarianism, and this one on negative consequentialism. Priority 8/10.
  • Antifrustrationism.
    • Christoph Fehige proposed antifrustrationism, according to which a frustrated preference is bad, but the existence of a satisfied preference is not better than if the preference didn’t exist in the first place. Several authors have objected to antifrustrationism. How could a proponent of antifrustrationism respond? Priority 8/10 (more).
    • Improve this Wikipedia article on antifrustrationism. Priority 8/10.
  • Descriptive ethics. What fraction of people hold suffering-focused views? What are people's opinions on various negative-leaning thought experiments like Omelas? See also the essay Descriptive Ethics and Its Relevance for Cause Prioritization. Priority 8/10.
  • Tradeoffs between good and bad parts of lives. In discussions about the disvalue of bad parts of life compared to the value of good parts of life, one idea that comes up is what tradeoffs someone makes or would make. A person might say “I would accept 1 day of torture in exchange for living 10 extra happy years.” What, if anything, can be concluded from the actual or hypothetical tradeoffs people make? (A toy calculation of the exchange rates implied by such statements appears after this list.) Priority 8/10 (more).
  • Applications. What are interesting practical implications of suffering-focused views? An example is the essay Omelas and Space Colonization. Priority 8/10.
  • More research questions on suffering-focused ethics.
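
As a toy illustration of the tradeoff question above, the sketch below (in Python) converts hypothetical stated tradeoffs into the exchange rates they imply between time spent suffering and time spent happy. The survey answers and the day-based framing are assumptions made purely for illustration, not data from any actual study.

```python
# Hypothetical stated tradeoffs: (days of torture accepted, extra happy years demanded).
# These answers are made up for illustration only.
stated_tradeoffs = [
    (1, 10),    # "1 day of torture for 10 extra happy years"
    (1, 1000),  # a strongly suffering-focused respondent
    (7, 1),     # a respondent who weights happiness heavily
]

for torture_days, happy_years in stated_tradeoffs:
    happy_days = happy_years * 365.25
    # Implied exchange rate: how many happy days are judged to outweigh one day of torture.
    rate = happy_days / torture_days
    print(f"{torture_days} torture day(s) vs. {happy_years} happy year(s): "
          f"~{rate:,.0f} happy days per torture day")
```

Even under these simple assumptions, the implied exchange rates span several orders of magnitude across respondents, which is one reason it is unclear what, if anything, such tradeoff statements establish about the relative weight of suffering.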

AI takeoff scenarios

  • Priority: 6/10
  • Output format: Wikipedia contributions and novel research

Futurists debate what AI will look like when it arrives. Some, like Eliezer Yudkowsky and Nick Bostrom, have argued in favor of the possibility of a "hard takeoff" in which a single AI or small team of AI creators can rapidly self-improve to the point of unilaterally taking over the world. Others, like Robin Hanson and J. Storrs Hall, have argued for a "soft takeoff" in which AI is integrated into society as a whole, and rapid self-improvement occurs in a way similar to the exponential economic growth we already see. Another possibility is AI arms races among several powerful countries, in which militaries aim to outcompete each other in a fashion reminiscent of the Cold War.

  • It would help to develop a taxonomy of AI trajectories more fine-grained than the hard-vs.-soft distinction.
  • What can the study of economic growth tell us about AI takeoff dynamics?
  • Even if we think a soft takeoff is most likely, how probable is a hard takeoff? Should we expend resources thinking about those possibilities?
  • Who will control development of the first general AI? US military? Chinese military? Google? Wealthy investors? Private individuals?
  • Will whole-brain emulation or bottom-up AI come first? Will it use neuromorphic algorithms or more abstract ones? Will it use evolutionary algorithms or intelligent design? Will it be neat or scruffy or both? And so on.
  • What fraction of AIs would be maximizers of something and what fraction minimizers? What fraction would be neither? What kinds of goal functions are likely?
  • What forces will determine how the AI is shaped? Democratic vote? Financial incentives? Opinions of wealthy investors? Scientists?
  • Given the above, how can we best influence AI in positive directions? Spreading good values throughout society? Networking with tech leaders? Influencing the US military?
  • Is the work of the Machine Intelligence Research Institute on the right track, or is it too theoretical?
  • How likely is it that elites would figure out AI safety on their own?
  • Would open-source AI development increase or decrease the probability that humans retain control of the AI's behavior? (Arguments for increase: 1. More eyes would be checking the code, searching for problems. 2. AI-control researchers would be able to reason better about their topic if they could see AI source code rather than guessing about what was happening in the secret offices of a company or government agency. Arguments for decrease: 1. Non-experts would be able to run AIs without the same safeguards that might be developed by private AI teams. 2. AIs could be downloaded and run by people with malicious intent. 3. Open-sourcing AI development might hasten its arrival, allowing less time to think about control issues. 4. Open-sourcing would allow more total parties to have powerful AIs, potentially making worldwide cooperation more difficult and increasing the risk of conflict scenarios.) See also "Should AI Be Open?"

Anthropic reasoning and mediocrity

  • Priority: 4/10
  • Output format: Wikipedia contributions and novel research

Anthropic reasoning aims to gain insight about our place in the universe based on the facts that we exist and find ourselves in a particular time and context. As an example, it's sometimes claimed that human civilization is unlikely to last vastly longer than it already has, because if we consider ourselves a random sample from all humans, we would expect to have been born much later in history. This is called the "doomsday argument" and is one controversial application of anthropic reasoning. Many thinkers reject the doomsday argument, though they differ widely on their reasons for rejecting it. Some argue that a narrow reference class of observers can solve the problem. Others suggest giving a higher a priori probability to scenarios with more total observers. Yet others propose eliminating the notion of discrete observers within a reference class altogether. In general, the question of how best to do anthropic reasoning remains unsettled.
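
To make the doomsday argument concrete, here is a minimal Bayesian sketch (in Python) of the update it relies on under the Self-Sampling Assumption. The two hypotheses, the equal priors, and the birth-rank figure of roughly 100 billion humans born so far are illustrative assumptions, not estimates endorsed here.

```python
birth_rank = 100e9  # rough number of humans born before us (assumption)

# Two toy hypotheses about how many humans will ever live.
hypotheses = {
    "doom_soon": 200e9,   # humanity ends after ~200 billion births
    "doom_late": 200e12,  # humanity ends after ~200 trillion births
}
prior = {"doom_soon": 0.5, "doom_late": 0.5}  # equal priors, purely for illustration

# Under the Self-Sampling Assumption, the chance of having our particular
# birth rank, given that N humans ever live, is uniform over those N humans:
# 1/N (provided our rank <= N).
likelihood = {h: (1.0 / n if birth_rank <= n else 0.0) for h, n in hypotheses.items()}

evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
print(posterior)  # "doom_soon" ends up with roughly 99.9% of the posterior
```

The responses mentioned above map onto this sketch: choosing a narrower reference class changes which observers the 1/N is taken over, while giving a higher prior to observer-rich scenarios (the Self-Indication Assumption) roughly cancels the 1/N penalty and removes the doomsday shift.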

One example of anthropic-type thinking is the principle of mediocrity -- a Copernican intuition that we should expect ourselves to be typical observers in the universe. This idea seems at odds with the fact that we appear to be in an extremely influential time in the history of our galaxy. We live during some of the generations that may create and determine the constitution of AIs that colonize our region of the universe. What does anthropics have to say about this? Should we think the far future is much less likely to happen than we naively would have believed?

Anthropic-type ideas like the Fermi paradox, the Great Filter, and the timeline of the evolution of life on Earth can provide further hints about how difficult superintelligence is to create and how it might behave once created.

Other topics

Wild-animal suffering

Future evolution

  • How much will control of the future be determined by Darwinian forces rather than deliberate design?
  • Which factors will most determine power in the future? Economic growth? Scientific progress? Nanotech? Whoever builds the first general AI?
  • What would be the future of compassion under various types of evolution?
  • How likely is a world government / singleton? What forms might it take (dictatorship, world democracy, plutocracy, international federation, etc.), and with what probabilities?
  • What are ways we can shape this evolution in positive directions? Or should we focus our altruistic efforts on making a difference in scenarios (like a singleton) where evolutionary pressures matter less?
  • Even if a singleton forms on Earth, will it be possible to maintain complete goal alignment as agents spread out into distant galaxies between which communication is extremely slow? If not, will this inevitably lead to a multipolar, conflict-prone outcome?

Trajectory changes

  • What are the most path-dependent aspects of how the future develops that we can tug on, and which are relatively inflexible to modification?
  • How much entropy, butterfly effects, and so on should we expect for our actions? How likely are our values to be carried into the far future through goal preservation, and how likely are they to pass away as one more stage of history?
  • Are there better ways to improve values than by direct persuasion and activism? For example:
    • Robin Hanson argues that wealth and geography differences may explain many world value differences.
    • "The Germ Theory of Democracy, Dictatorship, and All Your Most Cherished Beliefs" argues that geographic differences in disease may explain many value differences.
    • Some believe that technologies like in vitro meat might reduce speciesism more dramatically than veg outreach because if in vitro meat existed, carnivores would need less cognitive justification for eating meat.
    • Whether brain emulations come before or after from-scratch AI, and what the dynamics of these takeoffs are, could matter enormously for the future's values.

    The following questions are from Nick Beckstead's slides, "How to Compare Broad and Targeted Attempts to Shape the Far Future" (pp. 35-36):

    • Is there a common set of broad factors which, if we push on them, systematically lead to better futures? [...]
    • Does the future depend on how humanity handles a small number of challenges? Can we tell what they are right now? Can we tell what to do about them? Could further research illuminate these questions?
    • In history, how often did big wins (and failures) come from people addressing challenges that humanity would face in the distant future? Do the big wins and failures have common features? How often did people try this?
    • In history, how often did big wins (and failures) come from people improving humanity's ability to address future challenges in general? Do the big wins and failures have common features? How often did people try this?

International cooperation

  • What global institutions, practices, and cultural outlooks would best encourage cooperative AI development, allowing for compromise and safety measures against suffering?
  • What factors encourage companies and countries to engage in risk-taking behavior and arms races? How can we best avert these tendencies? Historical examples of escalation vs. détente during the Cold War would be relevant.
  • Examine the historical literature on international cooperation to constrain dangerous technologies (chemical/biological/nuclear-weapon prohibitions and disarmament, pollution/CO2 emissions, etc.). What does this suggest about prospects for cooperation to circumspectly develop nanotech, robotics, and strong AI?
  • How can we work toward global governance, including a shared world military / police? In the short term, how could we improve collective-security institutions like the UN Security Council? How can we strengthen international law more generally?
  • Combating the military-industrial complex seems like one of the few very clear policy stances in the realm of international relations. Are there leveraged approaches for doing this? Intuitively we would guess that returns from work in this area would be small because the defense lobby is so strong, given that hundreds of billions of dollars per year are at stake for defense companies.
  • How can the laws of privacy change to accommodate the kind of surveillance necessary to control rogue AI/nano developers? AI/nano weapons are much easier to build in secret than nuclear weapons. What kinds of rules/norms/principles could be established for international inspections? How can we avoid authoritarian, abusive, or prurient surveillance in the process?

Suffering in physics

For an introduction to the topic, see "Is There Suffering in Fundamental Physics?".

  • To what extent should we see consciousness-like operations in physics? How much do we care about them?
  • Are there positive-sum ways to reduce suffering in physics without impinging too much on other values?
  • How sensitive are our consciousness assessments to fundamental questions in metaphysics, string theory, etc.? For instance, if our ontology changes, do our consciousness assessments also change radically?
  • If, as it seems, fundamental physics consists of very abstract mathematical structures, can we assess their sentience by analogy with more familiar macroscopic brain processing? Or do we need other ways of directly attributing consciousness to symmetry groups, high-dimensional manifolds, Hilbert space, etc.? This partly depends on our ontological perspective on such mathematical structures -- whether they are the universe or just abstractly describe the universe. If our mathematical tools merely describe an independently existing universe, perhaps there are other mathematical tools that could make the same predictions, and it would be undesirable for our sentience valuations to depend on which mathematical description we use.

Extraterrestrial life

  • What's the probability that, if humans don't colonize their future light cone, some other civilization would do so before the stars die out? (A toy, Drake-equation-style decomposition of this kind of estimate appears after this list.)
  • How likely is it that extraterrestrial entities are sentient? One way to begin exploring this is to ask whether consciousness evolved more than once on Earth.
  • How convergent is human-style compassion for the suffering of others, including other species? Is this an incidental spandrel of human evolution, due to mirror neurons and long infant-development durations requiring lots of parental nurturing? Or will most high-functioning, reciprocally trading civilizations show a similar trend?
  • Are humans more or less empathetic/peaceful than the average extraterrestrial civilization? On the one hand, many animals below the level of mammals and birds seem to have little pure altruism toward others, suggesting that humans may be above average in empathy. On the other hand, because human males can father vast numbers of children, there are incentives for aggression and risk-taking in order to dominate other males; society would be more peaceful if males bore a higher cost of having children, which would make them more risk-averse. And there are more radically different possible reproduction paradigms. Consider an ant colony, where most individuals never reproduce and are quite cooperative with one another. Of course, ants can still be very aggressive against outsiders.
  • Even human society could be a lot more violent than it is, but how much is our relative peacefulness an intrinsic property of having a high-functioning economically successful civilization and how much is due to our path-dependent history? Note that there are non-democracies where technological sophistication is still substantial.
  • How likely are we to encounter ETs of various types before we go extinct or become permanently closed off from them due to the expansion of the universe?
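
One way to begin attacking the first question in this list is a Drake-equation-style decomposition. The sketch below (in Python) multiplies out factors to estimate how many independently arising, colonization-capable civilizations one might expect; every parameter value is a hypothetical placeholder, not an estimate from the text above.

```python
import math

# Hypothetical placeholder parameters; each value is an assumption, not an estimate.
n_stars = 2e11          # stars in the Milky Way (order of magnitude)
f_habitable = 0.1       # fraction with a potentially habitable planet
f_life = 0.1            # fraction of those where life arises
f_intelligence = 0.01   # fraction of those that evolve intelligence
f_colonizing = 0.1      # fraction of intelligent species that would colonize

expected_colonizers = (n_stars * f_habitable * f_life
                       * f_intelligence * f_colonizing)
print(f"Expected colonization-capable civilizations: {expected_colonizers:.1e}")

# Treating origins as independent, the chance that at least one such
# civilization arises is approximately 1 - exp(-expected number).
p_at_least_one = 1 - math.exp(-expected_colonizers)
print(f"P(at least one, under these assumptions): {p_at_least_one:.3f}")
```

The point is not the particular numbers but that the answer swings by many orders of magnitude depending on the least-constrained factors, which is where questions about the Great Filter, the evolution of intelligence, and the convergence of colonization behavior enter.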

Epistemology

  • Are there other major ways of looking at reality that would imply radically different conclusions for action? How likely are these other views?
  • How do we deal with model uncertainty over our most basic assumptions?

Moral psychology

  • What properties of minds (emotion, agency, intelligence, etc.) lead to moral concern for those minds? "The Phenomenal Stance Revisited" is a great illustration of research like this, and it links to several related studies.
  • We see extensive moral disagreements today. What are the prospects for these to decrease (or increase?) as education and multiculturalism become more widespread?
  • What are the psychological, neurological, and developmental differences between positive-leaning vs. negative-leaning utilitarians? Among utilitarians vs. deontologists vs. virtue ethicists? Between atheists and religious adherents? How much of religious morality is driven by factual belief vs. emotional intuitions? Work like that by Joshua Greene would be relevant here. Like Socrates, we could help people better figure out what they themselves believe.
  • What types of preference idealization do different people find acceptable? What are the prospects for future people agreeing to something like coherent extrapolated volition (CEV), and how many plausible variations of CEV are there?
  • What factors contribute to nationalism and ethnic prejudice? These seem to have been some of the biggest sources of violence in the past and may be some of the biggest obstacles to global governance in the future.

Strategy

  • What are the implications of findings in the above areas for what altruists should work on?
  • Which of these questions can we "hand off" to future generations, and which do we need to investigate now to decide how to proceed?
  • Which altruistic actions are less sensitive to the details of how these findings come out, i.e., more robust across a broad range of possible futures?

Choose your own topic

  • There are plenty of important areas not listed and some that we probably haven't even thought to explore.