Risks of Astronomical Future Suffering
Space colonization would likely increase rather than decrease total suffering. Because many people nonetheless care about humanity’s spread into the cosmos, we should reduce the risks of astronomical future suffering without opposing others’ spacefaring dreams. In general, we recommend focusing on making sure that an intergalactic future will be good if it happens, rather than on making sure that such a future comes about.
Suffering-Focused AI Safety: Why “Fail-Safe” Measures Might Be Our Top Intervention
AI outcomes where something goes wrong may differ enormously in how much suffering they contain. An approach that tries to avert the worst of these outcomes seems especially promising because it is currently more neglected than classical AI safety efforts, which aim for a highly specific, “best-case” outcome.
Gains from Trade through Compromise
When agents with differing values compete, they may often find it mutually advantageous to compromise rather than continue engaging in zero-sum conflicts. Potential ways of encouraging cooperation include promoting democracy, tolerance, and (moral) trade. Because a future without compromise could be many times worse than one with it, advancing compromise seems an important undertaking.
Do Artificial Reinforcement-Learning Agents Matter Morally?
Artificial reinforcement learning (RL), a widely used training method in computer science, has striking parallels to reward and punishment learning in biological brains. Plausible theories of consciousness imply a non-zero probability that RL agents qualify as sentient and deserve our moral consideration, a concern that will grow as AI research advances and RL agents become more sophisticated.
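To make the reward-and-punishment parallel concrete, here is a minimal sketch (not from the essay) of tabular Q-learning, one standard RL method: the agent adjusts its action values purely in response to scalar rewards and punishments, loosely analogous to appetitive and aversive learning in animals. The corridor environment, reward values, and all parameter choices below are illustrative assumptions, not anything the essay specifies.

```python
import random

def train(episodes=1000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning in a 1-D corridor of 5 states (0..4).

    The agent starts in the middle. Reaching state 4 yields a reward
    of +1; stepping off the left edge yields a punishment of -1.
    """
    rng = random.Random(seed)
    n_states, actions = 5, (-1, +1)  # move left or move right
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}

    for _ in range(episodes):
        s = 2  # start in the middle of the corridor
        while True:
            # Epsilon-greedy: mostly exploit current values, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            nxt = s + a
            if nxt < 0:                # fell off the left edge: punishment
                reward, done = -1.0, True
            elif nxt == n_states - 1:  # reached the goal: reward
                reward, done = +1.0, True
            else:
                reward, done = 0.0, False
            # Standard Q-learning update driven entirely by reward/punishment.
            target = reward if done else reward + gamma * max(
                q[(nxt, b)] for b in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            if done:
                break
            s = nxt
    return q

q = train()
# The positive reward propagates rightward through the value table,
# so the learned policy tends to move right, away from the punishment.
```

Nothing in this loop presupposes anything about experience, which is precisely why the moral question is open: the same update rule, scaled up, is what drives learning in far more sophisticated agents.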