10 August 2017

A reply to Thomas Metzinger’s BAAN thought experiment

This is a reply to Metzinger’s essay on Benevolent Artificial Anti-natalism (BAAN), which appeared on EDGE.org (7.8.2017). Metzinger invites us to consider a hypothetical scenario where smarter-than-human artificial intelligence (AI) is built with the goal of assisting us with ethical deliberation. Being superior to us in its understanding of how our own minds function, the…

Read more
21 July 2017

Uncertainty smoothes out differences in impact

Suppose you investigated two interventions A and B and came up with estimates for how much impact A and B will have. Your best guess is that A will spare a billion sentient beings from suffering, while B will “only” spare a thousand. Now, should you actually believe that A is many orders of magnitude more effective than B?
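As a toy illustration of the underlying point (the numbers and the simple mixture model below are illustrative assumptions, not the post’s actual argument): if each point estimate has some chance of being badly mistaken, the ratio of expected impacts can shrink far below the ratio of the raw estimates.

```python
# Toy sketch (illustrative assumptions only): expected impact when a point
# estimate has some probability of being roughly right, and otherwise the
# intervention delivers only a modest fallback amount of impact.
def expected_impact(point_estimate, p_correct, fallback=1.0):
    return p_correct * point_estimate + (1 - p_correct) * fallback

a = expected_impact(1e9, p_correct=0.01)  # intervention A: huge but shaky estimate
b = expected_impact(1e3, p_correct=0.5)   # intervention B: modest but more robust

print(a / b)  # roughly 2e4, far below the naive 1e6 ratio of the point estimates
```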

Read more
17 July 2017

Arguments for and against moral advocacy

This post analyses key strategic questions on moral advocacy, such as: What does moral advocacy look like in practice? Which values should we spread, and how? How effective is moral advocacy compared to other interventions such as directly influencing new technologies? What are the most important arguments for and against focusing on moral advocacy?

Read more
30 June 2017

Strategic implications of AI scenarios

Efforts to mitigate the risks of advanced artificial intelligence may be a top priority for effective altruists. If this is true, what are the best means to shape AI? Should we write math-heavy papers on open technical questions, or opt for broader, non-technical interventions like values spreading?

Read more
26 June 2017

Tool use and intelligence: A conversation

This post is a discussion between Lukas Gloor and Tobias Baumann on the meaning of tool use and intelligence, which is relevant to our thinking about the future of (artificial) intelligence and the likelihood of AI scenarios.

Read more
20 June 2017

Training neural networks to detect suffering

Imagine a data set of images labeled “suffering” or “no suffering”. For instance, suppose the “suffering” category contains images documenting war atrocities or factory farms, and the “no suffering” category contains innocuous images – say, a library. We could then train a neural network or another machine learning algorithm to detect suffering based on that data.
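For illustration, here is a minimal sketch of the kind of binary image classifier described above, using PyTorch. The directory layout, architecture, and hyperparameters are hypothetical placeholders, not the post’s actual setup.

```python
# Minimal sketch: a binary "suffering" vs. "no suffering" image classifier.
# Data layout, model choice, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Assumes images are stored as data/suffering/*.jpg and data/no_suffering/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Fine-tune a small pretrained network for the two-class problem.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs, purely for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```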

Read more
19 June 2017

Launching the FRI blog

We were moved by the many good reasons to make conversations public. At the same time, we felt that the content we wanted to publish differed from the articles on our main site. Hence, we’re happy to announce the launch of FRI’s new blog.

Read more
