Reasons to Be Nice to Other Value Systems

First written: 16 Jan. 2014; last update: 17 Oct. 2017

I suggest several arguments in support of the heuristic that we should help groups holding value systems different from our own when doing so is cheap, unless those groups prove uncooperative toward our values. This holds even if we don't directly care at all about other groups' value systems. Exactly how nice to be depends on the particulars of the situation, but in some cases helping others' moral views is clearly beneficial for us.

Introduction

A basic premise of economic policy, business strategy, and effective altruism is to choose the option with the highest value per dollar. Ordinarily this simple rule suffices because we're engaged in one-player games against the environment. For instance, if Program #1 to distribute bed nets saves twice as many lives per dollar as Program #2, we choose Program #1. If Website B has 25% longer dwell time than Website A, we choose Website B. These are essentially engineering problems where one option is better for us, and no other agent feels differently.

However, this mindset can run into trouble in social situations involving more than one player. I'll illustrate with a toy example that avoids naming specific groups, but the general structure transfers to many real-world cases.

Example: Altruism tabling

Suppose there's an Effective Altruism Fair at your local university, and altruists of various ideological stripes will be hosting the event and presenting their individual work. You really care about promoting Emacs, the one true text editor. However, the Fair will also host a booth for the advocates of the Vi editor, which you consider not just inferior but actively harmful to the world.

The Fair requires some general organizing help -- to publicize, set up tables, and provide refreshments. Beyond that, it's up to the individual groups to showcase their own work to the visitors. Your Emacs club is deciding: How much effort should we put into helping out with general organizing, and how much should we devote to making our individual booth really awesome? You might evaluate this on the metric of how many email signups you'd get per hour of preparation work. And while you appreciate some things the Vi crowd does, you think they cause net harm on balance, so you subtract from your utility 1/2 times the number of email signups per hour that your effort allows them to get.

If you help out with the general logistics of the Fair, you'll bring in a lot of new visitors, but only some fraction of them will be interested in Emacs. Say that every hour you put in provides 10 new Emacs signups, as well as 10 new Vi signups (plus maybe signups to other groups that are irrelevant to you). The overall value of this to you is only 10 - (1/2)*10 = 5. In contrast, if you optimize your own booth, you can attract an extra 15 Emacs signups, with no extra Vi signups in the process. Since 15 > 5, cost-effectiveness analysis says you should optimize only your booth. After all, this is the more efficient allocation of resources, right?

Suppose the Vi team faces the same cost-benefit tradeoffs. Then depending on which decisions each team makes, the following are the possible numbers of signups that each side will get, written in the format (# of Emacs signups), (# of Vi signups).

Total numbers of email signups

                              Vi help on logistics         Vi focus on own booth
Emacs help on logistics       10+10 = 20,  10+10 = 20      10+0 = 10,  10+15 = 25
Emacs focus on own booth      15+10 = 25,  0+10 = 10       15+0 = 15,  0+15 = 15

Now remember that Emacs supporters consider Vi harmful, so that Emacs utility = (number of Emacs signups) - (1/2)*(number of Vi signups). Suppose the Vi side feels exactly the same way in reverse. Then the actual utility values for each side, computed based on the above table, will be

Utility values

                              Vi help on logistics      Vi focus on own booth
Emacs help on logistics       10,    10                 -2.5,    20
Emacs focus on own booth      20,    -2.5               7.5,     7.5

Just as we saw in the naive cost-effectiveness calculation, there's an advantage of 20 - 10 = 7.5 - (-2.5) = 10 to focusing on your own booth, regardless of what the other team does.
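
For readers who want to check the arithmetic, here is a minimal Python sketch (the function and variable names are my own, not part of the original example) that recomputes both tables from the assumptions above and confirms that focusing on one's own booth is the dominant choice:

ACTIONS = ["help_logistics", "own_booth"]

def signups(my_action, their_action):
    """Signups my side gets: helping with logistics adds 10 for me and 10
    for the other side; polishing my own booth adds 15 for me only."""
    mine = 10 if my_action == "help_logistics" else 15
    if their_action == "help_logistics":
        mine += 10
    return mine

def utility(my_signups, their_signups):
    """Each side counts the other's signups as half a signup of harm."""
    return my_signups - 0.5 * their_signups

for mine in ACTIONS:
    for theirs in ACTIONS:
        s_me, s_them = signups(mine, theirs), signups(theirs, mine)
        print(mine, theirs,
              "signups:", (s_me, s_them),
              "utilities:", (utility(s_me, s_them), utility(s_them, s_me)))

# Whichever column the other side picks, "own_booth" beats "help_logistics"
# by 10 utility (20 vs. 10, and 7.5 vs. -2.5), matching the tables above.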

The game that this table represents is a prisoner's dilemma (PD) -- arguably the most famous game in game theory. The dominant strategy in a one-shot PD is to defect, and this is what our naive calculation was capturing. In fact, both of the above tables are PDs, so the PD structure would have applied even absent enmity between the text-editor camps. PDs show up in many real-world situations.

There's debate on whether defection in a one-shot PD is rational, but what is clear is that most of the world does not consist of one-shot PDs. For instance, what if the EA Fair is held again next year? How will the Vi team react then if you defect this year?

In addition, it may be in all of our interests to structure society in ways that prevent games from turning into one-shot PDs, because the outcome is worse for both sides than cooperation would have been, if only it could have been arranged.

Reasons to be nice

In the remainder of this piece I outline several weak arguments why we should generally try to help other value systems, even when we don't agree with them. Here's my general heuristic:

If you have an opportunity to significantly help other value systems at small cost to yourself, you should do so.

Likewise, if you have an opportunity to avoid causing significant harm to other value systems by forgoing a small benefit to yourself, you should do so. This is more true the more powerful the value system you're helping is. That said, if groups championing the other value system are defecting against you, then stop helping it.

Iterated prisoner's dilemmas

Most of life has multiple rounds. Other groups of people generally don't go away after we've stepped on their toes, and if we defect now, they can defect on us in future interactions. There's extensive literature on the iterated prisoner's dilemma (IPD), but the general finding is that it tends to yield cooperation, especially over long time horizons without a definite end point. The Evolution of Cooperation is an important book on this subject.
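
As a rough illustration, the sketch below reuses the utility numbers from the Emacs/Vi table as per-round payoffs and plays two standard IPD strategies, tit-for-tat and always-defect, against each other over 100 rounds (the round count and the strategy implementations are illustrative assumptions, not anything from the literature cited above):

# Per-round payoffs, taken from the Emacs/Vi utility table:
# (my move, their move) -> my utility. "C" = help with logistics, "D" = own booth.
PAYOFF = {
    ("C", "C"): 10.0,
    ("C", "D"): -2.5,
    ("D", "C"): 20.0,
    ("D", "D"): 7.5,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total utility for each player over the given number of rounds."""
    seen_by_a, seen_by_b = [], []   # each list holds the *opponent's* past moves
    total_a = total_b = 0.0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        total_a += PAYOFF[(move_a, move_b)]
        total_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))      # (1000.0, 1000.0): sustained cooperation
print(play(tit_for_tat, always_defect))    # (740.0, 762.5): one exploited round, then mutual defection
print(play(always_defect, always_defect))  # (750.0, 750.0)

In this toy setup, exploiting a reciprocator (762.5 over 100 rounds) does much worse than simply cooperating with it (1000.0), which is the basic reason long IPDs without a known endpoint tend to favor cooperation.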

Evolved emotions

One can debate whether a given situation adequately fits the structure of a pure IPD. The translation from real-world situations to theoretical games is always messy. Regardless, the fact remains that, empirically, humans feel reciprocal gratitude and indebtedness toward those who have helped them.

What's more, these feelings often persist even when reciprocating brings no obvious further benefit. Emotions are humans' way of making credible commitments, and the fact that humans feel loyalty and duty means that they can generally be trusted to reciprocate.

Of course, if you interact with people who are conniving and tend to backstab, then don't help them. Being nice does not mean being a sucker, and indeed, continuing to assist those who just take for themselves only encourages predation. (Of course, evolution has produced emotional exceptions to this, as in the case of altruism toward children and family members who share your DNA, even if they never reciprocate.)

Reputation

Reciprocal altruism typically occurs between individuals or groups, but there are also broader ways in which society transmits information about how generous someone is toward other values. When others discuss your work, wouldn't you rather have them say that you're a fair-minded and charitable person who helps many different value systems, even those you don't agree with?

Common sense

The heuristic of helping others when it's cheap to do so strikes most people as common sense. These values are taught in kindergarten and children's books.

Norms and universal rules

Mahatma Gandhi said: "If we could change ourselves, the tendencies in the world would also change." We can see this idea expressed in other forms, such as the categorical imperative or the dictums of a rule utilitarian. Society would be better -- even according to your own particular values -- if everyone followed the rule of helping other value systems when doing so had low cost.

When we follow and believe in these principles, it rubs off on others. Collectively it helps reinforce a norm of cooperation with those who feel differently from ourselves. Norms have significant social power, both for individuals and for nations. For instance:

Internationally, a cooperative security norm, if close to universality, can become the defining standard for how a good international citizen should behave. It is striking how in the 1980s and 1990s scores of formerly reluctant states were flocking to [Nonproliferation Treaty] NPT membership, notably after change in the national system of rule and particularly in the course of democratization processes: turning unequivocally nonnuclear or confirming nonnuclear status became the "right thing to do" (Rublee 2009; Müller and Schmidt 2010). [p. 4]

When we defect in any particular situation, we weaken cooperative norms for everyone for many future situations to come.

Encouraging global cooperation

Norms of mutual assistance and tolerance among different groups are important not just for our own projects but also for international peace on a larger scale. To be sure, the contribution of our individual actions to this goal is minuscule, but the stakes are also high. A globally cooperative future could contain significantly less suffering and more of what other people value in expectation.

Utilitarianism

Utilitarians care about the well-being or preference satisfaction of others. Thus, if many people feel that something is wrong, even if you don't, there's a utilitarian cost to it. This argument is stronger for preference utilitarians who value people's preferences about the external world even when they aren't consciously aware of violations of those preferences. Of course, this alone is probably not enough to encourage nice behavior, because present-day humans are vastly outweighed in direct value by non-human animals and future generations.

Moral uncertainty

If you had grown up with different genes and environmental circumstances, you would have held the moral values that others espouse. In addition, you yourself might actually, not just hypothetically, later come to share those views -- due to new arguments, updated information, future life experiences, accretion of wisdom, or social influence. Or you might have come to hold those views if only you had heard arguments or learned things that you will never actually discover. What others believe provides some evidence about what an idealized version of you would believe. If so, you might be mistaken in judging others' moral values to be worthless.

I should clarify that the value of cooperation does not rely on moral uncertainty; the other arguments are strong enough on their own. Moral uncertainty just provides some additional oomph, depending on how strongly it motivates you. (And you may want to apply some meta-level uncertainty on how much you care about moral uncertainty, if you care about meta-level uncertainty.)

Superrationality

This section was written by Caspar Oesterheld.

Some decision theorists have argued that cooperation in a one-shot PD is justified if we face an opponent that uses a decision-making procedure similar to our own. After all, if we cooperate in such a PD, then our opponent is likely to do the same. Hofstadter (1983) calls this idea superrationality.

Some have used superrationality to argue that it is in our self-interest to be nice to other humans (Leslie 1991, sec. 8; Drescher 2006, ch. 7). For example, if I save a stranger from drowning, this makes it more likely that others will make a similar decision when I need help. However, in practice it seems that most people are not sufficiently similar to each other for this reasoning to apply in most situations. In fact, you may already know what other people think about when they decide whether to pull someone out of the water and that this is uncorrelated with your thoughts on superrationality. Thus, it is unclear whether superrationality has strong implications for how one should deal with other humans (Oesterheld 2017, sec. 6.6; Almond 2010a, sec. 4.6; Almond 2010b, sec. 1; Ahmed 2014, ch. 4).

However, even if Earth doesn’t harbor agents that are sufficiently similar to me, the multiverse as a whole probably does. In particular, it may contain a large set of agents who think about decision theory exactly like I do but have different values. Some of these will also care about what happens on Earth. If this is true and I also care about these other parts of the multiverse, then superrationality gives me a reason to be nice to these value systems. If I am nice toward them, then this makes it more likely that similar agents will also take my values into account when they make decisions in their parts of the multiverse (Oesterheld 2017).

Is it ok to cheat in secret?

Many of the reasons listed above, especially the stronger ones, only have consequences when your cooperation or defection is visible: IPDs, evolved emotions, reputation, norms and universal rules, and encouraging global cooperation. Assuming the remaining reasons are weak enough, doesn't this license us to trash other value systems in our private decisions, so long as no one will find out?

No. There's too much risk of this backfiring on you. One slip-up could damage your reputation, and your deception might show through in ways you don't realize. I think it's best to actually be someone who wants to help other value systems, regardless of whether others find out. This may sound suboptimal, and maybe there is a little bit of faith to it, but consider that almost everyone in the world recognizes this idea at least to some extent, such as in the law of karma or the Golden Rule. If it were an "irrational" policy for social success, why would it be so widespread? Eliezer Yudkowsky: "Be careful of [...] any time you find yourself defining the 'winner' as someone other than the agent who is currently smiling from on top of a giant heap of utility."

Not hiding your defection against others is a special case of the general argument for honesty. This isn't to say you always have to be cooperative, but if you're not, don't go out of your way to hide it.

I regard not trashing other value systems as a weak ethical injunction for guiding my decisions. I recommend reading Eliezer Yudkowsky's sequence for greater elaboration of why ethical injunctions can win better than naive act-utilitarianism. The injunction not to step on others' toes is not as strong as the injunction against lying, stealing, and so on; indeed, it's impossible not to step on some people's toes. But in cases where it's relatively easy to avoid causing major harm to what a significant number of others care about, you should try to avoid causing that harm. Of course, if others are substantially and unremorsefully stepping on your toes, then this advice no longer applies, and you should stop being nice to them until they start being cooperative again.

Risks to being nice

Being nice is not guaranteed to yield the best outcomes. There are reasons we evolved selfish motives as well as altruistic ones, and the "nice guys finish last" slogan is sometimes accurate. The other side might cheat you and get away with it. Maybe the IPD structure isn't sufficient to guarantee cooperation. Maybe it's a tragedy of the commons (multi-player prisoner's dilemma) where it's much harder to change defection to cooperation, and your efforts fail to make their intended impact.

It's important to assess these risks and be conscious of when your efforts at cooperation fail. But remember: Being nice means defecting on the other side if it defects on you. Niceness doesn't mean being exploited permanently. It's better to try a gesture of cooperation first rather than assume it won't work; predicting defection may become a self-fulfilling prophecy. In addition, I think niceness is increasingly rewarded in our more interconnected and transparent world, facilitated by governments and media. Our ancestral selfish tendencies probably overfire relative to the strategic optimum.

However, there are many real-world cases where niceness fails. One striking demonstration of this was the attempts by US president Barack Obama to compromise with opposing Republicans, which repeatedly resulted in Obama and the Democrats making concessions for nothing in return. This is not how to play an iterated prisoner's dilemma. If niceness repeatedly fails to achieve cooperation, then one has to go on the offensive instead.

If you hold a popular position, then I think it's often successful to firmly stand your ground rather than making concessions in response to squeaky-wheel opponents. Cenk Uygur: "Do you know what works in politics? Strength."

Cooperation can also entail overhead costs in terms of negotiating and verifying commitments, as well as assessing whether an apparent concession is actually a concession or just something the other side was already going to do. For small interactions, these overhead costs may outweigh the benefits of trade. Verifying cooperation is often easy if a business partner does a favor for you, because you can see what the favor is, and it's unlikely the partner would have done the favor without expecting anything in return. Verifying cooperation is often harder for big organizations or governments, because (1) the impacts of a change in policy can be diffuse and costly to measure and (2) it's difficult to know how much the change in policy is due to cooperation versus how much it's something the organization was going to do anyway.

How nice should you be?

I hope it's clear that at least in some cases, being nice pays. The harder question is how nice to be, i.e., above what threshold of cost to yourself do you stop providing benefits to others?

If bargains could be transacted in an airtight fashion, and if utility were completely transferable, then the answer would be simple: maximize the total social "pie." If you can provide someone a benefit B that's bigger than its cost C to yourself, the other person could pay you back the amount C, and the surplus B - C could then be divided between the two of you, making you both better off. Alas, most situations in life aren't airtight, so, intuitively, in many cases it would not be in your interest to purely maximize pie. There might be noise or cheating between your incurring the cost and someone else paying back a higher benefit. Not everything you do is fully recognized and rewarded by others, especially when they assume that you're helping them because you intrinsically value their cause, rather than just to be nice despite not caring about it or even slightly disvaluing it.
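
To make the pie-splitting arithmetic concrete, here is a tiny sketch with made-up numbers (the values of B and C and the even split are arbitrary illustrative assumptions):

B = 10.0    # benefit you can provide to the other side
C = 4.0     # cost to yourself of providing it

transfer = C + (B - C) / 2   # they repay your cost plus half the surplus: 7.0
your_net = transfer - C      # 3.0: you end up better off than without the deal
their_net = B - transfer     # 3.0: and so do they
print(your_net, their_net)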

How nice to be depends on the details of the social situation, expectations, norms, and enforcement mechanisms involved. There's some balance to strike between purely pushing your own agenda without regard to what anyone else cares about versus purely helping all value systems without any preference for your personal concerns. One could construct various game-theoretic models, but the world is complicated, and interactions are not just, say, a series of two-player IPDs. It could also help to look at real examples in society for where to strike this balance.

Applications to space colonization

Being nice suggests that people whose primary concern is reducing suffering should accept others' ambitions to colonize space, so long as colonizers work harder to reduce the suffering that space colonization entails. On the flip side, being nice also means that those who do want to colonize space should focus more on making space colonization better (more humane and better governed to stay in line with our values) rather than making it more likely to happen.