Saturday, March 18, 2017

Utilitarianism: We are probably doing the math wrong.

Summary: Almost all utilitarian reasoning about specific interventions and thought experiments is wrong, because it fails to account for the fact that taking a thing away from people causes a utility loss significantly greater than the utility gain they would get from acquiring that thing. For any significant permanent change in circumstances, the utility loss from making people worse off is four to six times the utility gain from an equivalent improvement. Under a pure utilitarian calculus, an intervention is therefore only justified if the gains are several times the losses.

Epistemic status [edited]: Uncertain. I may be exaggerating, or overgeneralizing a temporary effect that, in some or most situations, should only be counted as a second-order term in the calculation. More research into how permanent these feelings are over time is required. It would also be very valuable to run a trolley-problem survey with seven instead of five people on the track and see if that changes things. Thanks to everyone who discussed this.
Unexplored. Although this seems obvious now, I did not realize it last week. I have been studying related philosophy issues for over a decade and have never been exposed to any discussion of this point, either supporting or dismissing it. I have specifically looked for evidence that anyone else has made this point, and failed to find any mention of it. However, I am very suspicious of any assumption that I am the first person to realize an important thing. There is a valid outside-view reason that I might be in a position to do so (I have far more knowledge of and experience with cost-benefit analysis and preference valuation than most people who consider these questions, and I just attended a conference of the Society for Benefit Cost Analysis where these issues were explored in presentations) but I should still be skeptical of my reasoning. Feedback is appreciated.

Utilitarian Calculus


Consider the following moral questions:
1) Should you shove a fat man in front of a trolley to prevent the trolley from running over five people who are otherwise doomed?
2) Should you support a public policy that makes health insurance twice as expensive for 10% of the population, while giving equivalent free insurance to a different 20% of the population?
3) If the current social system makes 10% of the population happy (utility 20% above baseline) while oppressing 30% of the population (utility 20% below baseline), should you overthrow the system and institute an egalitarian one?

There are many ways to approach these moral questions, but a utilitarian will almost always answer yes to all three, on the assumption that the intervention will increase aggregate utility.

However, this 'utilitarian' answer ignores the robust experimental evidence on the large and persistent differences between willingness to accept (the amount people have to be compensated to accept a loss) and willingness to pay (the amount people would pay for a gain):


People value gains significantly less than they value losses, i.e. the utility increase from obtaining a thing is much less than the utility decrease from losing the same thing. For money, time and private goods (things that are easily traded and substituted or that people have in abundance), people 'only' value losses about 40-60% more than they value gains. But for irreversible, non-tradeable changes in their circumstances, of the kinds involved in most thought experiments and public policy questions, people value losses four to six times more than they value gains. This difference between willingness to pay and willingness to accept is not primarily driven by the declining marginal utility of wealth. It is observed for changes over scales where the relationship between money and utility is approximately linear, and also observed for direct tradeoffs that do not involve money.

Therefore, all three of the interventions above will reduce aggregate utility. The utility loss experienced by the losers will be greater than the utility gain experienced by the winners. A utilitarian should not support them based only on the evidence presented. Other moral reasons must be invoked to justify the policy, or it should be shown that there are relevant side effects that change the utilitarian calculus.
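To make the arithmetic concrete, here is a minimal sketch of the adjusted calculus applied to question 3 (Python; the population shares and utility figures come from the question itself, the 4x and 6x multipliers come from the WTA/WTP range above, and the simple linear aggregation and the function name aggregate_change are my own assumptions):

def aggregate_change(winner_share, gain, loser_share, loss, loss_multiplier=1.0):
    """Per-capita change in aggregate utility: winners' gains minus (weighted) losers' losses."""
    return winner_share * gain - loser_share * loss * loss_multiplier

# Question 3: overthrowing the system gives 30% of the population a gain of 0.2
# (from -0.2 back to baseline) and imposes a loss of 0.2 on 10% (from +0.2 to baseline).
naive         = aggregate_change(0.30, 0.2, 0.10, 0.2)       # +0.04 -> intervene
adjusted_low  = aggregate_change(0.30, 0.2, 0.10, 0.2, 4.0)  # -0.02 -> do not intervene
adjusted_high = aggregate_change(0.30, 0.2, 0.10, 0.2, 6.0)  # -0.06 -> do not intervene

print(naive, adjusted_low, adjusted_high)

The naive calculation says the overthrow is a clear improvement; the adjusted calculation says it reduces aggregate utility across the whole plausible range of loss multipliers.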

Policy Implications


1) Utilitarians should not support additional income redistribution unless the marginal utility of wealth for the people being taxed is less than 1/4th to 1/6th the marginal utility of wealth for the people receiving the benefits.
2) Utilitarians should not support coercive taxation to produce public goods unless the value of the public good is at least four to six times its production cost.
3) Utilitarians should not support coercive health and safety regulations unless the monetized benefits are at least four to six times the costs (a rough illustration follows this list).
4) With the caveat that changing utility functions is dangerous and questionable, teaching people to value losses and gains more equally may cause a large increase in utility.
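To illustrate #3 with made-up numbers (the dollar figures here are hypothetical, not drawn from any real policy): a safety regulation that imposes $100 million in compliance costs on the people who bear them is justified under the usual gain-equals-loss calculus by any monetized benefit above $100 million, but under the adjusted calculus a utilitarian should only support it if it delivers roughly $400-600 million in benefits.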

Rationality


Many people might object that it is irrational to value losses so much more than gains. This is correct, at least for relatively wealthy people in the modern world. (For people operating closer to subsistence, a loss can kill you while an equivalent gain benefits you much less, so it is rational to be risk-averse.) Being more risk-neutral will encourage you to take chances and make tradeoffs that can dramatically improve your life. Gains and losses that do not cause significant changes in your overall wealth should be valued equally.

Most philosophical discussion happens in an abstract, rational setting; utilitarians tend to be people with a more abstract and rational thinking style; and the literature on the WTA/WTP ratio did not exist 30 years ago and is still new enough that most people have not had time to internalize its findings. It is therefore understandable that previous utilitarian discussion carried the unquestioned default assumption that a gain and a loss of the same size are to be valued equally, the way a rational agent would value them.

However, utilitarianism is about maximizing the utility experienced by actual sentient entities in the real world. Maximizing the utility of imaginary rational, risk-neutral actors has no connection to reality. Imposing our will on others to maximize an imaginary utility function that we think they should have is insane tyranny.

Fairness


The utilitarian position, properly understood, is extremely conservative and dramatically favors the status quo, even if the status quo is horribly unfair and a violation of rights. However, if you value rights and fairness for any reason other than their instrumental ability to improve aggregate utility, you are not a utilitarian.

Future generations


When calculating utility for people who have not yet been assigned an endowment, i.e. those behind the veil of ignorance, the traditional utilitarian calculus still applies, because there is no status quo and therefore no gains or losses. Any policy that makes total utility greater and also more equally distributed, such as #3 above, is unambiguously good. The short-term utility loss from implementing the policy may be outweighed by the utility gains for future generations. Determining this for certain, however, requires making decisions about discounting future utility and about the moral status of people who do not yet exist, both of which are beyond the scope of this post.
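As a rough illustration (using the figures from question 3 and ignoring discounting): a person born under the old system has an expected utility of 0.1 x 0.2 + 0.3 x (-0.2) = -0.04 relative to baseline, while a person born under the egalitarian system is at baseline, so each future generation is better off by about 0.04 per capita, with no loss aversion to apply because no one born later loses anything relative to a status quo they ever held. The one-time adjusted loss to the current generation (roughly 0.02 to 0.06 per capita, from the sketch above) could therefore be recouped within a generation or two.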

Final Thoughts


For the past several decades, many government agencies have been using an improper gain-equals-loss utilitarian calculus to make public policy decisions. Some of the current political upheaval can be traced to the failure of this approach, specifically its failure to adequately measure the utility loss of taking things away from people or imposing burdens on them.

If you are a utilitarian, if you find these policy conclusions repugnant, and if you cannot find any problem with my math or my understanding of the relevant literature, then please take a moment to build empathy for people who have always found utilitarian conclusions to be repugnant. Then I recommend examining Parfit's synthesis of rule consequentialism, contractualism, and steelmanned Kantian deontology.

2 comments:

Robi Rahman said...

Where can I find a steelmanned version of Kantian deontology?

AK said...

1. "Utility" is a myth.

2. "Utility" is a figment of the imagination(s) of those who use it to rationalize their real objective, which is almost always their hatred of somebody else's freedom of decision-making.

3. The distinction between "gains" and "losses" is interesting: consider what happens when somebody switches (in their own mind) an anticipated gain from possible to expected. (Like counting their chickens before they hatch.)

When/if that anticipated gain doesn't pan out, they will feel a sense of grievance, which could stimulate criminal or other "anti-social" behavior.