Tuesday, October 3, 2017

Color Sorting

Color was the topic of our last LessWrong meeting. We talked about color theory and physics, eyesight, aesthetics, different perceptions, etc.

To illustrate how people think differently about color, I brought six identical bags of colored Legos for people to sort into groups:



I randomized the number of groups by rolling an eight-sided die and adding four. The roll was 6, for 10 groups. When asked what the purpose of the sorting was, I replied "Teaching a young child about colors."

As I expected, there was a lot of variation in how people sorted the colors:





 


After we were done, we compared and discussed our sorting. This lasted a while.

Then, a family of Chinese tourists came up to us and asked what we were doing. (We meet in a public place.) We explained the activity, and invited them to sort the colors into groups however they wanted. A middle-aged woman and an older woman started sorting.

The middle-aged woman's sort was roughly similar to ours:


But the older woman, who did not speak English, made radically different choices. She basically sorted by saturation rather than hue. We were especially fascinated when she put bricks that we had thought were identically colored into different groups; on closer inspection, they had slight differences in color due to age:


The exercise was strong evidence that people see and think about color differently even when they are very culturally similar, and that people from different cultures can see things very differently.

Saturday, March 18, 2017

Utilitarianism: We are probably doing the math wrong.

Summary: Almost all utilitarian reasoning about specific interventions and thought experiments is wrong, because it fails to account for the fact that taking a thing away from people causes a utility loss significantly greater than the utility gain they would get from acquiring that same thing. For any significant permanent change in circumstances, making people worse off causes four to six times the utility change of an equivalent gain. Under a pure utilitarian calculus, an intervention is therefore only justified if the gains are several times the losses.

Epistemic status [edited]: Uncertain. I may be exaggerating or overgeneralizing a temporary effect that should only be counted as a second-order term in the calculation in some or most situations. More research into the permanence of these feelings over time is required. It would also be very valuable to do a trolley problem survey with seven instead of five people on the track and see if that changes things. Thanks to everyone who discussed this.
Original epistemic status: Unexplored. Although this seems obvious now, I did not realize it last week. I have been studying related philosophy issues for over a decade and have never been exposed to any discussion of this point, either supporting or dismissing it. I have specifically looked for evidence that anyone else has made this point, and failed to find any mention of it. However, I am very suspicious of any assumption that I am the first person to realize an important thing. There is a valid outside-view reason that I might be in a position to do so (I have far more knowledge of and experience with cost-benefit analysis and preference valuation than most people who consider these questions, and I just attended a conference of the Society for Benefit-Cost Analysis where these issues were explored in presentations), but I should still be skeptical of my reasoning. Feedback is appreciated.

Utilitarian Calculus


Consider the following moral questions:
1) Should you shove a fat man in front of a trolley to prevent the trolley from running over five people who are otherwise doomed?
2) Should you support a public policy that makes health insurance twice as expensive for 10% of the population, while giving equivalent free insurance to a different 20% of the population?
3) If the current social system makes 10% of the population happy (utility 20% above baseline) while oppressing 30% of the population (utility 20% below baseline), should you overthrow the system and institute an egalitarian one?

There are many ways to approach these moral questions, but for all of them, a utilitarian will almost always answer yes, under the assumption that the intervention will increase aggregate utility.

However, this 'utilitarian' answer ignores the robust experimental evidence on the large and persistent differences between willingness to accept (the amount people have to be compensated to accept a loss) and willingness to pay (the amount people would pay for a gain):


People value gains significantly less than they value losses, i.e. the utility increase from obtaining a thing is much less than the utility decrease from losing the same thing. For money, time and private goods (things that are easily traded and substituted or that people have in abundance), people 'only' value losses about 40-60% more than they value gains. But for irreversible, non-tradeable changes in their circumstances, of the kinds involved in most thought experiments and public policy questions, people value losses four to six times more than they value gains. This difference between willingness to pay and willingness to accept is not primarily driven by the declining marginal utility of wealth. It is observed for changes over scales where the relationship between money and utility is approximately linear, and also observed for direct tradeoffs that do not involve money.

Therefore, all three of the interventions above will reduce aggregate utility. The utility loss experienced by the losers will be greater than the utility gain experienced by the winners. A utilitarian should not support them based only on the evidence presented. Other moral reasons must be invoked to justify the policy, or it should be shown that there are relevant side effects that change the utilitarian calculus.
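
To make the arithmetic concrete, here is a minimal sketch of question 3 in Python. The loss weight of 5 is just an illustrative pick from the middle of the four-to-six range; the population shares and utility changes come from the question itself.

```python
# Minimal sketch of question 3 above. The loss weight of 5 is an assumption,
# picked from the middle of the 4-6x WTA/WTP range cited in the text.

LOSS_WEIGHT = 5.0  # how much more strongly a loss is experienced than an equal gain

def naive_utility_change(gainers, gain, losers, loss):
    """Standard utilitarian sum: gains and losses weighted equally."""
    return gainers * gain - losers * loss

def adjusted_utility_change(gainers, gain, losers, loss):
    """Same sum, but losses weighted by the WTA/WTP ratio."""
    return gainers * gain - LOSS_WEIGHT * losers * loss

# Overthrowing the system: the oppressed 30% gain 0.2 utility,
# the privileged 10% lose 0.2 utility.
print(naive_utility_change(0.30, 0.2, 0.10, 0.2))     # +0.04: naive calculus says yes
print(adjusted_utility_change(0.30, 0.2, 0.10, 0.2))  # -0.04: adjusted calculus says no
```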

Policy Implications


1) Utilitarians should not support additional income redistribution unless the marginal utility of wealth for the people being taxed is less than 1/4th to 1/6th the marginal utility of wealth for the people receiving the benefits.
2) Utilitarians should not support coercive taxation to produce public goods unless the value of the public good is at least four to six times its production cost.
3) Utilitarians should not support coercive health and safety regulations unless the monetized benefits are at least four to six times the costs.
4) With the caveat that changing utility functions is dangerous and questionable, teaching people to value losses and gains more equally may cause a large increase in utility.

Rationality


Many people might object that it is irrational to value losses so much more than gains. This is correct, at least for relatively wealthy people in the modern world. (For people operating closer to subsistence, a loss can kill you while a gain provides relatively less benefit, so risk aversion is rational.) Being more risk-neutral will encourage you to take chances and make tradeoffs that will dramatically improve your life. Gains and losses that do not cause significant changes in your overall wealth should be valued the same.

Most philosophical discussion happens in an abstract, rational setting; utilitarians tend to be people with a more abstract and rational thinking style; and the literature on the WTA/WTP ratio did not exist 30 years ago and is still new enough that most people have not had time to internalize its findings. It is therefore understandable that all previous utilitarian discussion carried the unquestioned default assumption that a gain and a loss are to be valued the same, the way a rational agent would value them.

However, utilitarianism is about maximizing the utility experienced by actual sentient entities in the real world. Maximizing the utility that would be experienced by imaginary rational risk-neutral actors is doing something that has no connection to reality. Imposing our will on others to maximize an imaginary utility function that we think they should have is insane tyranny.

Fairness


The utilitarian position, properly understood, is extremely conservative and dramatically favors the status quo, even if the status quo is horribly unfair and a violation of rights. However, if you value rights and fairness for any reason other than their instrumental ability to improve aggregate utility, you are not a utilitarian.

Future generations


When calculating utility for people who have not yet been assigned an endowment, i.e. those behind the veil of ignorance, the traditional utilitarian calculus still applies, because there is no status quo and therefore no gains or losses. Any policy that makes total utility greater and also more equally distributed, such as #3 above, is unambiguously good. The short-term utility loss from implementing the policy may be outweighed by the utility gains for future generations. However, determining this for certain requires making decisions about discounting future utility, and the moral status of people who do not yet exist, which are beyond the scope of this post.

Final Thoughts


For the past several decades, many government agencies have been using an improper gain-equals-loss utilitarian calculus to make public policy decisions. Some of the current political upheaval can be traced to the failure of this approach, specifically its failure to adequately measure the utility loss of taking things away from people or imposing burdens on them.

If you are a utilitarian, find these policy conclusions repugnant, and cannot find any problem with my math or my understanding of the relevant literature, then please take a moment to build empathy for people who have always found utilitarian conclusions repugnant. Then I recommend examining Parfit's synthesis of rule consequentialism, contractualism, and steelmanned Kantian deontology.

Tuesday, March 14, 2017

The World of the Goblins

Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization - a niche we filled because we got there first, not because we are in any sense optimally adapted to it. - Nick Bostrom

Imagine a world inhabited by a species, call them goblins, that is just below the threshold of mental capacity required to start a technological civilization. The average goblin, or tribe of goblins, is just barely too stupid for civilization. Goblins can talk, and argue, and form coalitions, and play politics and signal status, and they can look at the world around them and dream and speculate and make art. They can use technology if someone smarter tells them how, and can sometimes even make simple tools and innovations if properly trained, but they just do not have what it takes to actually start a civilization on their own. Unless they have someone smarter to steal from, their society will inevitably forget important things and regress into stone-age savagery.

However, there is genetic variation among goblins. Sometimes, by random chance, there will be a tribe whose average mental capacity rises above the civilization threshold, for a while, until mean reversion takes them below the threshold again.

What would you observe in your world, if you were a goblin?

You would observe a world filled with the ruins of fallen civilizations. You would see the crumbling remains of great buildings and structures that nobody knows how to build. You might see that these fallen civilizations transformed the land, making roads or canals or even altering entire ecosystems to suit their needs. There would be artifacts from these civilizations, strange items that nobody knows how to make. Sometimes nobody can even guess what they are meant to be used for.

If you were part of a tribe that was clever and curious enough to translate and read texts from these ruins, you might learn their history. You would know that, sometimes, a tribe of goblins would suddenly form a civilization, gain great wealth and power, and conquer and enslave all of the surrounding tribes. But then, over time, that civilization would, for some reason, become less capable. It would coast along, accomplishing little, feeding off the riches of its glory days, until some kind of shock like a natural disaster, resource shortage, or outside invasion would destroy it and leave nothing but ruins. 

If you were smart, you might wonder exactly why these great ancient civilizations were inevitably destroyed by trivial things, at a time when they had far more resources and power than they did when they were overcoming much harder obstacles, but you are probably not smart enough to ask questions like that.

If you were a goblin in the later years of one of these civilizations, what would you observe?

You would observe that your ancestors used long words you can barely understand, and sentences with grammar that you can barely parse. They would speak of concepts that mean little to you. They might be deeply concerned with things that seem bizarre or meaningless.

You would observe that goblins in other tribes outside your civilization can never seem to form or sustain a working civilization on their own, no matter how many resources or tools you give them.

You would observe your civilization slowly decaying. You would see that it takes your people a lot of time and money to do things that were once done swiftly and cheaply. You would observe that a lot of things seem to cost more, or are of worse quality. You would see things falling apart faster than they can be built or repaired.

You might observe different parts of your civilization decaying at different rates. If your civilization happens to have some kind of system that identifies the smarter goblins and collects them in special places, then those special places will function well, and may even advance, but the places that you took the smart goblins from will inevitably regress into barbarism in a generation or two.

Different factions in your civilization would all blame different things for the decay. If you were smart, you would notice that each faction blames the thing that it has always blamed for everything bad, and recommends solutions that would increase the wealth and social status of its members. But you are probably not that smart, so you accept your faction's explanation, and believe that things will be good again as soon as you gain power over the other faction and make them do what you say.

Monday, March 13, 2017

Intellectual Property Law: Costs vs Benefits

Nothing I say here is original; it is heavily influenced by Tabarrok's thinking on the matter. This post started as a Facebook comment in response to a friend's question; I am putting it here so I will be able to find it again and refer to it easily.

The question was "To what extent should governments try to enforce intellectual property rights? ... How would we determine, in principle, whether intellectual property laws are a good idea for governments to keep enforcing? (And, what's your best guess as to what we should be doing right now?)"

The default economist answer to any question of the form 'To what extent should governments do X?' is always 'Until the marginal costs of doing more X start to exceed the marginal benefits.' I am only half joking when I say that the procedure for getting an Economics PhD is to drill the decision procedure 'Do things until marginal costs exceed marginal benefits, then stop' into your head until it would be the first thing you mumble if you were dragged out of bed in the middle of the night and asked a question of this form.

The marginal cost of each additional year of intellectual property protection is the monopoly deadweight loss, plus the loss of knowledge diffusion and the innovations that would have been created based on the thing if it were a public good. This latter term is often dramatically underestimated. This marginal cost is probably roughly constant over time for most things, but will increase over time for important foundational innovations.

The marginal benefit is the incentive to innovate and create the thing that is generated by the difference between the monopoly profits under the government-IP system and the state of nature where people keep things hidden. Note that monopoly profits are not the same as the deadweight loss; they are just a transfer and therefore not a social cost. This marginal benefit decreases over time; older IP is almost always less valuable to a monopolist because substitutes will be developed.
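
As a toy sketch of this calculus (every number and functional form below is invented purely for illustration), the optimal term is simply the last year for which the declining marginal benefit still exceeds the roughly constant marginal cost:

```python
# Toy illustration of the marginal calculus described above. Every number
# and functional form here is invented for illustration only.

def marginal_cost(year):
    """Deadweight loss plus lost follow-on innovation from one more year of
    protection; assumed roughly constant (in arbitrary units)."""
    return 10.0

def marginal_benefit(year):
    """Incentive value of one more year of protection; assumed to decay as
    substitutes appear and distant profits are discounted."""
    return 60.0 * 0.85 ** year

# Extend protection one year at a time while it still pays for itself.
optimal_term = 0
while marginal_benefit(optimal_term) > marginal_cost(optimal_term):
    optimal_term += 1

print(optimal_term)  # 12 years under these made-up parameters
```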

Finding the exact point where marginal benefit equals marginal cost is always tricky in practice, but this gives us a few obvious guidelines:

Different types of IP should have different types of IP laws. IP law should be based on how expensive something is to create, how likely people are to create it anyway for non-profit motives, what the expected profit flow looks like, and how valuable it would be in the public domain.

Some things should not get any IP protection. Others should get a lot.

If 90% of the profits from a thing come in the opening weekend, and it takes more than a week to copy it, there is no need for IP. (If copying is instant, the proper length for IP protection is a few weeks.)

If we routinely see individuals producing a thing without any expectation of payment, and/or producing it is cheap and brings status rewards, there should be no IP protection. All it does is further reward people who won the attention/status lottery.

The 20-year patent for expensively researched industrial processes seems like a decent balance when the market for the product is small; larger markets and easier scale-up imply shorter optimal patent terms.

Further exploration can be left as an exercise for the reader.

Thursday, March 2, 2017

The Story of a Lucky Economist

The career guide 80,000 Hours highly recommends getting an Economics PhD. I completely agree with their assessment. If you have an academic personality and a decent work ethic, and are lucky enough to be born with high analytical intelligence, there are few better options. If you identify as an Effective Altruist and want to make the world a better place, becoming an economist (and doing anything other than teaching at a low-ranked school) is an excellent career choice for earning to give (collecting a good paycheck and giving a lot to charity) and, if you get lucky, can also be good for direct impact (personally making the world a better place).

I got lucky. This is the story of my direct impact as an economist working for a US federal government agency. Please do not expect that this is likely to happen to you if you become an economist and start to work for a government. But something like it might happen.

Before I tell my story, I will tell the story of C, one of the veteran economists in our office. When she was interviewing me for the job and telling me about the work that they do, she told the story of her direct impact, which I will heavily paraphrase.

When the agency produces a regulation, they get scientists, lawyers, and policy experts together in a room to write the rule. If the team is well-managed and/or the economists are good at making friends, there will also be an economist in the room.

C was in the room when they were writing rules for irrigation water quality. There was some concern that pathogens in the irrigation water would contaminate the final food, so the room's consensus was that they would require all irrigation water to meet the same standards as drinking water. Then C started asking questions.

C: "Are we going to apply this rule to drip irrigation?"
Someone: "Yes, I don't see why not."
C: "Are we going to ban people from fertilizing plants with manure?"
Someone: "No, of course not, it is safe under the right conditions."
C: "So, you are writing a rule that would force people to put drinking water on manure?"

Everyone else looked at each other, and then realized that they should loosen the water quality standards under certain conditions.

C explained that one of our main jobs as agency economists was to think about the big picture and be the voice of 'common sense' in the room. It is surprisingly difficult to find people who can do this, and the 'PhD Economist' credential is a signal that you might be the kind of person who can escape groupthink and see neglected but important side effects and chains of causality.

This story was perhaps the highlight of her 20-year career in the agency. She probably saved farmers tens of millions of dollars, in total. In that one conversation, she repaid the country the value of her entire career's salary. This is what you can expect your direct impact to be as an agency economist: find ways to save people money and make life a little less difficult for them, while still accomplishing the mission of the agency. This is a good life and a noble calling, for those who think about efficiency and the big picture, and it is unreasonable to expect more.

I got more.

One day, the boss asked me to do a quick estimate of the costs and benefits of removing the GRAS (generally recognized as safe) status of PHOs (partially hydrogenated oils). You may know this as the 'trans fat ban'. Because this was technically just an exercise of existing authority and not a new regulation, it did not need an official economic analysis, but management thought it would be good to have some idea of the numbers before moving forward.

I did not know what to expect at first, but after I did the research, I found that the numbers would be huge. The costs would be measured in billions, and the lives saved would be measured in tens of thousands. There was very strong scientific evidence that trans fats are uniquely toxic among all commonly used food additives, and banning them would be the biggest public health action in decades. As a conservative estimate, they were killing eight Americans every day.

Once I realized this, I was consumed with a need to do everything I could to get the action published as soon as possible, while making sure that it and my analysis would survive any legal challenge. I was shocked that few others in the agency realized how big and how important this was. Most of management just saw it as another item on a big list of things the agency was doing, and either did not know or did not care about the numbers involved. There was nothing I could do to push it out faster, aside from explaining the rule's huge positive effects to everyone who would find me credible, which I did. But talk like that is common in the agency, because everybody wants to push out their favorite regulation.

But I could make a real positive impact by making absolutely sure that I was not delaying the action. I studied the process, identified the likely points where things would stop because people were waiting for economic numbers, and prepared for those moments. I made sure that all of my spreadsheets were flexible and complete enough to accept any variation in inputs and produce new output tables with a few minutes' work. I occasionally wrote several versions of the analysis ahead of time while waiting for management decisions, one for each plausible decision, so that I could turn it around within hours of being notified of the decision. When necessary, I worked lots of overtime to get things out the next day.

Basically, I identified the times when I was on the critical path, and did everything possible to shorten the time on that critical path. I do not know how successful I was. But even if I got the rule out one week faster, I saved over 50 lives. And if my analysis and contributions made the rule 0.1% more likely to survive a court challenge, then I saved over 25 lives. Direct impact numbers can get scary large when dealing with major public health initiatives that cause a single-digit percentage change in the heart attack rates of a country of 300 million.
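
For anyone who wants the arithmetic spelled out, here is a rough sketch. The eight-deaths-per-day figure is the one above; the 25,000-lives total is a placeholder of mine for the 'tens of thousands' the rule was expected to save.

```python
# Rough arithmetic behind the two claims above. The eight-deaths-per-day
# figure comes from the post; the 25,000-lives total is a placeholder for
# the "tens of thousands" of lives the rule was expected to save.

deaths_per_day = 8
lives_saved_per_week_earlier = deaths_per_day * 7                # 56, i.e. "over 50 lives"

assumed_total_lives_saved = 25_000                               # placeholder assumption
lives_from_small_legal_edge = 0.001 * assumed_total_lives_saved  # 25 lives per 0.1% of legal robustness
print(lives_saved_per_week_earlier, lives_from_small_legal_edge)
```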

However, depending on how you choose to interpret the Value of a Statistical Life, my second direct impact may have actually saved more lives.

Congress had passed a law requiring the agency to pass several major new regulations. The agency scheduled the regulations, starting with the most important ones that they had the most knowledge of. For the rest, they gathered information and started talking to a lot of affected producers to figure out what to do. One part of this law was a mandate to write rules about a thing that no agency in the world had ever dealt with before, and this was scheduled to be last, after a lot of research and discussion.

However, a consumer group sued the agency to publish the regulations faster. The agency lost, and was handed a court-ordered deadline for all rules, including the novel rule.

I was pulled off all other projects and assigned to this rule full-time. There were about a dozen of us in the room writing the first draft over a period of about two weeks. People would propose ideas and ask me questions about likely effects. I would poke around on the Internet and/or our internal databases for about an hour, crunch some numbers, and then give them a rough estimate.

It quickly became apparent that, even though everyone was trying to write the rule to cause as little burden as possible, following our congressional mandate would cost a lot of money.

Most rules exempt very small businesses from most or all requirements. In past rules, 'very small business' was typically defined as having up to $250,000 to $1 million in annual sales. I suggested raising the 'very small business' threshold and gave a menu of options, with cost savings and market coverage for a variety of cutoffs from $1 million up to $50 million. I went that high not because I expected anyone to choose that option, but because I understand framing and anchoring.

The team chose a threshold of $10 million in annual sales, which would save about $150 million a year compared to a $1 million cutoff, and still cover over 97% of the market by sales volume. After working to find plausible legal and scientific reasons that agency lawyers could use to justify this precedent-breaking number in court if necessary, the team agreed to propose the change to management, and management agreed.

My understanding of the power-law distribution of firm sizes, and my ability to communicate its practical effects, had saved small businesses about $150 million a year in compliance costs.
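
A toy Pareto model shows the pattern. The shape parameter and minimum firm size below are invented, and the real industry was even more concentrated at the top than this toy implies (hence the 97% coverage figure above), but the qualitative point survives: raising the cutoff exempts a large number of additional firms while giving up relatively little sales coverage.

```python
# Toy Pareto model of firm sizes, showing why a higher "very small business"
# cutoff exempts most firms while giving up relatively little sales coverage.
# The shape parameter and minimum firm size are invented; only the
# qualitative power-law point comes from the post.

X_MIN = 50_000   # assumed smallest annual sales in the market
ALPHA = 1.05     # assumed Pareto tail index; sales concentrate in big firms

def share_of_firms_exempt(cutoff):
    """Fraction of firms with sales below the cutoff (Pareto CDF)."""
    return 1 - (X_MIN / cutoff) ** ALPHA

def share_of_sales_covered(cutoff):
    """Fraction of total sales coming from firms above the cutoff.
    For a Pareto distribution with alpha > 1 this is (cutoff/x_min)**(1-alpha)."""
    return (cutoff / X_MIN) ** (1 - ALPHA)

for cutoff in (1_000_000, 10_000_000, 50_000_000):
    print(f"${cutoff:>11,}: {share_of_firms_exempt(cutoff):.1%} of firms exempt, "
          f"{share_of_sales_covered(cutoff):.1%} of sales still covered")
```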

The Value of a Statistical Life in the USA is about $10 million. People will, on average, spend about $10,000 to reduce their chances of dying by one in a thousand. By saving people $150 million a year in compliance costs, I gave them enough resources to invest in things that are expected to save 15 lives a year. Assuming that the rule lasts for about 30 years before being rewritten, I saved the statistical equivalent of 450 lives with a couple insights and a few days of work.

Of course, I also caused a small increase in the chance that a very unlikely but horrible thing would happen. The increased chance, multiplied by the base rate and the expected casualties and other economic costs, means that I caused the statistical equivalent of about 50 deaths by encouraging the team to exempt small producers. So I can 'only' claim about 400 lives saved on net.
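
In code form, the back-of-envelope calculation looks like this (all figures are the ones given above; they are expected values, not counts of identifiable people):

```python
# Back-of-envelope version of the statistical-lives arithmetic above.
# All figures are taken from the post; these are expected values, not
# counts of identifiable people.

VSL = 10_000_000               # value of a statistical life, dollars
annual_savings = 150_000_000   # compliance costs avoided per year, dollars
rule_lifetime_years = 30
offsetting_statistical_deaths = 50   # expected cost of exempting small producers

lives_per_year = annual_savings / VSL                    # 15 statistical lives per year
gross_lives = lives_per_year * rule_lifetime_years       # 450 over the life of the rule
net_lives = gross_lives - offsetting_statistical_deaths  # about 400 on net
print(lives_per_year, gross_lives, net_lives)
```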

I do not expect anything like this to happen again in my career. I was lucky enough to be in the right place at the right time, twice. Laws and regulations with that much impact only come along about once a decade on average. For the rest of my life, I will be doing much smaller things.

The main message of this story is that the direct impact of a government economist is extremely high-variance. Most of the time, you will do nothing to make the world better. Occasionally, you will do something that prevents a few million dollars from being wasted. And if you get very very lucky, you might do something that saves the statistical equivalent of hundreds of lives.

Technical Appendix for Effective Altruists

If you are a PhD economist working for the US federal government, you will typically start at GS-12 and quickly work your way up to GS-14 (currently about $120,000 a year in the DC area). Then you will stay at GS-14 for the rest of your career unless you work your way up to senior management. This is less than private industry and consulting, although not that much less when you consider the value of benefits, and is much less stressful and time-consuming. I have an excellent lifestyle, and earn enough to painlessly give over $30,000 a year to charity.

If you are the kind of person who wants to work 80 hours a week and make a name for yourself, earn lots of money, and/or have more direct impact, then I still recommend starting your career as an agency economist. Government work does not force you to work as hard as industry or academia just to stay afloat. You will have extra time and energy in your week, so you can, if you choose, use that time for self-directed career advancement and make your agency job an excellent springboard to many different high-flying careers. I personally have no plans of exercising this option, but I know many people who have. Some publish lots of papers, others network and get promoted in the government, and others leave for high-paying mid-career private-sector jobs.

Everything I have discussed is only relevant to agencies where economists are actually involved in making policy decisions or new regulations. There are some places in government that are just research shops, pushing out academic publications. Avoid them. Other places have economists churn out standard reports and analyses for people to use. I do not know how impactful this is, but it is probably still a good job.

I have been able to use my knowledge to help with Effective Altruism Policy Analytics, and to give advice to many people in the LW/EA community. If you have further questions, feel free to ask in the comments here or in another location. If you are seriously considering this career, I am available to talk. I am also available as a dissertation advisor for any EA-affiliated PhD students aiming for an agency career (having advisors outside your school is an excellent signal, and I have worked on government hiring committees and know what they look for in job market papers).