Saturday, March 18, 2017

Utilitarianism: We are probably doing the math wrong.

Summary: Almost all utilitarian reasoning about specific interventions and thought experiments is wrong, because it fails to account for the fact that taking a thing away from people causes a utility loss significantly greater than the utility gain they would get from acquiring that thing. For any significant permanent change in circumstance, a loss produces four to six times the utility change of an equivalent gain. Under a pure utilitarian calculus, intervention is therefore only justified if the gains are several times the losses.

Epistemic status [edited]: Uncertain. I may be exaggerating or overgeneralizing a temporary effect that should only be counted as a second-order term in the calculation in some or most situations. More research into the permanence of these feelings over time is required. It would also be very valuable to do a trolley problem survey with seven instead of five people on the track and see if that changes things. Thanks to everyone who discussed this.
Unexplored. Although this seems obvious now, I did not realize it last week. I have been studying related philosophy issues for over a decade and have never been exposed to any discussion of this point, either supporting or dismissing it. I have specifically looked for evidence that anyone else has made this point, and failed to find any mention of it. However, I am very suspicious of any assumption that I am the first person to realize an important thing. There is a valid outside-view reason that I might be in a position to do so (I have far more knowledge of and experience with cost-benefit analysis and preference valuation than most people who consider these questions, and I just attended a conference of the Society for Benefit Cost Analysis where these issues were explored in presentations) but I should still be skeptical of my reasoning. Feedback is appreciated.

Utilitarian Calculus


Consider the following moral questions:
1) Should you shove a fat man in front of a trolley to prevent the trolley from running over five people who are otherwise doomed?
2) Should you support a public policy that makes health insurance twice as expensive for 10% of the population, while giving equivalent free insurance to a different 20% of the population?
3) If the current social system makes 10% of the population happy (utility 20% above baseline) while oppressing 30% of the population (utility 20% below baseline), should you overthrow the system and institute an egalitarian one?

There are many ways to approach these moral questions, but for all of them, a utilitarian will almost always answer yes, under the assumption that the intervention will increase aggregate utility.

However, this 'utilitarian' answer ignores the robust experimental evidence on the large and persistent differences between willingness to accept (the amount people have to be compensated to accept a loss) and willingness to pay (the amount people would pay for a gain):


People value gains significantly less than they value losses, i.e. the utility increase from obtaining a thing is much less than the utility decrease from losing the same thing. For money, time and private goods (things that are easily traded and substituted or that people have in abundance), people 'only' value losses about 40-60% more than they value gains. But for irreversible, non-tradeable changes in their circumstances, of the kinds involved in most thought experiments and public policy questions, people value losses four to six times more than they value gains. This difference between willingness to pay and willingness to accept is not primarily driven by the declining marginal utility of wealth. It is observed for changes over scales where the relationship between money and utility is approximately linear, and also observed for direct tradeoffs that do not involve money.

Therefore, all three of the interventions above will reduce aggregate utility. The utility loss experienced by the losers will be greater than the utility gain experienced by the winners. A utilitarian should not support them based only on the evidence presented. Other moral reasons must be invoked to justify the policy, or it should be shown that there are relevant side effects that change the utilitarian calculus.
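The adjustment can be made concrete with a quick calculation for example #3 above. The 5x loss multiplier below is an assumption drawn from the middle of the four-to-six range; the population shares and utility deltas come from the example:

```python
# Sketch of the adjusted calculus for example #3. The loss multiplier
# is an assumed WTA/WTP ratio for permanent, non-tradeable changes.
LOSS_MULTIPLIER = 5.0

def naive_utility_change(gains, losses):
    """Standard calculus: a gain and a loss of equal size cancel out."""
    return sum(gains) - sum(losses)

def adjusted_utility_change(gains, losses, k=LOSS_MULTIPLIER):
    """Loss-averse calculus: each unit of loss counts k times as much."""
    return sum(gains) - k * sum(losses)

# Example #3: equalizing gives 0.2 utility to the oppressed 30% of the
# population and takes 0.2 utility from the favored 10%.
gains = [0.2 * 0.30]   # 30% of the population each gain 0.2
losses = [0.2 * 0.10]  # 10% of the population each lose 0.2

print(naive_utility_change(gains, losses))     # positive: naive calculus says intervene
print(adjusted_utility_change(gains, losses))  # negative: adjusted calculus says do not
```

The same sign flip happens for examples #1 and #2 whenever the losers' count or stake is more than one-fourth to one-sixth of the winners'.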

Policy Implications


1) Utilitarians should not support additional income redistribution unless the marginal utility of wealth for the people being taxed is less than one-fourth to one-sixth of the marginal utility of wealth for the people receiving the benefits.
2) Utilitarians should not support coercive taxation to produce public goods unless the value of the public good is at least four to six times its production cost.
3) Utilitarians should not support coercive health and safety regulations unless the monetized benefits are at least four to six times the costs.
4) With the caveat that changing utility functions is dangerous and questionable, teaching people to value losses and gains more equally may cause a large increase in utility.
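To illustrate guideline #1, here is a sketch of the marginal-utility test under log utility. The functional form and the two incomes are assumptions chosen for illustration, not claims from the literature:

```python
def marginal_utility(income):
    """Log utility: u(c) = ln(c), so u'(c) = 1/c (an assumed functional form)."""
    return 1.0 / income

# Hypothetical incomes for one taxpayer and one benefit recipient.
taxpayer_income = 500_000
recipient_income = 25_000

ratio = marginal_utility(recipient_income) / marginal_utility(taxpayer_income)
print(round(ratio))  # 20

# Guideline #1: the transfer clears the bar only if the recipient's
# marginal utility exceeds the taxpayer's by more than the 4x-6x multiplier.
print(ratio > 6)  # True at these incomes, under log utility
```

Note that under these particular assumptions a transfer between the extremes of the income distribution still passes the adjusted test; transfers between people of similar incomes do not.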

Rationality


Many people might object that it is irrational to value losses so much more than gains. This is correct, at least for relatively wealthy people in the modern world. (For people operating closer to subsistence, a loss is likely to kill you while a gain gives you relatively less benefit, so it is rational to be risk-averse.) Being more risk-neutral will encourage you to take chances and make tradeoffs that will dramatically improve your life. Gains and losses that do not cause significant changes in your overall wealth should be valued the same.

Given that most philosophical discussion happens in an abstract rational setting, and that utilitarians tend to be people with a more abstract and rational thinking style, and that the literature on the WTA/WTP ratio did not exist 30 years ago, and it is still new enough that most people have not had time to internalize its findings, it is understandable that all previous utilitarian discussion had the unquestioned default assumption that a gain and a loss are to be valued the same, the way a rational agent would value them.

However, utilitarianism is about maximizing the utility experienced by actual sentient entities in the real world. Maximizing the utility that would be experienced by imaginary rational risk-neutral actors is doing something that has no connection to reality. Imposing our will on others to maximize an imaginary utility function that we think they should have is insane tyranny.

Fairness


The utilitarian position, properly understood, is extremely conservative and dramatically favors the status quo, even if the status quo is horribly unfair and a violation of rights. However, if you value rights and fairness for any reason other than their instrumental ability to improve aggregate utility, you are not a utilitarian.

Future generations


When calculating utility for people who have not yet been assigned an endowment, i.e. those behind the veil of ignorance, the traditional utilitarian calculus still applies, because there is no status quo and therefore no gains or losses. Any policy that makes total utility greater and also more equally distributed, such as #3 above, is unambiguously good. The short-term utility loss from implementing the policy may be outweighed by the utility gains for future generations. However, determining this for certain requires making decisions about discounting future utility, and the moral status of people who do not yet exist, which are beyond the scope of this post.

Final Thoughts


For the past several decades, many government agencies have been using improper gain=loss utilitarian calculus to make public policy decisions. Some of the current political upheaval can be traced to the failure of this approach, specifically its failure to adequately measure the utility loss of taking things away from people or imposing burdens on them.

If you are a utilitarian, you find these policy conclusions repugnant, and you cannot find any problem with my math or my understanding of the relevant literature, then please take a moment to build empathy for people who have always found utilitarian conclusions repugnant. Then I recommend examining Parfit's synthesis of rule consequentialism, contractualism, and steelmanned Kantian deontology.

Tuesday, March 14, 2017

The World of the Goblins

Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization - a niche we filled because we got there first, not because we are in any sense optimally adapted to it. - Nick Bostrom

Imagine a world inhabited by a species, call them goblins, that is just below the threshold of mental capacity required to start a technological civilization. The average goblin, or tribe of goblins, is just barely too stupid for civilization. Goblins can talk, and argue, and form coalitions, and play politics and signal status, and they can look at the world around them and dream and speculate and make art. They can use technology if someone smarter tells them how, and can sometimes even make simple tools and innovations if properly trained, but they just do not have what it takes to actually start a civilization on their own. Unless they have someone smarter to steal from, their society will inevitably forget important things and regress into stone-age savagery.

However, there is genetic variation among goblins. Sometimes, by random chance, there will be a tribe whose average mental capacity rises above the civilization threshold, for a while, until mean reversion takes them below the threshold again.

What would you observe in your world, if you were a goblin?

You would observe a world filled with the ruins of fallen civilizations. You would see the crumbling remains of great buildings and structures that nobody knows how to build. You might see that these fallen civilizations transformed the land, making roads or canals or even altering entire ecosystems to suit their needs. There would be artifacts from these civilizations, strange items that nobody knows how to make. Sometimes nobody can even guess what they are meant to be used for.

If you were part of a tribe that was clever and curious enough to translate and read texts from these ruins, you might learn their history. You would know that, sometimes, a tribe of goblins would suddenly form a civilization, gain great wealth and power, and conquer and enslave all of the surrounding tribes. But then, over time, that civilization would, for some reason, become less capable. It would coast along, accomplishing little, feeding off the riches of its glory days, until some kind of shock like a natural disaster, resource shortage, or outside invasion would destroy it and leave nothing but ruins. 

If you were smart, you might wonder exactly why these great ancient civilizations were inevitably destroyed by trivial things, at a time when they had far more resources and power than they did when they were overcoming much harder obstacles, but you are probably not smart enough to ask questions like that.

If you were a goblin in the later years of one of these civilizations, what would you observe?

You would observe that your ancestors used long words you can barely understand, and sentences with grammar that you can barely parse. They would speak of concepts that mean little to you. They might be deeply concerned with things that seem bizarre or meaningless.

You would observe that goblins in other tribes outside your civilization can never seem to form or sustain a working civilization on their own, no matter how many resources or tools you give them.

You would observe your civilization slowly decaying. You would see that it takes your people a lot of time and money to do things that were once done swiftly and cheaply. You would observe that a lot of things seem to cost more, or are of worse quality. You would see things falling apart faster than they can be built or repaired.

You might observe different parts of your civilization decaying at different rates. If your civilization happens to have some kind of system that identifies the smarter goblins and collects them in special places, then those special places will function well, and may even advance, but the places that you took the smart goblins from will inevitably regress into barbarism in a generation or two.

Different factions in your civilization would all blame different things for the decay. If you were smart, you would notice that each faction blames the thing that it has always blamed for everything bad, and recommends solutions that would increase the wealth and social status of its members. But you are probably not that smart, so you accept your faction's explanation, and believe that things will be good again as soon as you gain power over the other faction and make them do what you say.

Monday, March 13, 2017

Intellectual Property Law: Costs vs Benefits

Nothing I say here is original; it is heavily influenced by Tabarrok's thinking on the matter. This post started as a Facebook comment in response to a friend's question; I am putting it here so I will be able to find it again and refer to it easily.

The question was "To what extent should governments try to enforce intellectual property rights? ... How would we determine, in principle, whether intellectual property laws are a good idea for governments to keep enforcing? (And, what's your best guess as to what we should be doing right now?)"

The default Economist answer to any question of the form 'To what extent should governments do X?' is always 'Until the marginal costs of doing more X start to exceed the marginal benefits.' I am only half joking when I say that the procedure for getting an Economics PhD is to have the decision rule 'Do things until marginal costs exceed marginal benefits, then stop.' drilled into your head until it would be the first thing you mumble if you were dragged out of bed in the middle of the night and asked a question of this form.

The marginal cost of each additional year of intellectual property protection is the monopoly deadweight loss, plus the loss of knowledge diffusion and the innovations that would have been created based on the thing if it was a public good. This latter term is often dramatically underestimated. This marginal cost is probably roughly constant over time for most things, but will increase over time for important foundational innovations.

The marginal benefit is the incentive to innovate and create the thing that is generated by the difference between the monopoly profits under the government-IP system and the state of nature where people keep things hidden. Note that monopoly profits are not the same as the deadweight loss; they are just a transfer and therefore not a social cost. This marginal benefit decreases over time; older IP is almost always less valuable to a monopolist because substitutes will be developed.

Finding the exact point where marginal benefit equals marginal cost is always tricky in practice, but this gives us a few obvious guidelines:
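As a sketch of that stopping rule, here is a toy model with an assumed constant marginal cost and an exponentially decaying marginal benefit. None of the numbers are empirical; they only illustrate how an optimal protection term falls out of the crossing point:

```python
import math

# Hypothetical parameters, in arbitrary units per year of protection.
MARGINAL_COST = 1.0    # deadweight loss plus lost diffusion (assumed constant)
INITIAL_BENEFIT = 5.0  # innovation incentive from the first year of protection
BENEFIT_DECAY = 0.12   # benefit decays as substitutes are developed

def marginal_benefit(year):
    """Incentive value of one more year of protection, decaying over time."""
    return INITIAL_BENEFIT * math.exp(-BENEFIT_DECAY * year)

def optimal_term(max_years=100):
    """Extend protection until marginal benefit falls below marginal cost."""
    for year in range(max_years):
        if marginal_benefit(year) < MARGINAL_COST:
            return year
    return max_years

print(optimal_term())  # 14 years under these assumed parameters
```

Different goods plug different parameters into the same machinery: a blockbuster movie has a huge BENEFIT_DECAY, a niche industrial process a small one, and a foundational innovation has a MARGINAL_COST that grows rather than stays constant.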

Different types of IP should have different types of IP laws. IP law should be based on how expensive something is to create, how likely people are to create it anyway for non-profit motives, what the expected profit flow looks like, and how valuable it would be in the public domain.

Some things should not get any IP protection. Others should get a lot.

If 90% of the profits from a thing come in the opening weekend, and it takes more than a week to copy it, there is no need for IP. (If copying is instant, the proper length for IP protection is a few weeks.)

If we routinely see individuals producing a thing without any expectation of payment, and/or producing it is cheap and brings status rewards, there should be no IP protection. All it does is further reward people who won the attention/status lottery.

The 20-year patent for expensively researched industrial processes seems like a decent balance when the market for the product is small; larger markets and easier scale-up imply shorter optimal patent terms.

Further exploration can be left as an exercise for the reader.

Thursday, March 2, 2017

The Story of a Lucky Economist

The career guide 80,000 Hours highly recommends getting an Economics PhD. I completely agree with their assessment. If you have an academic personality and a decent work ethic, and are lucky enough to be born with high analytical intelligence, there are few better options. If you identify as an Effective Altruist and want to make the world a better place, becoming an economist (and doing anything other than teach at a low-ranked school) is an excellent career choice for earning to give (collecting a good paycheck and giving a lot to charity) and, if you get lucky, can also be good for direct impact (personally making the world a better place.)

I got lucky. This is the story of my direct impact as an economist working for a US federal government agency. Please do not expect that this is likely to happen to you if you become an economist and start to work for a government. But something like it might happen.

Before I tell my story, I will tell the story of C, one of the veteran economists in our office. When she was interviewing me for the job and telling me about the work that they do, she told the story of her direct impact, which I will heavily paraphrase.

When the agency produces a regulation, they get scientists, lawyers, and policy experts together in a room to write the rule. If the team is well-managed and/or the economists are good at making friends, there will also be an economist in the room.

C was in the room when they were writing rules for irrigation water quality. There was some concern that pathogens in the irrigation water would contaminate the final food, so the room's consensus was that they would require all irrigation water to meet the same standards as drinking water. Then C started asking questions.

C: "Are we going to apply this rule to drip irrigation?"
Someone: "Yes, I don't see why not."
C: "Are we going to ban people from fertilizing plants with manure?"
Someone: "No, of course not, it is safe under the right conditions."
C: "So, you are writing a rule that would force people to put drinking water on manure?"

Everyone else looked at each other, and then realized that they should loosen the water quality standards under certain conditions.

C explained that one of our main jobs as agency economists was to think about the big picture and be the voice of 'common sense' in the room. It is surprisingly difficult to find people who can do this, and the 'PhD Economist' credential is a signal that you might be the kind of person who can escape groupthink and see neglected but important side effects and chains of causality.

This story was perhaps the highlight of her 20-year career in the agency. She probably saved farmers tens of millions of dollars, in total. In that one conversation, she repaid the country the value of her entire career's salary. This is what you can expect your direct impact to be as an agency economist: find ways to save people money and make life a little less difficult for them, while still accomplishing the mission of the agency. This is a good life and a noble calling, for those who think about efficiency and the big picture, and it is unreasonable to expect more.

I got more.

One day, the boss asked me to do a quick estimate of the costs and benefits of removing the GRAS (generally recognized as safe) status of PHOs (partially hydrogenated oils). You may know this as the 'trans fat ban'. Because this was technically just an exercise of existing authority and not a new regulation, it did not need an official economic analysis, but management thought it would be good to have some idea of the numbers before moving forward.

I did not know what to expect at first, but after I did the research, I found that the numbers would be huge. The costs would be measured in billions, and the lives saved would be measured in tens of thousands. There was very strong scientific evidence that trans fats are uniquely toxic among all commonly used food additives, and banning them would be the biggest public health action in decades. As a conservative estimate, they were killing eight Americans every day.

Once I realized this, I was consumed with a need to do everything I could to get the action published as soon as possible, while making sure that it and my analysis would survive any legal challenge. I was shocked that few others in the agency realized how big and how important this was. Most of management just saw it as another item on a long list of things the agency was doing, and either did not know or did not care about the numbers involved. There was nothing I could do to push it out faster, aside from explaining the rule's huge positive effects to everyone who would find me credible, which I did. But talk like that is common in the agency, because everybody wants to push out their favorite regulation.

But I could make a real positive impact by making absolutely sure that I was not delaying the action. I studied the process, identified the likely times when things would stop because people were waiting for economic numbers, and prepared for those times. I made sure that all of my spreadsheets were flexible and complete enough to, with a few minutes' work, accept any variation in inputs and produce new output tables. I occasionally wrote several versions of the analysis ahead of time while waiting for management decisions, one for each plausible decision, so that I could turn the analysis around within hours of being notified of the decision. When necessary, I worked lots of overtime to get things out the next day.

Basically, I identified the times when I was on the critical path, and did everything possible to shorten the time on that critical path. I do not know how successful I was. But even if I got the rule out one week faster, I saved over 50 lives. And if my analysis and contributions made the rule 0.1% more likely to survive a court challenge, then I saved over 25 lives. Direct impact numbers can get scary large when dealing with major public health initiatives that cause a single-digit percentage change in the heart attack rates of a country of 300 million.

However, depending on how you choose to interpret the Value of a Statistical Life, my second direct impact may have actually saved more lives.

Congress had passed a law requiring the agency to pass several major new regulations. The agency scheduled the regulations, starting with the most important ones that they had the most knowledge of. For the rest, they gathered information and started talking to a lot of affected producers to figure out what to do. One part of this law was a mandate to write rules about a thing that no agency in the world had ever dealt with before, and this was scheduled to be last, after a lot of research and discussion.

However, a consumer group sued the agency to publish the regulations faster. The agency lost, and was handed a court-ordered deadline for all rules, including the novel rule.

I was pulled off all other projects and assigned to this rule full-time. There were about a dozen of us in the room writing the first draft over a period of about two weeks. People would propose ideas and ask me questions about likely effects. I would poke around on the Internet and/or our internal databases for about an hour, crunch some numbers, and then give them a rough estimate.

It quickly became apparent that, even though everyone was trying to write the rule to cause as little burden as possible, following our congressional mandate would cost a lot of money.

Most rules exempt very small businesses from most or all requirements. In past rules, 'very small business' was typically defined as having up to $250,000 to $1 million in annual sales. I suggested raising the 'very small business' threshold and gave a menu of options, with cost savings and market coverage for a variety of cutoffs from $1 million up to $50 million. I went that high not because I expected anyone to choose that option, but because I understand framing and anchoring.

The team chose a threshold of $10 million in annual sales, which would save about $150 million a year compared to a $1 million cutoff, and still cover over 97% of the market by sales volume. After working to find plausible legal and scientific reasons that agency lawyers could use to justify this precedent-breaking number in court if necessary, the team agreed to propose the change to management, and management agreed.

My understanding of the power-law distribution of firm sizes, and my ability to communicate its practical effects, had saved small businesses about $150 million a year in compliance costs.

The Value of a Statistical Life in the USA is about $10 million. People will, on average, spend about $10,000 to reduce their chances of dying by one in a thousand. By saving people $150 million a year in compliance costs, I gave them enough resources to invest in things that are expected to save 15 lives a year. Assuming that the rule lasts for about 30 years before being rewritten, I saved the statistical equivalent of 450 lives with a couple insights and a few days of work.

Of course, I also caused a small increase in the chance that a very unlikely but horrible thing would happen. The increased chance, multiplied by the base rate and the expected casualties and other economic costs, means that I caused the statistical equivalent of about 50 deaths by encouraging the team to exempt small producers. So I can 'only' claim about 400 lives saved on net.
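The arithmetic above can be laid out explicitly. All inputs are the round numbers from the text:

```python
# Back-of-envelope lives-saved calculation via the Value of a Statistical Life.
VSL = 10_000_000              # ~$10M: what people collectively pay to avert one statistical death
annual_savings = 150_000_000  # compliance cost savings per year from the higher threshold
rule_lifetime = 30            # assumed years before the rule is rewritten

lives_per_year = annual_savings / VSL   # resources freed per year, in statistical lives
gross_lives = lives_per_year * rule_lifetime
net_lives = gross_lives - 50            # minus ~50 statistical deaths from the added risk

print(lives_per_year)  # 15.0
print(gross_lives)     # 450.0
print(net_lives)       # 400.0
```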

I do not expect anything like this to happen again in my career. I was lucky enough to be in the right place at the right time, twice. Laws and regulations with that much impact only come along about once a decade on average. For the rest of my life, I will be doing much smaller things.

The main message of this story is that the direct impact of a government economist is extremely high-variance. Most of the time, you will do nothing to make the world better. Occasionally, you will do something that prevents a few million dollars from being wasted. And if you get very very lucky, you might do something that saves the statistical equivalent of hundreds of lives.

Technical Appendix for Effective Altruists

If you are a PhD economist working for the US federal government, you will typically start at GS-12 and quickly work your way up to GS-14 (currently about $120,000 a year in the DC area). Then you will stay at GS-14 for the rest of your career unless you work your way up to senior management. This is less than private industry and consulting, although not that much less when you consider the value of benefits, and is much less stressful and time-consuming. I have an excellent lifestyle, and earn enough to painlessly give over $30,000 a year to charity.

If you are the kind of person who wants to work 80 hours a week and make a name for yourself, earn lots of money, and/or have more direct impact, then I still recommend starting your career as an agency economist. Government work does not force you to work as hard as industry or academia just to stay afloat. You will have extra time and energy in your week, so you can, if you choose, use that time for self-directed career advancement and make your agency job an excellent springboard to many different high-flying careers. I personally have no plans of exercising this option, but I know many people who have. Some publish lots of papers, others network and get promoted in the government, and others leave for high-paying mid-career private-sector jobs.

Everything I have discussed is only relevant to agencies where economists are actually involved in making policy decisions or new regulations. There are some places in government that are just research shops, pushing out academic publications. Avoid them. Other places have economists churn out standard reports and analyses for people to use. I do not know how impactful this is, but it is probably still a good job.

I have been able to use my knowledge to help with Effective Altruism Policy Analytics, and to give advice to many people in the LW/EA community. If you have further questions, feel free to ask in the comments here or in another location. If you are seriously considering this career, I am available to talk. I am also available as a dissertation advisor for any EA-affiliated PhD students aiming for an agency career (having advisors outside your school is an excellent signal, and I have worked on government hiring committees and know what they look for in job market papers).

Wednesday, November 9, 2016

Information Processing

The thing that truly terrifies me about the election is that it reveals that our best methods of understanding and predicting reality are fundamentally inadequate. Brexit was the first hint, and this is the confirmation.

Trump's victory was not due to random chance, or weather, or some last-minute surprise. Nothing really changed in the last week. The depth of his popular support was a fact of reality days before the election, and a competent information-processing system would have learned of it.

It is a known fact among social scientists that polling always underestimates support for things that are seen as violating social norms. I assumed that the polls or models corrected for this somehow. They did not.

The polling was wrong. All of it. The experts were wrong. All of them. (Trump supporters predicting victory don't count, because partisans always believe they will win. And a few people online don't count, either, because you can find any claim online.) But what truly scares me is that the markets were wrong. All of them.

Before the election, I knew of the polling bias issue and worried that Trump had more popular support than the polls showed. But I did not place any bets, because I assumed that markets had already priced in this information. They had not. In the early afternoon of election day, election prediction markets had Trump at about 25%, and the stock and foreign exchange markets were assuming a Clinton victory with high probability.

Hedge funds spent millions of dollars on private polling and models to gain information that would allow them to beat the stock and forex markets. This is exactly how markets are supposed to reveal information. They give people billion-dollar incentives to obtain information, and then when people trade on that information, the price moves to reflect the private knowledge.

But even with the best performance incentives known to humanity, and all of the tools of social science and modern technology at their disposal, the hedge funds failed to obtain a basic fact about reality. If we cannot obtain the truth about people's feelings in a situation where the stakes are this high and information is so readily available, how can we hope to predict how our actions and policies will affect people's subjective quality of life?

Friday, December 11, 2015

AI Safety Optimal Investment

It occurred to me today that I have never seen any attempt at calculating how much society should be willing to spend to prevent an AI Catastrophe. I am pretty good at this kind of thing, so here's a quick Fermi estimate:

Conceptually, this is a lot like buying a life insurance policy for the human race. There is some probability of a catastrophe, so the annual amount we should be willing to pay for insurance is the cost of the catastrophe times its annual probability.

First, we need the dollar value of the human race. This is relatively easy. We have revealed-preference estimates for the value of a flourishing human life in a rich society, so that should be about right for estimating the value people place on a good personal and genetic future, assuming a future where everyone's quality of life is about as good as the average citizen of the US in the early 21st century.

The Value of a Statistical Life in the US is about $10 million, or $1x10^7. Over the relevant time frame in which an AI catastrophe is likely, world population will likely be stabilizing at about 10 billion, or 1x10^10. So the value of the human race, in current US dollars, is about $1x10^17.

Now we find the probability of a catastrophe. For convenience, I use numbers presented in the article linked above, which is a pretty good summary of the current understanding of the field.

Artificial Superintelligence is unlikely to happen before 2025, but almost certain to arrive by 2125. So each year in that timeframe, there is a 1 in 100, or 1x10^-2, chance of ASI.

Now the numbers get very speculative, so in the grand tradition of Fermi estimates I will simply round everything to the nearest power of ten and say that there is a 10% chance that, when ASI develops, it will, without any safeguards, destroy the human race.

So, given a 1x10^-3 annual chance of losing something valued at $1x10^17, we should be willing to spend $1x10^14 to prevent that possibility.

$100 trillion a year is a lot of money. That is almost exactly equal to the total value of the entire world economy. It tells us that any policy short of destroying the entire world economy and/or killing millions of people would be worth doing if it prevented an ASI catastrophe with 100% certainty [Edit: assuming that the policy does not make other existential risks significantly more likely]. For example, a total ban on producing or maintaining any kind of computer, combined with a credible threat to nuke anyone who violates this ban, is a legitimate policy option with a positive expected payoff.

More realistically, we will continue on our current technological course and try to make things safer. Even if AI research only had a 1 in 100 chance of guaranteeing Friendly AI, we should be willing to pay a trillion dollars a year for it.
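The arithmetic above can be collected into a short sketch. All figures are the post's own round numbers (powers of ten), not precise estimates:

```python
# Fermi estimate: annual "insurance premium" against an AI catastrophe.
# Every figure is rounded to the nearest power of ten, as in the post.

value_of_statistical_life = 1e7   # ~$10 million (US VSL)
world_population = 1e10           # ~10 billion at stabilization

value_of_human_race = value_of_statistical_life * world_population   # $1e17

p_asi_per_year = 1e-2             # ASI arrives some year in a ~100-year window
p_catastrophe_given_asi = 1e-1    # 10% chance an unsafeguarded ASI destroys us

annual_catastrophe_prob = p_asi_per_year * p_catastrophe_given_asi    # 1e-3

# Expected annual loss = what we should be willing to pay to eliminate the risk
annual_premium = value_of_human_race * annual_catastrophe_prob

# Even research with only a 1-in-100 chance of guaranteeing Friendly AI
# is worth one percent of the premium
research_value = annual_premium * 1e-2

print(f"Annual premium: ${annual_premium:.0e}")   # prints $1e+14, i.e. $100 trillion
print(f"Research value: ${research_value:.0e}")   # prints $1e+12, i.e. $1 trillion
```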

Monday, July 27, 2015

IBM Watson Personality Insights

IBM claims that it can provide insights to my personality by analyzing the text I write. Let's test this by separately feeding it the text of my last few blog posts and seeing what it says about each one:

Organ Donor Safety Exemptions:

You are shrewd, unconventional and can be perceived as indirect.

You are imaginative: you have a wild imagination. You are laid-back: you appreciate a relaxed pace in life. And you are intermittent: you have a hard time sticking with difficult tasks for a long period of time.

Your choices are driven by a desire for prestige.

You are relatively unconcerned with tradition: you care more about making your own path than following what others have done. You consider helping others to guide a large part of what you do: you think it is important to take care of the people around you.

Well-Being Analysis:

You are shrewd, inner-directed and can be perceived as indirect.

You are unconcerned with art: you are less concerned with artistic or creative activities than most people who participated in our surveys. You are intermittent: you have a hard time sticking with difficult tasks for a long period of time. And you are imaginative: you have a wild imagination.

Your choices are driven by a desire for prestige.

You are relatively unconcerned with both taking pleasure in life and tradition. You prefer activities with a purpose greater than just personal enjoyment. And you care more about making your own path than following what others have done.

So far, pretty consistent. But both of those blog posts were technical academic analysis. What happens when I give it a first-person account of a more emotional experience?

Chen Guangcheng:

You are social, boisterous and unconventional.

You are empathetic: you feel what others feel and are compassionate towards them. You are assertive: you tend to speak up and take charge of situations, and you are comfortable leading groups. And you are confident: you are hard to embarrass and are self-confident most of the time.

Your choices are driven by a desire for efficiency.

You are relatively unconcerned with tradition: you care more about making your own path than following what others have done. You consider helping others to guide a large part of what you do: you think it is important to take care of the people around you.

That is a big change in the 'you are' and 'driven by' lines, and the first paragraph is entirely different. The only commonality is the 'unconcerned with tradition' and 'helping others' parts of the last paragraph.

Now, what does it think about my account of going out and helping people who got their cars stuck in the snow?

Car Shoving:

You are heartfelt.

You are empathetic: you feel what others feel and are compassionate towards them. You are unconcerned with art: you are less concerned with artistic or creative activities than most people who participated in our surveys. And you are calm-seeking: you prefer activities that are quiet, calm, and safe.

Your choices are driven by a desire for well-being.

You consider helping others to guide a large part of what you do: you think it is important to take care of the people around you. You are relatively unconcerned with tradition: you care more about making your own path than following what others have done.

Again, it says something almost completely different. It is interesting to note that the personality analyses for the last two posts seem swapped: this paragraph describes the Me who attended the Chen Guangcheng talk, and the previous one describes the Me who went out to shove cars. It is particularly funny that it reacts to my description of shoving cars around on hilly, slippery roads by saying 'you prefer activities that are quiet, calm, and safe'.

Now we go back to a more analytical post, but a different kind of analysis:

Media Musings:

You are shrewd, somewhat inconsiderate and can be perceived as indirect.

You are laid-back: you appreciate a relaxed pace in life. You are carefree: you do what you want, disregarding rules and obligations. And you are imaginative: you have a wild imagination.

Your choices are driven by a desire for efficiency.

You consider both independence and taking pleasure in life to guide a large part of what you do. You like to set your own goals to decide how to best achieve them. And you are highly motivated to enjoy life to its fullest.

The first line looks like its reaction to my other bits of analysis, and the 'desire for efficiency' is a repeat, but the rest is mostly things it has not said about me before. What might it say next?

Confusing Social Norms:

You are a bit inconsiderate, somewhat critical and excitable.

You are melancholy: you think quite often about the things you are unhappy about. You are intermittent: you have a hard time sticking with difficult tasks for a long period of time. And you are unconcerned with art: you are less concerned with artistic or creative activities than most people who participated in our surveys.

Your choices are driven by a desire for connectedness.

You consider helping others to guide a large part of what you do: you think it is important to take care of the people around you. You are relatively unconcerned with taking pleasure in life: you prefer activities with a purpose greater than just personal enjoyment.

Some new and different stuff, some repeats. That post was more of an expression of confusion and questioning than an account or analysis. Let's chalk that one up to the small sample size: it was just 360 words after I chopped out the quotes and links.

The next one should be more informative, as it combines analysis and first-person accounts, and talks about something that is more connected to my identity:

Lego in Asia:

You are inner-directed and skeptical.

You are calm-seeking: you prefer activities that are quiet, calm, and safe. You are empathetic: you feel what others feel and are compassionate towards them. And you are deliberate: you carefully think through decisions before making them.

Your choices are driven by a desire for prestige.

You are relatively unconcerned with both taking pleasure in life and tradition. You prefer activities with a purpose greater than just personal enjoyment. And you care more about making your own path than following what others have done.

The overall analysis sounds familiar and kind of accurate, but it jumps out at me that it says 'you are skeptical' to a blog post that I would characterize as being filled with the wonder of shared experience and progress and connectedness. It has not said that about any other post.

Cargo Cult Crafts:

You are excitable.

You are laid-back: you appreciate a relaxed pace in life. You are empathetic: you feel what others feel and are compassionate towards them. And you are calm-seeking: you prefer activities that are quiet, calm, and safe.

Your choices are driven by a desire for well-being.

You are relatively unconcerned with tradition: you care more about making your own path than following what others have done. You consider independence to guide a large part of what you do: you like to set your own goals to decide how to best achieve them.

Okay, that one just confused the algorithm. It says that I am both excitable and laid-back, that I 'prefer activities that are quiet, calm, and safe' when the entire focus of the blog post is about how I like to hack at real pumpkins with real knives and learn by taking risks, and that I am 'unconcerned with tradition' when I defend the traditions of my childhood against a shallow commercial substitute.

Its reaction to Important Information, Important Caveat is about the same as its reaction to most analysis posts, so no sense repeating it. But I want to feed it one last post, a rant, to see how it reacts:

Whack Rant:

You are a bit compulsive, somewhat critical and skeptical.

You are intermittent: you have a hard time sticking with difficult tasks for a long period of time. You are unconcerned with art: you are less concerned with artistic or creative activities than most people who participated in our surveys. And you are melancholy: you think quite often about the things you are unhappy about.

Your choices are driven by a desire for efficiency.

You consider achieving success to guide a large part of what you do: you seek out opportunities to improve yourself and demonstrate that you are a capable person. You are relatively unconcerned with tradition: you care more about making your own path than following what others have done.

That is actually about what you would expect from someone writing an intellectual takedown of something they hated.

Overall, it is pretty clear that the system does not have any deep personality insights. It is reacting almost entirely to the rhetorical choices I make for each particular post. Anyone who has any writing skill or understanding of rhetoric knows that you should use a different voice, tone, and approach in different situations.

The only consistent output, appearing in eight of ten writing samples, was that I am relatively unconcerned with tradition. Nothing else showed up in more than half of the results. This probably reflects my consistent use of scientific and analytical language.

Five of the results claimed that 'You are intermittent: you have a hard time sticking with difficult tasks for a long period of time.' and another five claimed 'You are unconcerned with art: you are less concerned with artistic or creative activities than most people who participated in our surveys.' I consider both of these claims to be dubious, and I am not really sure where they came from.

Four of the results claimed that I am shrewd, empathetic, driven by prestige, guided by helping others, and/or unconcerned with taking pleasure in life. Again, that seems kind of random and not really connected to who I am.
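A tally like the one above is easy to make reproducible by counting trait phrases across the outputs. The sample data below is abbreviated and hypothetical, standing in for the real Watson results, just to show the method:

```python
from collections import Counter

# Hypothetical, abbreviated trait sets standing in for the Watson outputs
# discussed above (not the real data; for illustration only)
samples = [
    {"unconcerned with tradition", "shrewd", "intermittent"},
    {"unconcerned with tradition", "unconcerned with art", "intermittent"},
    {"empathetic", "unconcerned with tradition", "helping others"},
    {"empathetic", "unconcerned with tradition", "calm-seeking"},
]

# Count how many samples each trait appears in
counts = Counter(trait for sample in samples for trait in sample)

for trait, n in counts.most_common():
    print(f"{trait}: {n}/{len(samples)}")
```

With the real ten outputs plugged in, this would directly confirm (or refute) the "eight of ten" figure for the tradition trait.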

As a final bit of fun, let me plug the output of the system into its input, so we can see what it says about itself:

You are confident and generous.

You are assertive: you tend to speak up and take charge of situations, and you are comfortable leading groups. You are calm under pressure: you handle unexpected events calmly and effectively. And you are respectful of authority: you prefer following with tradition in order to maintain a sense of stability.

Experiences that give a sense of well-being hold some appeal to you.

You are relatively unconcerned with tradition: you care more about making your own path than following what others have done. You consider helping others to guide a large part of what you do: you think it is important to take care of the people around you.