Monday, September 9, 2024

Mortality Cost of Taxation

Here are some interesting questions:
1) If Japan raises $1 billion by increasing taxes on investment income, how many people will die as a result?
2) If Kenya raises $1 billion with a bundle of sales taxes, how many people will die as a result?
3) If the USA raises $1 billion with a tariff on Chinese goods, how many people will die as a result?
(If it is not obvious to you that these numbers are all more than zero, consider the basic facts that poverty kills people, and taxes take money from people. I like telling people that cost-benefit analysis is how to do "primum non nocere" for public health policy.)

In a competent civilization, there would be an academic sub-discipline devoted to these Mortality Cost of Taxation (MCT) questions. University professors would research MCT methodology and develop MCT best practices, and there would be a neutral MCT agency in all G20 countries that estimated the mortality effect of every tax policy under consideration. Then people would estimate how many lives would be saved by spending the tax money, and compare. (In reality, they would use a metric like the DALY, to account for both quality and quantity of life lost, but I am focusing on deaths in this discussion to keep things simple and vivid.)

In the world we live in, this academic discipline only exists in my head (as far as I know, and I would love to learn otherwise), and I suck at it. The best answer I can give is:
1) Ask a taxation or tariff economist how much the policy will impact GDP.
2) Divide that number by the country's GNI per capita.
3) Divide that number by 100.

So, in Japan (GNI/cap $40k), every $4 million in economic impact will kill someone, and in Kenya (GNI/cap $2k), every $200,000 in economic impact will kill someone.
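The three-step recipe above can be sketched in a few lines of Python. This is a rough illustration only; the GNI-per-capita figures are the approximate values from the text, not precise data:

```python
def deaths_from_tax(economic_impact_usd, gni_per_capita_usd):
    """Rough MCT estimate: one death per (100 x GNI/capita) of economic impact."""
    return economic_impact_usd / gni_per_capita_usd / 100

# $1 billion raised in Japan (GNI/cap ~$40k): one death per $4M of impact
print(deaths_from_tax(1e9, 40_000))  # 250.0

# $1 billion raised in Kenya (GNI/cap ~$2k): one death per $200k of impact
print(deaths_from_tax(1e9, 2_000))   # 5000.0
```

Note that the divisor of 100 encodes the assumption that the MCT is roughly twice the VSL, given that VSL tends to run around 200 times GNI per capita in rich countries.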

My answer assumes that all taxation is basically the same, and that the MCT is roughly twice the Value of Statistical Life (VSL). This is the best I can do, and it probably isn't wrong by more than an order of magnitude.

VSL estimates come from looking at the wage differences between similar jobs with different occupational fatality rates, for example lumberjacks and gardeners. In the USA, for every $13 million or so in increased wages across jobs we assume are otherwise identical, one person dies on the job. In Kenya, many more people will die for $13M in increased wages.

If everyone responded to the tax increase by working a more dangerous job, then the VSL would be the correct estimate of the MCT. But this is obviously impossible at a societal level; the composition of jobs is roughly fixed and the tax change does not alter occupational safety standards.

If everyone was perfectly rational and made consistent tradeoffs between money and mortality risk, then the VSL would be the true estimate of the effects of people reducing their consumption in response to a tax increase. But we know that the all-cause mortality rate among people with dangerous professions is higher, so they cannot be turning their extra income into lower risk via consumption at the VSL rate.

For this reason, and for various other handwavey reasons I can't formally justify (e.g. budget constraints, short-run elasticities being lower, behavioral-economics effects, and vaguely gesturing at the literature on how people respond to income shocks), I assume that broad-based income loss from taxation or tariffs turns into mortality at roughly twice the VSL rate. I still use the VSL in all my work, though, because that is the current methodological standard and I can't cite anything better.

If you think you can do better, or know someone who has done better, or would like to collaborate to find ways to do better, please let me know.

PS Obviously the story becomes different with more targeted taxes. Taxing rich people causes less mortality (assuming that the tax incidence actually falls on them). Taxing harmful substances or things with negative externalities could have zero or even negative MCT, if it doesn't move much activity into a violent black market. I would assume that taxing positional goods causes roughly zero MCT, but there may be weird side effects I am not considering.

PPS It's a weird feeling to be so bad at something, and also the best in the world, because nobody else is even trying. I have looked, and can't find any discussion of this. Maybe I'm just missing it. But in all the discussion of VSLs and how to use them and how to derive them, nobody seems to justify the methodology by pointing to the harm of taxation, or to make the obvious point that taxation kills people so it would be nice to know if your policy is saving more lives than it kills.

Tuesday, October 10, 2023

Notes on Gaza

I want to start by making it absolutely clear that I am on Israel's side. The people of Israel deserve to live in peace and security, and achieving this should be our main concern.

I'm writing this because it's epistemologically virtuous to make public predictions. I put roughly 80% confidence on any statement that I make here. If I'm wrong about this, and I hope that I am, I will update my worldview accordingly.

This attack has been called Israel's equivalent of 9/11. I agree. They were both horrible terrorist attacks that wounded the soul of a nation. And I fear that the government of Israel is about to make exactly the same mistake that the US government made after 9/11. 

The correct response to 9/11 would have been to lock cockpit doors, invest in better policing and intelligence, use some diplomacy to reduce the amount of money going to Wahhabi training centers, and do nothing else. Instead the USA started a punitive military action without a real plan. That military action killed more of our people than the actual terrorist attack did, burned three trillion dollars (which using standard VSL calculations is the equivalent of killing 300,000 Americans), seriously harmed our moral standing in the world, and did very little to protect us or prevent future attacks. (I credit the lack of subsequent attacks to better intelligence, better police work within our borders, and a few targeted assassinations.)
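The dollars-to-lives conversion in the parenthetical can be made explicit. This sketch assumes a US VSL of roughly $10 million, which is the figure implied by "three trillion dollars = 300,000 Americans" (actual published VSL estimates vary):

```python
# Assumed US VSL, implied by the text's own arithmetic; not an official figure.
US_VSL = 10_000_000  # dollars per statistical life

def vsl_equivalent_deaths(cost_usd, vsl=US_VSL):
    """Express a dollar cost as the number of statistical lives it represents."""
    return cost_usd / vsl

# Three trillion dollars of war spending:
print(vsl_equivalent_deaths(3e12))  # 300000.0
```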

I think that the correct response to the Gaza terror attack would be to upgrade air defense systems, install drones or turrets that are capable of shooting down paragliders, strengthen the border fences, maybe encourage civilians to be better-armed, and do nothing else. But the government is making an emotional decision to go in guns blazing without good intel and without a plan.

I think that going into Gaza is going to stick the Israeli army into a sausage grinder, like the US army in Iraq or the Russian army in Ukraine. I predict that more Israelis will die as a result of reacting to the terrorist attack than were killed in the actual attack, and that military intervention will do little to improve their long-term security.

The hostages are a much harder question, but the fact that Hamas took so many of them is an indication that previous policy has failed. Governments need to make a credible promise to never negotiate with hostage takers. I would prefer to live under a government that never paid ransom, but instead worked to find and carpet bomb any facility where hostages are being held. This policy would dramatically reduce the chances that I was taken as a hostage, and even if I was, I would much rather die in an airstrike than be used to give a billion dollars in funding to a terrorist group that would use that to kill more people.

Without any moral condemnation, and with a full understanding of why they chose to do this, I will point out that the Israeli blockade of the Gaza Strip has been a failure. It did not prevent attacks, and it gave people motivation to launch them. Punitive collective punishment does not work; it breeds more terrorists. The economic situation in Gaza has made it very easy to recruit people into a terror group.

Israel has chosen to shut off the fuel and electricity and other essentials of modern life to a group of 2 million innocent people and a few thousand terrorists. This policy will create more terrorists faster than Israel can kill them. If my little sister froze to death because a foreign government shut off the heating oil and threatened to bomb any aid convoys, I would want to go out and start killing them, and there are a lot of young men in the Gaza Strip who will react the same way.

The correct long-term solution would be to give Gaza to Egypt. Cede the territory, let the city be part of Egypt, integrated into the Egyptian economy with free movement of people, and fully under control of the Egyptian police and security forces. Let young men leave and work elsewhere in Egypt, so you don't have the 90% youth unemployment that always breeds violent rebellion. This would not reduce the risk of Gaza-originating terrorist attacks to zero, but it would make Israel a lot safer than any other policy.

The USA and European countries should negotiate this transfer, if necessary paying Egypt some money to compensate them for the costs of integration.


Thursday, February 17, 2022

Why you are psychologically screwed up

Epistemic status: deliberately written in simplified tale/myth mode. I could go off on so many tangents on the actual complexity and nuance of everything, but I wanted to keep it tight. I encourage you to 'unpack' the points by comparing them to things you know, and/or framing what I say in different ways, to see if the main theme/message makes any sense to you.


1: Monkeys Become Ants

In the beginning there were monkeys. Individuals had their own models of physical reality, and desired things, and took actions based on their desires and reality models. This often involved fighting the other monkeys to get something. Life was solitary, poor, nasty, brutish, and short.

And then something happened. Some monkeys became more cooperative and social. They formed packs, and the packs were more successful than the individuals. The monkeys in packs coordinated their actions and learned from each other. Homo sapiens crushed its competition and filled the world.

Making the pack work required a lot of psychological engineering. People had to be programmed to replace their own desires and world models with the shared desires and models of the pack, at least when it was necessary for the pack to cooperate and do something. People who were unable or unwilling to do this were selected against.

Over time, cultures and religions and ideologies learned new tricks to enable cooperation at larger scales. They became more aggressive about replacing the desires and world-models of people with those of the group. People were acculturated to deeply internalize the group's desires and have faith in the group's description of reality, such that going against either was somewhere between morally abhorrent and unthinkable.

The groups that were capable of generating cooperation/brainwashing at scale crushed the ones that had less ability to do this. Eventually giant hiveminds formed, after figuring out how to brainwash millions of people into useful cooperation. They came to rule the world, and people who were unable or unwilling to be assimilated into these mega-hiveminds were selected against.

In short, monkeys were psychologically manipulated into sometimes acting like ants. It didn't matter how traumatizing this process was, because the hive minds that did it won the fight against those who did not, and filled the world with their genes and memes.

Because of this, most people usually operate on a level of social reality, rather than physical reality. The hivemind doesn't want you to have your own model of the physical world, unless you are one of the few ordained specialists in having that job. Having your own model is a rebellion, a rejection of the cooperation protocol, and marks you out as a cancerous or treasonous element.

Similarly, having any desires not programmed or approved by the hive marks you as a threat. You are rebelling by doing the abhorrent/unthinkable thing. If the hive learns about this, you are in trouble. If you make it common knowledge that you have this, and are proud and open about it, such that you are making a public challenge to the cooperation protocol, you are in very deep trouble.

2. Evolved Hypocrisy

But humans did not become completely eusocial; we continued to reproduce as individuals. And there was plenty of opportunity to steal from the hive for personal gain, for those clever enough to do so without getting caught.

In addition, being one of the people with power at the top of society requires you to act as an individual monkey. You have to see the world as it is, and take action and make actual decisions. Hiveminds that completely destroyed the ability of their leaders to act as agents did not last very long. In addition to the leaders, the successful hive minds also allow a small number of artists or inventors who think creatively, if they prove their loyalty and quality in other ways.

So there was always a trade-off in incentives; a complicated tension in strategies. Someone who was too individual was rejected by the hive and exiled or killed for being a criminal or misfit. But someone who was too much of a drone, who sacrificed too much of themselves to the hive, also failed to reproduce. The only people who survived were the ones who managed to resolve this tension and manage the trade-offs appropriately. Everyone else was selected against. 

And the ones that really prospered were the 'sleeper agents' who rose through the ranks, or gained acclaim, while seeming to be very loyal to the hive and following its commands, and once they got enough esteem and resources, acted as a self-interested monkey (ideally while making the hive more powerful, but this is at most a secondary goal).

This manifests itself in several ways, such as widespread hypocrisy, self-deception, hidden motives, and social desirability bias. But the main evidence is Near/Far Thinking. Roughly, far mode is the hive's programming, and near mode is taking care of yourself and your genes. People who cannot manage this switch appropriately will be selected against, and one of the better predictors of being able to do this fluently is your brain lying to your conscious awareness about what you are doing. Another piece of evidence is how easily and thoroughly organizations or subcultures can suborn individuals and turn them into drones who only value the hive.

So, for hundreds of generations, people were selected for their ability to be hypocritical, play social games, and selectively violate rules when necessary while pretending to follow these rules in public. 

They were also selected for their ability to deploy appropriate strategies. Being more monkey-like (being an agent with unique goals and ideas) is the high-variance play. If you fail because you are not good enough, you're a misfit and criminal and get eliminated. But if you succeed, you rise to the top. Acting like an ant, maintaining pious or bourgeois respectability and following the rules for a non-elite existence, was usually the safe strategy if you had the ability to follow the rules and be productive, but not the skills to play the game of thrones.

3: Mythologies of Desireless Reaction

All of this has been happening for thousands of years. But recently, the hive's programming changed in ways that further crush individual desire and agency, disrupting the balance that previously existed and causing even more psychological damage.

One of the main tools that hive minds use to acculturate and brainwash people is narrative fiction. The widely shared foundational stories and myths of their society tell people what they should desire and how they should behave. In this way people are deeply programmed at an early age to act in a certain way.

In the past, most stories were heavily gender-coded. Boys were told to act a certain way, which usually involved having desires and being actors or subjects, but in a responsible and controlled way. Girls were told a different set of stories, which usually involved following the rules and being passive objects of desire. This was bad and hurt people, but as I've said before, the systems that made themselves more powerful were selected for, regardless of the harm to individuals, and systems that made more of their people more ant-like had a competitive advantage.

For most of history, whatever the medium, from Odysseus to Tom Sawyer to pulp magazines, the male stories told of heroes who had desires and took action to achieve those desires, but who did so in a virtuous way, while respecting the (more important) rules of society. In this way they modeled good behavior, teaching people to strive for things without hurting others. In this fiction, the importance of following certain social rules was emphasized, often much more so than in modern fiction, but the heroes were allowed/encouraged to follow their goals as well.

And then The Lord of the Rings was published, and everything changed.

This was a huge cultural event. It spread widely. More and more content creators started to imitate it in various ways, consciously or not. More and more narrative fiction published since then has followed its basic pattern:

The hero does not have personal desires. The hero does not wish to do anything other than live a normal boring life according to society's rules. And yet, they are forced to take action by an outside evil force that wishes to impose a change on their society. For some reason, the hero is chosen, through no merit or virtue or action of their own, to be the one person who must protect their society. But they are quite clearly an object of the story or its situation, rather than a subject.

Star Wars mostly follows this pattern. The hero is living on a farm, evil people force him into action, and he acts for the benefit of others with magic powers that he did nothing to earn or deserve. Harry Potter follows this pattern even more so, as does basically every superhero story.

In the very earliest superhero stories, they were, somewhat, proactive agents of change. They cleaned up the criminals that were a normal part of their society. But nowadays, superheroes are always reacting to some kind of supervillain who has an evil plot to change the world. Their success is defined by returning to the status quo. (Avengers: Endgame is especially notable for this; they have unlimited power and just use it to restore what was lost. Nobody even mentions improving things; the least they could have done is to get rid of malaria, AIDS, and TB in the process.)

In all of these works, which have become the de facto civic religion of our society, the heroes are all fundamentally both reactive and reactionary. They're not trying to accomplish anything; all they're doing is stopping someone else from taking action. It's vetocracy as a foundational myth.

And so, through an incredibly powerful multi-sensory experience, repeated many times and in many variations in many narratives, all young children nowadays are trained to believe that the ideal behavior is to want nothing, to take no action, but to rise up in arms against anyone who would change the structure of your society. They are taught that taking initiative is bad, that researching technology is bad, that making changes is bad, and that all good people must resist these things; but of course you should not take the initiative to resist them, some situation outside you must force you to take action. We are brainwashed to want nothing more than to be a high-status protagonist in a story written by someone else.

And it is not just the fiction. Many other tools of social programming are also saying the same thing.

As you might suspect, this messes people up a lot. And as you should expect from knowing that your conscious brain is mainly a PR agent, people will not know that they are messed up, or why.

It's not a coincidence that our most successful people don't consume fiction, and that some of our more original and agent-like thinkers were socialized by old science fiction rather than modern narratives.

If you are trying to raise a child to be individual and sapient, to have its own desires and goals and to believe that it is okay to work to achieve those goals based on a personal model of the world, practically your only hope is to have no screens they can access, and keep a library of nonfiction scientific books and maybe some old science fiction and even older stories about responsible heroes with initiative.

4: The Over-Programmed People

Even with the change in social programming, most baseline humans are hard-wired with enough subconscious cynicism to figure things out, and enough hypocrisy to act for themselves in spite of the rules. Thousands of generations of evolution have shaped them to handle this situation. They act according to the rules and standards in public, while maintaining some capacity for private action that breaks the taboo against initiative. Usually they do this by arranging situations of plausible deniability (humor and irony are used heavily here), or simply breaking the rules after sending or receiving signals that the other person is willing to break the rules with them.

But some people don't figure out how to play these complicated games, and they suffer.

There's a behavior cluster that I've noticed that I don't think we have a name for. There is a lot of correlation between having this behavior and having scrupulosity or being a high-functioning autistic, although neither is quite the same thing. Basically the behavior cluster is acting too much like an ant and not enough like a monkey.

Even without the cynicism and hypocrisy, most neurotypicals have a defense against the anti-agent programming. They have a high baseline of selfish-monkey traits, which means that when society's behavioral programming forces or trains them to follow the rules and be obedient, they end up somewhere in the middle. They restrain their instincts for grasping desire when necessary, and follow them when appropriate.

But some people don't have this. They believe the rules, internalize them deeply, and follow them without adding in any monkey instinct. They think that if all they ever do is follow the rules properly, then they will be taken care of. They are too nice, too agreeable, too passive, too much like a drone. They instinctively share information freely and honestly, and jump to follow the commands of a perceived authority.

(Things can get especially rough for someone who believes the rules taught in school science classes about using evidence and the scientific method. They end up as too much of a drone, without significant individual desires other than the desire to be respected and accepted by the hive. But they are also too much of a monkey, because they insist on using their own mental model of reality rather than deferring to society's.)

The normal people reject the over-programmed people for being freaks, even though the over-programmed are doing exactly what everyone around them says everyone should be doing. This is partly because of an instinctive revulsion for anything that is seen as different, because most mutations are bad and people who made friends with mutants were usually selected against. But it is also true that, compared to a clever hypocrite, the over-programmed do not make good friends or partners. They're less likely to obtain resources, and less likely to preferentially give those resources to their friends. When you are an obedient drone, equally giving to every member of your society, and displaying no preferences or nepotism, nobody in your society has any incentive to be your friend.

It's likely that a lot of people reading this are over-programmed people. I personally am and/or was one. In the past, I would rage against the unfairness of the world. But the more I learn about the way the world works, and how selection has inevitably shaped human behavior, the more I learn to replace my moral indignation over other people's hypocrisy with an understanding that I'm just a dysfunctional mutant.

Epilogue: The Future of Humanity

Some people, who have internalized the old system of gendered stories that program people in different ways, call the result of the new programming 'the feminization of society'. A lot of men have noticed this process and complained about it, and feel nostalgic for the old model of civilization where only women were programmed to lack desire and agency.

I think that it was bad in the past when women were robbed of agency and desire, and that it is even more bad now when everyone is suffering this kind of brainwashing and programming. 

It is possible that civilization can only function when most people are turned into ants. But it seems to me that if you have more state capacity, and fair and automatic systems of law enforcement, then brainwashing people is much less necessary. They just need to be smart enough to understand that hurting people or stealing things will lead to automatic and overwhelming force directed against them.

However, it may not be possible to arrange this, and it may be true that the old gendered model is more sustainable than denying agency to everyone, by which I mean it is better able to project power and crush its opponents. If some non-Western civilization figures out how to combine technological competence, high economic productivity, good epistemology and strategic thinking, and an above-replacement fertility rate, then they will eventually win, and future societies and generations will see 'late Western civilization' as a warning sign or a failed experiment.

Monday, January 3, 2022

The Inaccuracies in Don't Look Up

An interesting irony of the movie Don't Look Up is that, while it is very good at understanding the media and manipulating public opinion, it is profoundly unconcerned with showing an accurate portrayal of the physical world, and it knows that its audience is equally unconcerned.

They might have done this intentionally, as a meta-level joke; or they might be assuming that the audience would consider these acceptable breaks from reality in a metaphorical or allegorical tale; or they might simply be ignorant. In any case, in order to have that discussion, you have to at least notice the inaccuracies. If you were not aware of these, i.e. if your brain just ignored the lies someone was telling you because they were embedded in a narrative that validates your political beliefs, then you're part of the problem.

Here is a partial list of some of the more obvious and egregious scientific blunders, roughly in the order that they appear. I am not an astronomer or physicist; I am just someone who knows a few basic facts about science (spoilers, obviously, although this is the kind of movie where they really do not matter):

1) It's standard practice among observatories to confirm sightings with lots of other observatories before going public or informing any authorities. It is extremely unlikely that a single team would bear the responsibility of informing the world; it would be a worldwide collective press release from all the observatories.
2) All of the nukes in the world would not be enough to meaningfully change the trajectory of a 6 km, hard, dense, rocky object a few months from impacting Earth. NASA does not have any contingency plans for an object of that size and approach.
3) Nobody ever launches that many rockets simultaneously from the same site. They would go up from different locations, or be launched in sequence to rendezvous in orbit.
4) I don't care how many rare earth metals are on that thing, there's no way it's worth more than the world's annual GDP. A large increase in available quantity would quickly drive the per-unit price down to almost nothing.
4a) Rare earth metals are not some magic crystal that makes wealth. They are like spices in a recipe. Yes it sucks if you run out of turmeric, but someone delivering a metric ton of the stuff to your kitchen will not give you any more actual food, or increase your ability to make it.
5) Rare earth metals aren't actually that rare, they're just a bit expensive to collect and refine. Retrieving them from an impact crater at the bottom of the Pacific Ocean would be vastly more expensive than opening up existing reserves that people aren't bothering with right now.
6) Breaking up the object and allowing each piece to hit the planet would do about as much damage as the whole thing hitting at once.
7) There's no possible way that you could see the comet while driving a car on a main street in a town. You can barely see any stars under streetlights because of the light pollution.
8) Russia is much more competent than that at launching nukes into space. So is China for that matter, and there's no reason they would need to do a single joint launch. They'd each launch things up separately and coordinate the impacts after they were in space.
9) Even if the Russian mission failed, it would not have failed in a nuclear fireball. Nuclear missiles don't work like that. They are specifically designed to not blow up accidentally in case of a launch vehicle misfire. They stay disarmed until you tell them to arm.
10) Big data analytics does not work that way. Just no. Not even a hypothetical super-intelligent AI could have accurately predicted the president's fate with the information available at that time.
11) A black man would probably have his own family to be with, rather than being a white man's lifestyle accessory.
12) Our civilization does not have the ability to put people in cryo-sleep, nor could we make a fully automated interstellar colony ship. These things are many decades away, not something a tech billionaire could have hidden away in a lab.
13) Objects blown into space by a planetary impact would no longer be recognizable.
14) The post-credit sequence basically invalidates the entire movie. If someone is capable of crawling out of the rubble of a NASA command center, which to be clear is not a hardened bunker like Cheyenne Mountain, and finding an atmosphere with breathable air and livable temperature, then millions of people could easily have survived the impact with a bit of prepping.

Tuesday, July 27, 2021

Algorithmic Feed Equals Publisher

This post is designed to advance and briefly defend the proposition that The US government should treat any company with an algorithmic feed as the publisher of all content delivered via that feed, meaning that they are legally liable for it.

There has been some talk recently about repealing or reforming Section 230 of the Communications Decency Act. A blanket repeal would be a disaster, basically destroying some of the best sites and communities on the Internet, like Wikipedia.

However, the impulse to do something is understandable. It has become more and more apparent that algorithmic feeds harm society in various ways. The AI is implicitly or explicitly designed to make you addicted to the platform by feeding you things that make you more emotionally reactive. This wastes time, harms mental health, and causes harmful content to proliferate.

I argue that it is intuitive, sensible, and would have good effects to treat the algorithmic feed as a publishing decision. Once a company filters and delivers content in an opaque way, it is no longer a neutral platform. It has made a choice to deliver some content over others. Therefore, it should be held liable for the content it chooses to deliver. This can be done by amending Section 230 to remove liability protections from any content delivered via a newsfeed or recommendation engine that the company controls.

This proposal would not harm blogging platforms, Wikipedia, Reddit, or any platform with user-generated content where people proactively choose what they want to see, or where the decision is made by a transparent system of upvotes and downvotes. And if you wanted to program your own algorithmic selection into an RSS reader that scrapes a lot of content, you could.

Social media companies would have to stop using algorithmic feeds or be buried in a storm of lawsuits. Some would turn into something resembling newspapers or TV stations, with a curated feed of chosen content. Others would switch back to delivering all the content from the people you select, and only them. A third option is to let people subscribe to a moderated topic and see the most-upvoted things in that topic.

Probably most would use a mix of these strategies. After a brief flurry of reprogramming and experimentation, the disruption would be minimal and the companies would probably continue on with their current business model. Many of the harms of the current system would remain, but it would probably be less addictive, and probably harder for something to go viral.

Addendum: We would need to be sure to carefully and narrowly define what we mean by algorithmic feed, to make sure that spam filters on blog comments are not affected.

Wednesday, May 12, 2021

Unique Entity Ethics Solves the Repugnant Conclusion

I can't take credit for this idea; it has been 'in the water' of the EA/rationalist community for a while, for example in Answer to Job. It is the kind of thing I tend to assume everyone in the community already knows, and yet people constantly seem surprised when I explain it. And apparently the repugnant conclusion is still considered an 'unsolved' problem by the philosophical community.

The key insights are:
1) Multiple identical instances of an entity matter morally exactly as much as a single instance, and
2) Identity is a continuum, not a discrete thing.

If point 1 is not obvious to you, consider this thought experiment:

Imagine two people uploaded into computer brains. One of them is uploaded onto a normal computer, and another one is uploaded into a giant computer where all the wires and circuits are 100 times as large in cross-section, so that 100 times as many electrons flow through them. Does the person in the giant computer have 100 times as much moral worth as the person in the normal computer?

Of course not, that would be ridiculous. So consider the following sequence of events:

1) An uploaded person is running on a flat computer chip.
2) The computer chip is made twice as thick.
3) A barrier of insulation is placed halfway through the chip, so that the flow of electrons is exactly the same.
4) The two sides are moved apart. All of the inputs and outputs remain exactly the same; the flow of electrons is completely unchanged from step 2.

It should be clear that no new person has been created. The moral worth of the thing in step 4 is exactly the same as the moral worth of the thing in step 1.

Of course, if the inputs begin to diverge, then at some point the two entities will become two different people, and then they will each have full moral worth. Even with identical inputs, random chance could cause them to remember or think about different things, so that they diverge over time. But as long as they are identical, then they are just like someone running on a computer with really thick wires.

This insight clears out a lot of nonsense. A universe tiled with hedonium (identical things experiencing bliss) has exactly as much moral worth as one unit of hedonium. There is no value in wasting any resources or making any sacrifices to instantiate multiple copies of anything for utilitarian reasons.

Now, consider the question of when the diverging entities start to count as unique individuals. It seems silly that a single electron path changing would make a copy count as a unique individual, taking its marginal moral worth from 0 to 1. But you can say that about every electron. At what point does it become a new person?

Consider a thought experiment where you take a drug that will wipe out your memories at the end of the day. You will go to sleep, and then wake up as if that day never happened. How will you feel at the end of the day? It will probably be weird and unpleasant, but very few people will treat this as though they are about to die (and if you do count this as a death, then you must also believe that every time someone gets drunk it counts as someone dying). And very few people would think that giving a person such a drug should count as murder.

So, we already implicitly treat an entity who is only very slightly different, by the equivalent of a few hours of memories, as less than a full extra person in a moral calculation. The difference between two slightly-divergent ems is very similar to the difference between the person with an extra day of memories who will be lost and the person who still remains. I do not know how much less the near-copy counts. Maybe the pair counts as 1.001 people, maybe as 1.5. The main point is that there is some function that takes as input the difference between an entity and a similar one, and outputs the marginal moral value of that entity existing, and this function is continuous.

An important implication of this is that near-identical copies of AIs or ems doing near-identical tasks matter only slightly more than one of them. So fears of a moral catastrophe in this area are probably exaggerated.

And there is a limit to the total moral value of a population, based on how distinct the members of the population are. And once you start talking about vast numbers of people, it gets harder and harder to add a new member to the population who is actually distinct. Anyone new will be a minor variation of an existing person, and therefore count less. This means that you cannot create arbitrary amounts of value by simply expanding population size, which means that the Repugnant Conclusion is much less of a problem.
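The continuous-valuation idea above can be sketched as a toy model. The specific functional form below is invented purely for illustration; the post only claims that *some* continuous function of this general shape exists:

```python
import math

def marginal_value(divergence):
    """Marginal moral value of adding an entity that differs from the
    closest existing entity by `divergence` (0 = exact copy, larger =
    more distinct). Continuous, 0 for an identical copy, saturating
    near 1 for a fully distinct person. The exponential form is an
    illustrative assumption, not a claim from the post."""
    return 1.0 - math.exp(-divergence)

print(marginal_value(0.0))   # an exact copy adds nothing: 0.0
print(marginal_value(0.05))  # small: the "extra day of memories" case
print(marginal_value(10.0))  # a fully distinct person adds ~1.0
```

This also captures the bound on population value: once every possible new member is only a minor variation of an existing one, each addition contributes a marginal value near zero, so total value cannot be grown arbitrarily by adding population.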

Thursday, April 22, 2021

Bounding Uncertainty of Addiction Costs

Measuring the net economic and social costs of a plausibly-addictive substance or behavior that people make moral judgments about is extremely difficult. Because of the addiction, we can't trust revealed preference, and because of social desirability bias, we can't trust stated preference. Without any reliable analytic technique, figuring out what's really going on is quite hard. So in the absence of widely agreed upon rigorous scientific research, people often default to their personal beliefs and judgments about what's really going on.

However, we can bound our uncertainty. We can measure what the total would be under various extreme simplifying assumptions, and then the true value must be somewhere in between.

Traditional economic techniques for measuring total social surplus are based on the assumption of forward-looking rational consumers and a non-addictive substance: people receive benefits from the thing, and take into account all harms they will personally suffer when deciding how much they will pay. In this case, the net benefit of the thing is consumer surplus (utility gained minus market price), plus producer surplus (profit minus opportunity costs), minus externalities.

Traditional public health analysis assumes short-sighted consumers and a completely addictive substance: people receive no benefits from the thing, only consuming it to avoid withdrawal symptoms, and do not take into account any of their future health harms. In this case, the net benefit of the thing is zero minus all associated costs including health effects, externalities, and all of the money that consumers paid for the substance. (Technically the producer surplus should be a transfer, not a cost, but they rarely account for that.)

There are two other combinations. Consumers might be extremely shortsighted but consuming a non-addictive substance. In this case, the net benefit of the thing would be the consumer surplus minus all measured costs, including both externalities and the 'internalities', i.e. the health effects the consumer will later suffer.

It's also possible that consumers are forward-looking but addicted: they only purchase the substance to avoid withdrawal symptoms, meaning they get no overall lifetime benefit from the thing, but they do take into account expected future harms to themselves when deciding how much they'll pay to avoid withdrawal symptoms. In this case, the net benefit would be zero minus the externalities and production costs.

This allows us to produce a two-axis table, where the axes are how forward-looking consumers are, and how addicted they are. For 100% forward-looking consumers, the only harms are the externalities, and for 0% forward-looking, all harms are measured. For 100% addicted consumers, no consumer surplus is counted, and for 0% addicted, we measure the consumer surplus using traditional economic techniques. You can then choose a spot on the table based on your beliefs about what's going on in people's minds.

To demonstrate, we can do very brief and simple analysis of alcohol in the USA:

I personally believe that (on a drink-weighted basis) alcohol consumers are 40% forward looking and 80% addicted. They are somewhat aware of the long-term health consequences, but haven't fully internalized most of them. And their behavior is determined mainly by habituation or withdrawal avoidance rather than rational pursuit of pleasure. 

Obviously your beliefs may differ. So spend a few minutes thinking, and then once you've decided what you think reality looks like, you can pick a spot on the table I'm about to construct. Keep in mind that if you and your friends are relatively well-adjusted, in-control casual drinkers, your intuitions will be warped by your filter bubble. Most drinks are consumed by heavy drinkers.

Because I am doing this in my free time for fun and it wouldn't be fun to do a proper lit review, I grab data from the first paper I find online that seems credible, and use the following simplifying assumptions:

I assume that the only externalities are alcohol-caused crime and the costs to governments. All other costs are 'internalities' that only affect the consumer or their family unit and would be internalized by a rational agent. This means that while the total social costs of alcohol are about $250 billion, the external costs are about $100 billion (same source).

I assume that the producer surplus, i.e. economic profit, of the alcohol industry is roughly one-third of its accounting profit. (Note that in a fully competitive industry, producer surplus would be zero, so I'm assuming a bit of monopoly power, possibly driven by habit and brand loyalty.) With a net profit margin of about 15%, this means that 5% of the money spent on alcohol is producer surplus. With total sales of about $250 billion, this is about $13 billion in producer surplus.

Data on the consumer surplus of alcohol is much sparser. The only semi-credible estimate I could find is based on London drinkers, and shows that the utility from drinking is about 150% of the price paid. Assuming that American drinkers are similar to London drinkers, the consumer surplus for fully-rational, informed consumers (utility minus purchase price) would be about $125 billion per year.

If consumers are completely non-addicted and forward-looking, then alcohol consumption in the USA has net benefits of about $38 billion per year ($125 billion consumer surplus, plus $13 billion producer surplus, minus $100 billion in externalities). But once you move any significant distance from pure rationality, the benefits disappear and are replaced with significant net costs.
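As a sketch of how the table can be computed: the linear interpolation below between the four corner cases is my own reconstruction (the exact weighting in the spreadsheet may differ), but it reproduces both the $38 billion fully-rational figure and the roughly $350 billion net cost at 40% forward-looking and 80% addicted:

```python
def net_benefit(forward_looking, addicted,
                cs=125, ps=13, externalities=100,
                internalities=150, spending=250):
    """Net social benefit of US alcohol consumption, in $billions/year.

    Linearly interpolates between the four corner cases described above
    (the interpolation scheme is an assumption, not taken from the post):
      - consumer surplus counts only insofar as consumers are NOT addicted
      - internalities (harms to the consumer) count as costs insofar as
        consumers are NOT forward-looking
      - money spent counts as a pure loss insofar as consumers are
        addicted (paying only to avoid withdrawal)
      - producer surplus and externalities are always counted
    """
    return ((1 - addicted) * cs
            + ps
            - externalities
            - (1 - forward_looking) * internalities
            - addicted * spending)

print(net_benefit(1.0, 0.0))  # fully rational, non-addicted: 38.0
print(net_benefit(0.0, 0.0))  # shortsighted, non-addicted: -112.0
print(net_benefit(1.0, 1.0))  # forward-looking but addicted: -337.0
print(net_benefit(0.0, 1.0))  # the public-health corner: -487.0
print(net_benefit(0.4, 0.8))  # 40% forward-looking, 80% addicted: ~ -352
```

Each corner matches the verbal description of that combination earlier in the post, and intermediate spots on the table are just weighted blends of them.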


Under my guess of 80% addicted and 40% forward looking, which was written down before I produced this table, net costs are about $350 billion a year. If we could magically get everyone to stop drinking, without suffering withdrawal costs, we'd see benefits valued at about a thousand dollars per person. Given that most of these costs come from problem drinkers, a hypothetical charity that managed to reduce binge drinking and alcohol addiction by 1% would generate benefits of about $3 billion a year.

Calculations are in this google sheet, feel free to copy it, and then enter other or better sources, or your own ranges or guesses for how addicted and forward-looking people are.

Edit 23 Apr: I just remembered/realized that the source I cited for alcohol harm did not fully monetize the life years lost. For a full analysis, you'd have to do that, and split them between the internalities and externalities. The main part of this post is the framework, so I'm leaving it as is, but please note that it underestimates the harms of alcohol by a lot.