It occurred to me today that I have never seen any attempt at calculating how much society should be willing to spend to prevent an AI Catastrophe. I am pretty good at this kind of thing, so here's a quick Fermi estimate:
Conceptually, this is a lot like buying a life insurance policy for the human race. There is some probability of a catastrophe, so the annual amount we should be willing to pay for insurance is the cost of the catastrophe times its annual probability.
The Value of a Statistical Life in the US is about $10 million, or $1×10^7. Over the relevant time frame in which an AI catastrophe is likely, world population will probably have stabilized at about 10 billion, or 1×10^10. So the value of the human race, in current US dollars, is about $1×10^7 × 1×10^10 = $1×10^17.
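As a sanity check on the exponents, the arithmetic looks like this (both inputs are just the estimates above):

```python
# Back-of-envelope value of the human race
vsl = 1e7          # Value of a Statistical Life, US estimate, in dollars
population = 1e10  # projected stabilized world population

value_of_humanity = vsl * population
print(f"${value_of_humanity:.0e}")  # → $1e+17
```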
Now we find the probability of a catastrophe. For convenience, I use numbers presented in the article linked above, which is a pretty good summary of the current understanding of the field.
More realistically, we will continue on our current technological course and try to make things safer. Even if AI safety research had only a 1-in-100 chance of guaranteeing Friendly AI, we should be willing to pay a trillion dollars a year for it.
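The insurance logic can be sketched end to end. Note that the annual catastrophe probability below is an illustrative placeholder chosen to reproduce the trillion-dollar figure, not a number from this post; the actual figure comes from the article referenced earlier:

```python
# Fermi estimate: annual "insurance premium" against an AI catastrophe.
vsl = 1e7          # Value of a Statistical Life, in US dollars
population = 1e10  # projected stabilized world population
value_of_humanity = vsl * population  # $1e17

# ASSUMPTION: 0.1% annual catastrophe probability -- a placeholder,
# since the real number comes from the article the post references.
p_catastrophe_annual = 1e-3

# Fair annual premium = expected annual loss.
annual_premium = value_of_humanity * p_catastrophe_annual

# A research program with a 1-in-100 chance of eliminating the risk
# is worth 1% of that expected annual loss.
p_research_works = 0.01
research_value_per_year = annual_premium * p_research_works

print(f"fair annual premium:    ${annual_premium:.0e}")          # $1e+14
print(f"research value per year: ${research_value_per_year:.0e}")  # $1e+12
```

Under that placeholder probability, the research-value line comes out to $10^12 per year, i.e. the trillion dollars mentioned above.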