The latest item in Brad DeLong’s “Mathematical Calculations” series is a version of what the philosophers call “Newcomb’s problem,” or “Newcomb’s paradox,” though the relative contributions of Newcomb and Nozick to its formulation are, I gather, unclear. Here’s the problem as DeLong states it:
An all-knowing alien who has a perfect computer model of your mind lands on earth. Xhsbr (that’s a pronoun, not a proper name) shows you a box with two compartments, one of which is clear and the other opaque. Each compartment has a door. You can see $10 in the clear part of the box. The alien says that xhsbr has analyzed your psychology, and if you are the kind of human who would not take the $10, xhsbr has put $1,000,000 in the other, opaque compartment, which will be yours. But if you are the kind of human who would take the $10, xhsbr has put nothing in the other, opaque compartment. The alien says that you must first open the door to the clear compartment (and take the $10 or not) before the door to the opaque compartment will open. The alien says that the door to the clear compartment will only open once.
Xhsbr says that there will be no sanctions or negative consequences if you take the $10–that xhsbr will fly off and never return.
Xhsbr flies off. You are left with the box. You open the door to the clear compartment. You are completely certain that nothing the alien can do now affects how much money is in the closed, opaque compartment.
Do you take the $10 from the clear compartment before you open the other one? It is, after all, free money–either the $1,000,000 is there or it isn’t, and whether you take the $10 has nothing to do with that. On the other hand, you know that the alien has been right in every single one of 1000 other experiments xhsbr has conducted around the galaxy in the past two years. So you know that the way to bet is that people who take the $10 find nothing in the opaque compartment, and people who leave the $10 find $1,000,000 in the opaque compartment.
What do you do?
One reader notes that this is a problem in philosophy rather than mathematics, which is a fair comment. Another notes that the claim about “infallibility” is hard to reconcile with intuitions about free will (or, I would add, with the availability of true randomization devices: what if you set up a mechanical array that makes the action of taking the $10 or leaving it depend on some quantum-level event?), and that the problem is more usually formulated in terms of your observation of the results of choices made by others: they always, or almost always, correspond to the rule that the greedy are punished and the abstemious rewarded.
The point of the puzzle is to challenge the idea of “dominance” as used in decision analysis and game theory. A dominant choice is one that makes you better off whatever the other player does, or however some uncertainty is resolved. In this case, taking the $10 leaves you either with $10 instead of nothing or with $1,000,010 rather than $1,000,000, since by hypothesis the decision about whether you get the $1,000,000 is already made. Thus it is a dominant choice, and you should take it. But it is paradoxical that those who make the “right” move do so much worse than those who make the “wrong” move.
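The tension can be made concrete with a little arithmetic. Here is a sketch in Python contrasting the dominance argument with an expected-value calculation; the predictor accuracy `p = 0.999` is an assumption for illustration (the 1000-for-1000 track record suggests something very close to 1):

```python
# Payoffs: the alien puts $1,000,000 in the opaque compartment
# only if it predicts you will leave the $10.
payoff = {
    ("take",  "predicted_take"):  10,          # $10, empty compartment
    ("take",  "predicted_leave"): 1_000_010,   # $10 plus the million
    ("leave", "predicted_take"):  0,           # nothing at all
    ("leave", "predicted_leave"): 1_000_000,   # the million alone
}

# Dominance: whatever prediction was already made, taking beats
# leaving by exactly $10.
for prediction in ("predicted_take", "predicted_leave"):
    assert payoff[("take", prediction)] == payoff[("leave", prediction)] + 10

def expected(action, p=0.999):
    """Expected payoff if the predictor is right with probability p."""
    right = f"predicted_{action}"
    wrong = "predicted_leave" if action == "take" else "predicted_take"
    return p * payoff[(action, right)] + (1 - p) * payoff[(action, wrong)]

print(expected("take"))   # roughly $1,010
print(expected("leave"))  # roughly $999,000
```

Both calculations are correct on their own terms; the paradox is that they point in opposite directions whenever the predictor is even modestly reliable.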
Here’s my proposed solution to the paradox:
Consider the problem in advance. Clearly, in the circumstances as described, those who are so constituted as to leave the $10 are better off than those so constituted as to take it.
I therefore now make a public promise to always leave the $10 if given that choice. Violating such a commitment is not something I would do for $10, so when the time comes I will leave the $10 because of the commitment. That is, by making the commitment, I make leaving the money, rather than taking it, the dominant choice. I’d rather have nothing but my integrity than have $10, and I’d rather have my integrity and $1,000,000 than not have my integrity and $1,000,010.
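The commitment argument can be sketched numerically: suppose breaking the public promise costs you some personal value V in dollar terms. The figure `V = 100` below is purely a hypothetical placeholder; any V greater than $10 makes leaving the money dominant.

```python
# Hedged sketch: once a public commitment is in place, breaking it
# costs V dollars' worth of integrity (V = 100 is an assumed value
# for illustration; the argument only needs V > 10).
V = 100

payoff = {
    ("take",  "million_present"): 1_000_010 - V,  # promise broken
    ("take",  "million_absent"):  10 - V,         # promise broken
    ("leave", "million_present"): 1_000_000,      # promise kept
    ("leave", "million_absent"):  0,              # promise kept
}

# With the commitment priced in, leaving the $10 is now the dominant
# choice in either state of the opaque compartment.
for state in ("million_present", "million_absent"):
    assert payoff[("leave", state)] > payoff[("take", state)]
```

The dominance reasoning itself is untouched; the commitment simply changes which choice is dominant.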
If the circumstance is actually as stated, I will then collect the $1,000,000, and offer words of thanks and praise to Newcomb and Nozick for causing me to think the problem through.
Moreover, I hereby resolve to deal analogously with any analogous problem that I may encounter.