Matthew Yglesias agrees with the argument made in the previous post that the good military outcome of the war doesn’t mean that the people who thought the military outcome would be good were right: all we know is that they said a probability was low and, in one case, the event in question (a military disaster) didn’t happen.
Matthew cites Derek Parfit (whom he quotes at some length) as authority. Parfit’s point is that, while a tiny one-time chance of an ordinary disaster can reasonably be ignored (the one chance in a million that, if I drive fifty miles each way to go stargazing, I will kill myself, or someone else, in a car crash is not a good reason for me not to go), a tiny chance of a huge disaster, or an often-repeated chance of an ordinary disaster, enjoys no such exemption.
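Parfit’s distinction falls straight out of expected-value arithmetic. With purely illustrative numbers (not drawn from either post), writing the expected loss as probability times magnitude:

```latex
% Expected loss: E = p \cdot m (probability times magnitude of harm).
% All numbers below are illustrative, chosen only to show the asymmetry.
\begin{align*}
\text{one-time ordinary risk:} \quad & p = 10^{-6},\ m = 1
    && \Rightarrow\ E = p\,m = 10^{-6} \\
\text{huge disaster:}          \quad & p = 10^{-6},\ m = 10^{6}
    && \Rightarrow\ E = p\,m = 1 \\
\text{repeated ordinary risk:} \quad & n = 10^{3} \text{ exposures},\ p = 10^{-6},\ m = 1
    && \Rightarrow\ E = n\,p\,m = 10^{-3}
\end{align*}
```

The one-time small risk carries a negligible expected loss, while the same tiny probability attached to a million-fold harm, or multiplied across many repetitions, yields an expected loss large enough that ignoring it is no longer reasonable.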
I’m always delighted to find myself in such distinguished company. But that as profound a thinker as Parfit should think it necessary to argue a point so obvious to anyone who has been introduced to the concept of “expected value” is a measure of the mutual incomprehension between contemporary moral philosophy and the worlds of microeconomics, operations analysis, and policy analysis. There are large losses on both sides; the average policy analyst can’t draw a clear distinction between utilitarianism and other consequentialist doctrines, and the average moral philosopher writes as if any numerical operation more complex than counting to three were taboo. Things might have been different but for Frank Plumpton Ramsey’s untimely demise.