Models, predictions, plans, and decisions

Dr. Manhattan links to a post by Tim Blair linking to an essay by Michael Crichton.

Crichton’s lecture is interesting, complex, and cranky (his romanticized view of the scientific process is in flat contradiction to scientific studies of the actual scientific process). Dr. Manhattan, Tim Blair, and especially Tim Blair’s commenters all boil it down to the thought (which I have found attributed to Yogi Berra, Sam Goldwyn, and Niels Bohr (!) — does any reader have an actual source document?) that prediction is dangerous, especially about the future.

Of course that’s true, for at least five reasons:

1. Models are necessarily under-specified.

2. The correspondence of the causal relationships they embody to actual phenomena is never known to be perfect.

3. The observable initial conditions are never perfectly observed.

4. There are always unobservable initial conditions. (What if a big asteroid hits? That might be predictable in the sense that the asteroid is already on its collision course with the Earth, but from a practical viewpoint it may be unobservable.)

5. Some processes are chaotic, such that arbitrarily small errors will cumulate to arbitrarily large deviations from prediction.

In addition, in long-term social modeling, there’s a sixth source of irreducible uncertainty: technological change. As Popper points out, the rate of technological change is never perfectly predictable; if we knew now what we are going to know later, we’d know it now. And while rules of thumb such as Moore’s Law can usefully predict the rate of change in established fields, the larger the innovation, the less predictable it is. Fusion power, which has been just around the corner since I was a child, might arrive some day.
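The chaos point (reason 5) is easy to see in a toy system. The logistic map below is my own stand-in example, not anything from the essay; the point is that an observation error of one part in ten billion stays negligible over a short horizon but grows to a macroscopic difference over a long one.

```python
# Toy illustration of reason 5: the logistic map x -> 4x(1-x) is chaotic,
# so nearby initial conditions diverge exponentially fast.

def logistic(x, steps):
    """Iterate the logistic map `steps` times from initial condition x."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0 = 0.3
x0_perturbed = x0 + 1e-10   # an "arbitrarily small" measurement error

# Over 5 steps the two trajectories are still practically identical;
# over 50 steps the tiny error has been amplified by many orders of magnitude.
gap_short = abs(logistic(x0, 5) - logistic(x0_perturbed, 5))
gap_long = abs(logistic(x0, 50) - logistic(x0_perturbed, 50))
```

Short-term weather forecasts and long-range climate scenarios sit on opposite sides of exactly this divide, which is why the former can be precise while the latter can only describe trends.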

Right, then. We can’t know what the world will look like in 2100. But unless we also don’t care what the world looks like in 2100, or unless we think our current actions have zero predictable impact on what the world will look like in 2100, we need to make decisions now — we are, in fact, making decisions now — in which results a century hence are part of the objective function.

Uncertainty about the results of our actions will indeed suggest that we should discount predicted far-future effects vis-a-vis more predictable near-future effects (this in addition to the normal discounting for the time-value of resources). But not to zero, surely?

Moreover, it’s reasonable to be risk-averse over very large changes; that gives rise to, if not the vaunted “precautionary principle” itself, at least a principle that paying relatively small costs to somewhat reduce the probability of huge disasters may be worthwhile even if the cost is greater than the expected present value of the reduction in damage. That’s not some fancy philosophical principle; that’s just the same thinking that leads to buying insurance even though the premium is in general larger than the expected present value of the claims.
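The insurance analogy can be made concrete with a little arithmetic. The numbers below are made up for illustration, and log utility is just one standard way to model risk aversion; the point is only that a risk-averse agent rationally pays a premium larger than the expected loss.

```python
# Toy insurance arithmetic (made-up numbers): with concave (log) utility,
# the most an agent will pay to insure a risk exceeds the expected loss.
import math

wealth = 100_000.0
loss = 50_000.0
p = 0.01                      # probability of the disaster
expected_loss = p * loss      # the "actuarially fair" premium: 500

def u(w):
    """Log utility: concave, hence risk-averse."""
    return math.log(w)

# Expected utility if the agent stays uninsured:
eu_uninsured = p * u(wealth - loss) + (1 - p) * u(wealth)

# The largest premium the agent would willingly pay is the one that leaves
# the insured agent exactly as well off as the uninsured gamble:
max_premium = wealth - math.exp(eu_uninsured)
# max_premium comes out near 690 -- comfortably above the 500 expected loss.
```

The gap between `max_premium` and `expected_loss` is the insurer’s room for profit, and it is the same logic that can justify paying more than the expected present value of avoided climate damage to trim the probability of catastrophe.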

The example of weather forecasting is repeatedly offered as an illustration of the uselessness of modeling the future as a guide to action. But that must surely be wrong. Due to better data-gathering and better modeling, short-term forecasting is in fact much better than it used to be, and no one seriously argues that the famous uncertainties surrounding hurricane predictions mean that we shouldn’t issue warnings, and even evacuation orders, based on those forecasts.

Moreover, even when the details are completely unpredictable, the gross trends may not be: ever since the Neolithic Revolution, farmers have been planting in the spring on the expectation that the average daily temperature would tend to rise between February and August. The weather is unpredictable, but it’s not perfectly unpredictable, and it would be absurd to discard the predictive value of whatever models we have at hand, whether it’s a global warming model or the weather heuristics in Hesiod’s Works and Days.

Like the affected contempt for “planning,” which it closely resembles, the denunciation of “modeling” is largely, though of course not entirely, insincere and ideological. The corporations that make the contributions to the foundations that support the denunciations of “planning” and “modeling” wouldn’t for a minute consider not having plans and using predictive models in the conduct of their own affairs. The smart ones make flexible plans and treat the predictions of models with caution, that’s all.

Of course, none of this is purely an abstract debate. It’s all part of the argument about global climate change. That global temperatures are secularly rising is no longer subject to doubt. That some current human activities tend to raise global temperature, and that those activities have been and are now growing rapidly in volume, is also a matter of fact, not of debate.

How much of the temperature/time gradient we currently observe relates to human activity is an open question, but the answer “An amount too small to care about” seems implausible.

(Note that if we are in the middle of a secular warming trend with geophysical roots, that is likely to increase, rather than decrease, the practical importance of the human contribution in the future. If the damage associated with a given temperature change rises more-than-linearly with the size of the change, then a two-degree temperature increase due to human activity will have a greater cost if it comes on top of a two-degree increase with geophysical causes than if it comes alone.)
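The convexity argument in that parenthesis reduces to simple arithmetic. Assuming, purely for illustration, that damage grows as the square of total warming (the quadratic form and the units are my own assumption, not the post’s):

```python
# Toy convexity arithmetic: with an assumed quadratic damage function,
# the same 2 degrees of human-caused warming does three times as much
# marginal damage on top of a 2-degree geophysical trend as it does alone.

def damage(delta_t):
    """Assumed convex (quadratic) damage, in arbitrary units."""
    return delta_t ** 2

human_alone = damage(2) - damage(0)    # 4 units of damage
human_on_top = damage(4) - damage(2)   # 12 units of damage
```

Any convex damage function gives the same qualitative result: a background warming trend raises, not lowers, the stakes of the human contribution.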

There remain two open questions:

1. How much damage (and how much offsetting gain) will be associated with different levels of warming?

2. What policies could reduce the degree of human contribution to warming, at what cost?

So without presuming to judge the debate between global warming believers and skeptics, a cautious policymaker would, I submit, be looking right now for relatively low-cost ways of reducing the human contribution to global warming, such as a shift from coal-fired to nuclear electricity generation. Whether higher-cost measures are also warranted is a much harder question, but it is still a question, one that no amount of obscurantist raving about the mystical unknowability of the future can answer.

[By the way, am I the only one to have noticed that, insofar as the “Nuclear Winter” folks were right, we have the solution to global warming right at hand? The only technical question is how many cities we’d have to nuke to generate the degree of cooling necessary to offset any given degree of warming. Which cities to nuke is, of course, a political, rather than a technical, question.]

Author: Mark Kleiman

Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches the methods of policy analysis as applied to drug abuse control and crime control policy, working out the implications of two principles: that swift and certain sanctions don't have to be severe to be effective, and that well-designed threats usually don't have to be carried out.

Books:
- Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken)
- When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the "books of the year" by The Economist)
- Against Excess: Drug Policy for Results (Basic, 1993)
- Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989)