Better safe than sorry?

Ever since I heard of it, I’ve been impatient with the proposed “precautionary principle”; it seems to me it ought to be called the Animal Crackers principle, after the college president played by Groucho Marx in the movie of that title, who sings a song with the chorus:

I don’t care who proposed it or commenced it,

I’m against it.

By what cockamamie reasoning could the fact that a risk is unknown be taken to imply that it is unacceptably large? I mean, really!

But Sasha Volokh proposes a different, and more plausible, way of thinking about it than any I’ve seen before: that we should consider the variance, as well as the expected value, in choosing risks.

The principle of diminishing marginal utility suggests that rational decision-makers ought to be risk averse with respect to high-stakes decisions. Bernoulli’s St. Petersburg Paradox and its extensions make the point vivid: for a player with a finite initial endowment, a long series of positive-expected-value gambles leads to almost-certain ruin if each stake is a significant fraction of the endowment.
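To see the ruin mechanism concretely, here is a minimal simulation sketch (the payoffs are invented for illustration, not taken from Bernoulli or from Sasha’s post): each round the player stakes half of current wealth on a coin flip that pays 1.2 times the stake on a win and loses the stake otherwise. Every individual bet has a positive expected value, yet the expected growth rate of wealth is negative, and nearly every run ends in effective ruin.

```python
import random

def simulate(endowment=100.0, stake_fraction=0.5, rounds=200, trials=10_000):
    """Stake a fixed fraction of current wealth each round on a coin flip
    that pays 1.2x the stake on a win and forfeits the stake on a loss.
    Each bet has positive expected value (+10% of the stake), but staking
    half of one's wealth every round drives almost every path toward ruin."""
    ruined = 0
    for _ in range(trials):
        wealth = endowment
        for _ in range(rounds):
            stake = stake_fraction * wealth
            if random.random() < 0.5:
                wealth += 1.2 * stake   # win: collect 120% of the stake
            else:
                wealth -= stake         # lose: forfeit the stake
        if wealth < 0.01 * endowment:   # down to less than 1% of the start
            ruined += 1
    return ruined / trials

print(f"fraction of runs effectively ruined: {simulate():.3f}")
```

The average wealth across runs still grows, carried by a handful of wildly lucky paths; the gap between that mean and the typical outcome is exactly what diminishing marginal utility is meant to penalize.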

Think of a project that would increase US GDP per capita by a tenth of one percent, but that carried a one-in-ten-thousand chance of instead reducing US GDP per capita by 50%. Clearly the expected value is positive, and just as clearly (it seems to me) we ought to make the project illegal if that’s the only way to stop it. And I’d think the same thing even if the probability of catastrophe were much smaller than that.
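The arithmetic, for concreteness: the expected change is roughly 0.9999 × (+0.1%) + 0.0001 × (−50%) ≈ +0.095% of GDP per capita, so the project passes a pure expected-value test by a wide margin; the case for banning it has to rest entirely on aversion to the catastrophic branch.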

I would differ with Sasha on one point, though: it’s not, strictly speaking, the variance that matters. A very-high-variance gamble can be quite attractive, as long as the stake is small compared to the initial endowment. A project with 999 chances in 1000 of making $100, and 1 chance in a thousand of losing $95,000, has a huge variance, but is still attractive for a company that can take the hit if the odd chance happens. (That’s roughly the business of an insurance company, for example.) The same gamble wouldn’t appeal to me at all as a personal investment, unless I could figure out a way to lay off the risk.
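To put rough numbers on that (the two endowments below, $100,000 for an individual and $100 million for an insurer, are invented for the illustration): the gamble’s expected value and variance are the same for everyone, but a crude risk-averse yardstick, the expected change in log wealth, flips sign depending on how large the possible loss is relative to what you start with.

```python
import math

# The gamble from the text: 999 chances in 1,000 of gaining $100,
# 1 chance in 1,000 of losing $95,000.
p_win, gain = 0.999, 100.0
p_lose, loss = 0.001, 95_000.0

ev = p_win * gain - p_lose * loss                 # about +$4.90 per play
var = p_win * gain**2 + p_lose * loss**2 - ev**2  # enormous variance

def expected_log_change(endowment):
    """Expected change in log wealth: a crude risk-averse utility."""
    return (p_win * math.log(1 + gain / endowment)
            + p_lose * math.log(1 - loss / endowment))

print(f"expected value: ${ev:.2f} per play, standard deviation: ${var ** 0.5:,.0f}")
print(f"individual with $100,000:  {expected_log_change(1e5):+.2e}")   # negative: refuse
print(f"insurer with $100 million: {expected_log_change(1e8):+.2e}")   # positive: accept
```

On these assumed endowments the individual should refuse a bet the insurer should happily take, which is why it’s the size of the stake relative to the endowment, not the variance as such, that does the work.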

The conceptually hairy issue arises where we have no good basis for calculating the risk: the case of fundamentally new technologies. And that is the situation for which the precautionary principle was designed. Sasha is clearly right that in its most radical form the precautionary principle is self-defeating, since inaction also carries unknown risks. But it looks from the argument above as if any proposal about which a plausible story of truly catastrophic risk can be told (i.e., risks equivalent to substantial fractions of total national or world wealth) ought to be forbidden until the probability attached to the risk can be plausibly quantified. This is much stronger than Sasha’s proposed “slight bias” against risk; it’s most of the way to the precautionary principle itself, as long as the worst conceivable case is bad enough.

As David Burmaster has pointed out, worst-case thinking in routine environmental management is a recipe for over-regulation. [See Burmaster, D.E. and R.H. Harris, “The Magnitude of Compounding Conservatisms in Superfund Risk Assessments,” Risk Analysis 13(2):131–134 (1993).]
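Burmaster’s compounding-conservatisms point can be illustrated with a toy Monte Carlo sketch (the four lognormal factors and their parameters below are invented for the illustration, not taken from the paper): build a “conservative” risk estimate by taking each uncertain factor at its own 95th percentile and multiplying, then compare that to the honest 95th percentile of the product.

```python
import math
import random

random.seed(0)

# Toy model: a risk estimate is the product of four uncertain factors,
# each treated here as lognormal (parameters invented for illustration).
n_factors, n_draws = 4, 100_000
draws = [[random.lognormvariate(0.0, 0.5) for _ in range(n_factors)]
         for _ in range(n_draws)]

def percentile(values, q):
    ordered = sorted(values)
    return ordered[int(q * (len(ordered) - 1))]

# "Conservative" estimate: take each factor at its own 95th percentile,
# then multiply the point values together.
factor_p95 = [percentile([row[i] for row in draws], 0.95) for i in range(n_factors)]
stacked = math.prod(factor_p95)

# Honest upper bound: the 95th percentile of the actual product distribution.
true_p95 = percentile([math.prod(row) for row in draws], 0.95)

print(f"product of per-factor 95th percentiles: {stacked:.1f}")
print(f"95th percentile of the product:         {true_p95:.1f}")
```

On these made-up numbers the stacked conservatisms come out roughly a factor of five above the genuine 95th percentile of the combined estimate, which is the over-regulation mechanism Burmaster and Harris describe.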

But risk aversion means that a sufficiently bad “worst case” ought to be enough to kill a project, even with what would otherwise be a negligible probability attached to it. That’s not the answer I wanted, but it seems to be the one I just got.

As a committed technological optimist, I will be deeply grateful to whoever can find the error in this reasoning.

UPDATE

Kieran Healy responds, and I comment, here. Briefly: he wants to know why I mind the conclusion when it means restricting innovation but not when it means vaccinating immediately. Brief answer: the opportunity cost of forgoing, e.g., genetic modification of food species, is much higher than the cost of a vaccination program.

Author: Mark Kleiman
