The Rev. Thomas Bayes, curveball hitter

Does the intelligence process suffer from basic failures in probabilistic reasoning?

Just had a long conversation with an expert in engineering risk and systems-failure analysis who has been looking hard at the problem of intelligence analysis.

According to this expert, the current process suffers from two failures of probabilistic reasoning.

Assume some proposition X about a state of the world that might be true or false: “Iraq currently [as evaluated in early 2003] has an active nuclear weapons acquisition program” or “In the next three days there will be an attempt at a terrorist act in the United States which, if it succeeded, would kill more than 100 people.”

The classical or Aristotelian way of thinking about the world is that proposition X is either true or false, and that the problem is to figure out which. The Bayesian way of thinking is that, based on what you know right now, the proposition can’t yet be decided either way; what you can do is assign it a probability reflecting how often, out of some large number of cases in which the facts were as you now see them (call that set of facts the “signal” Y), X would turn out to be true.

That is, you’re interested in the conditional probability of X given Y: the number of cases in which X is true and Y is observed, divided by the total number of cases in which Y is observed.
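
In code, that definition is just a ratio of counts. A minimal sketch, with counts invented purely for illustration:

```python
# Conditional probability as a ratio of counts, per the definition above.
# The counts are invented purely for illustration.

cases_with_Y = 1000           # cases in which the signal Y is observed
cases_with_X_and_Y = 150      # of those, the cases in which X turns out true

p_X_given_Y = cases_with_X_and_Y / cases_with_Y
print(f"P(X | Y) = {p_X_given_Y:.2f}")   # P(X | Y) = 0.15
```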

The current intelligence process, I was told, is mostly Aristotelian, working toward a consensus on whether X is true or false rather than toward a Bayesian probability estimate of X. As a result, it conveys to decision-makers less knowledge than the process leading up to that consensus contained.

In doing Bayesian analysis, a key question turns out to be the interdependence among the individual data points that make up the current “signal.” If they’re independent of one another, then several bits of data pointing in the same direction should move your probability estimate a great deal. But if the observations aren’t independent — if one influences the probability of another, or both are influenced by some third factor — then the confirmation one observation seems to provide for the hypothesis generated by another may be illusory.
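
Here’s a toy sketch of that update logic, using Bayes’ rule in odds form. The prior and the likelihood ratios are invented for illustration, and multiplying the ratios together is legitimate only if the observations really are independent:

```python
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratios.
# Multiplying the ratios together is valid only if the observations are
# independent given the hypothesis. All numbers are invented.

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 1 / 9            # P(X) = 0.10 before seeing any of the data

# Three observations, each three times likelier if X is true than if false.
posterior_odds = prior_odds
for likelihood_ratio in [3, 3, 3]:
    posterior_odds *= likelihood_ratio

print(f"{odds_to_prob(posterior_odds):.2f}")   # 0.75: a big move from 0.10
```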

Say you’re trying to figure out whether Joe, who works for you, is stealing from the firm. The accounting department says his expense reports are in the top percentile among people with his job description. Based on that, he could be stealing, but it’s also possible that he’s honest and simply happens to have a good reason to generate lots of expenses. (After all, someone’s going to be in the top percentile.)

Now imagine that Joe’s supervisor says he thinks Joe is dishonest, and one of his co-workers says the same. Cumulatively, that’s a lot of evidence. Either he’s stealing, or he has good reasons for generating heavy expenses AND he’s acquired a false reputation for dishonesty. Since the joint probability of “Joe has good reasons for generating heavy expenses” AND “Joe is an honest person who has somehow acquired a reputation in the office as a crook” is lower than the probability of “Joe has good reasons for generating heavy expenses” alone, the probability of the alternative, “Joe is stealing,” is higher than it was before you learned about his reputation. Maybe it’s time to ask for an audit.
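
To put rough numbers on that conjunction point (all of them invented):

```python
# The innocent explanation now requires a conjunction. Invented numbers:
p_good_reasons = 0.30        # honest, with a legitimate reason for big expenses
p_false_reputation = 0.20    # honest, yet wrongly thought a crook by two people

# If the two are roughly independent, the joint probability is their product,
# necessarily smaller than either alone:
p_innocent_story = p_good_reasons * p_false_reputation
print(f"{p_innocent_story:.2f}")   # 0.06: the innocent story got harder to believe
```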

But if it turns out that the supervisor’s opinion is based on having seen the same accounting report you did, and that the co-worker heard it from the supervisor, their opinions don’t really give you any information you didn’t already have.
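
Here’s the same toy update once the three reports collapse into a single source (again, invented numbers):

```python
# Same update as before, but now the supervisor's opinion comes from the
# accounting report and the co-worker's comes from the supervisor: the three
# reports carry only one piece of independent evidence. Numbers invented.

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 1 / 9            # P(Joe is stealing) = 0.10 beforehand

# Given the accounting report, the two opinions are about equally likely
# whether Joe is stealing or not, so their likelihood ratios are ~1.
posterior_odds = prior_odds * 3 * 1 * 1
print(f"{odds_to_prob(posterior_odds):.2f}")   # 0.25, not the 0.75 from before
```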

Apparently that’s how the “Curveball” source wound up convincing the CIA that Iraq had a major biological weapons program. “Curveball’s” allegations were checked against a database of background information, most of which had been supplied by “Curveball” and other INC sources. And — Surprise! — it all fit together perfectly.

Author: Mark Kleiman
