Statistics, politics, and failures of intelligence analysis

Maybe the Rev. Mr. Bayes couldn’t hit a major-league Curveball after all.

My friend within the ranks of working national-security analysts — whose wisdom has been quoted in this space before — isn’t convinced by the idea offered in a recent post here that the problems of intelligence analysis can be reduced to problems of statistical reasoning:

While everything you say about Bayes and Curveball is right, the problems at CIA cannot be ascribed to ignorance of Bayes’ rule.

First of all, the CIA has been told for years about Bayesian estimation methods (and also about biased heuristics a la Tversky and Kahneman; there is a book by CIA analysis guru Richards Heuer that is full of this stuff, and I think it is taught to analysts-in-training). But the senior analysis folks at CIA have been pretty determined not to use any structured methodology, partly because it would externalize knowledge and argument, making them available for inspection, and would devalue the currency of hoarded information and expertise, which is right now the coin of the realm, and a major reason “information sharing” is hard to achieve within the IC.

Second, the CIA already pays lip service to the need to assess the quality of sources and whether they are independent. The credibility of information sources and the possible existence of common sources being reported through multiple channels is a recognized problem within the Agency, and is not really at issue between what you call the Aristotelian and Bayesian approaches. You don’t have to be a Bayesian to evaluate sources. Courts do this, for example with the hearsay rule and through cross-examination.

However, in the past, compartmentalization at the Agency often led the DI (analysts) to be ignorant of information that could be used to assess the likely independence of sources. Repeated details across accounts could be viewed either as confirming a true fact seen from multiple sources or as reflecting a common, possibly inaccurate, source of reporting. The counter-intelligence folks used to scrub information coming from Soviet defectors for deception, but I doubt the same vigilance was applied to INC-related sources. The problem is complicated further when some of the information is relayed by other national intelligence agencies, which do not always share details on the provenance of their sources.
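The independence point can be made concrete with a toy Bayesian calculation in odds form (my sketch; the prior and likelihood-ratio numbers are purely illustrative, not anything from actual intelligence work). Treating three reports as independent applies the likelihood ratio three times; recognizing that all three trace back to one source applies it once.

```python
# Toy Bayesian update, odds form: posterior odds = prior odds x likelihood ratio.
# All numbers below are invented for illustration.

def update(prior_odds, likelihood_ratio):
    """One Bayesian update in odds form."""
    return prior_odds * likelihood_ratio

def to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

prior = 0.25   # prior odds (20% probability) that the claim is true
lr = 4.0       # each report is 4x likelier if the claim is true

# Three reports naively treated as independent: update three times.
naive = prior
for _ in range(3):
    naive = update(naive, lr)

# Three reports that all trace back to one common source: update once.
common = update(prior, lr)

print(f"naive 'three sources': {to_prob(naive):.0%}")   # ~94%
print(f"single common source:  {to_prob(common):.0%}")  # 50%
```

The gap between 94% and 50% confidence, from the same underlying information, is the whole ballgame: double-counting a common source manufactures certainty out of nothing.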

Third, the real problem has to do with what the military calls command influence (in the context of courts-martial and similar proceedings) and a background structural bias caused by differential rewards for different conclusions. The CIA was founded to prevent another Pearl Harbor. “Worst case” assessments are a way of life. No one ever asks what the downside of a threat overestimate might be; everyone knows the danger of a threat underestimate. This lesson gets politically reinforced periodically: Team B when George H.W. Bush was DCI is a typical example; Bob Gates’s conforming of intel to the desires of the White House was legendary at CIA; and Tenet’s political weakness in the George W. Bush administration led to the same effect. There wasn’t anyone pushing back on the tendency to extrapolate the WMD threat in a way that went beyond the evidence in hand. I am sure that the intimidation Bolton attempted regarding the Cuba WMD issue was repeated on a much larger scale, and from a much higher level, regarding Iraq WMD, but it didn’t much have to be, because of the background bias toward worst-case assessment.

Finally, in past administrations (e.g., for JFK after the Bay of Pigs) there has always been someone at the NSC who is skeptical of the intel consensus and who probes it. The MBA president takes everything he is told at face value.

This administration is so ideological that the probes have all been on the side of giving the benefit of the doubt to sources like Curveball. After all, when Condi Rice was told that the intel on Iraqi purchases of uranium from Niger was untenable, her reaction was not to remove the claim from the State of the Union but simply to rewrite it so that it said the British say so (if memory serves). This is intelligence for propaganda, not intelligence analysis, and it’s the Bush administration’s fault.

Getting back to Bayes vs. Aristotle in the CIA, it would be interesting to do a history of likelihood language in the daily products and intelligence estimates. The whole point of intelligence “estimates” was an acceptance that there was a lot we didn’t know (originally about the “denied areas,” where we had no access to human sources and where technical means of collection were originally also very limited). The possibility that finished intelligence would contain information of which we were unsure was explicitly recognized. The CIA used to have a whole office that policed the likelihood statements in its products. There was a dictionary that translated between numerical probability ranges and English phrases. (I don’t remember the details at all, but maybe “is likely to” translated to 65% to 80%.) So this office would call up and quiz analysts to be sure that their phrases corresponded to these numerical ranges.
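A minimal sketch of what such a phrase-policing check might look like. Every entry below is an invented placeholder except “is likely to,” which follows the half-remembered 65–80% range mentioned above; the actual dictionary is not specified in the post.

```python
# Hypothetical likelihood dictionary: English phrases mapped to numeric
# probability ranges. All entries are invented placeholders except
# "is likely to", which uses the half-remembered 65-80% from the post.
LIKELIHOOD_PHRASES = {
    "is almost certain to": (0.90, 0.99),
    "is likely to":         (0.65, 0.80),
    "may":                  (0.40, 0.60),
    "is unlikely to":       (0.20, 0.35),
}

def check_phrase(phrase, analyst_estimate):
    """Return True if the analyst's numeric estimate falls inside the
    range the dictionary assigns to the phrase they used."""
    lo, hi = LIKELIHOOD_PHRASES[phrase]
    return lo <= analyst_estimate <= hi

print(check_phrase("is likely to", 0.70))  # True: inside 65-80%
print(check_phrase("is likely to", 0.95))  # False: the phrase understates
```

The point of such a table is exactly the discipline described above: it forces the verbal hedge and the analyst’s actual degree of belief to agree, so a reader cannot be told “likely” when the analyst means near-certainty, or vice versa.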

That office no longer exists, which is not in itself a bad thing, but it may indicate less discipline in accepting the fact that estimates are uncertain. My impression is that the bargaining around formal estimates tends to reduce the willingness to be open about uncertainty, especially in the politicized environment mentioned above. The customer has a considerable influence on this. I was told by a career CIA employee (not, I think, a Democrat) that immediately upon George W. Bush’s first inauguration, those preparing the President’s Daily Brief were told to reduce the level of sophistication of the content, almost the equivalent of lowering its “grade reading level” to match the new president’s intellectual style. So that could also be part of the reason that all the qualifiers were stripped out of the Iraq WMD intel in the version that went to the White House.

Even before Bush, the summary nature of the daily products meant that they always raised more questions than they answered; the only way to get fully satisfactory elaboration was to go talk to the analysts directly.

By the way, there’s a book coming out next week by Rob Johnston, an anthropologist formerly of IDA who’s now at the CIA’s Kent Center for the Study of Intelligence. It’s the result of two years of ethnographic study of intelligence analysts beginning after 9/11. I haven’t seen the ms., but it’s probably worth a read.

I should note for the record that the risk-analysis expert whose thoughts were quoted in my earlier post reported that President Bush in fact often asks questions of the “What are the chances that … ?” variety, suggesting that not all of the problem here is on the demand side.

Author: Mark Kleiman

Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches about the methods of policy analysis, drug abuse control, and crime control policy, working out the implications of two principles: that swift and certain sanctions don’t have to be severe to be effective, and that well-designed threats usually don’t have to be carried out.

Books:
Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken)
When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the “books of the year” by The Economist)
Against Excess: Drug Policy for Results (Basic, 1993)
Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989)