Decision theory and the Underpants Bomber

Keep him off the flight, just on the information available? Maybe not. But screen the hell out of him at the airport? Absolutely!

Spencer Ackerman asks an important question about whether the failure to keep the Underpants Bomber off the flight to Detroit was really an intelligence malfunction, except in hindsight:

The inputs are that the guy’s dad says he’s dangerous; he’s Nigerian; he might be in Yemen; and al-Qaeda in Yemen may be looking to use a Nigerian in a forthcoming attack. Is that really enough?

The answer to that question most certainly requires a policy decision, not an intelligence decision. The intelligence community is drinking from a fire hose of data, a lot of it much more specific than what was acquired on Abdulmutallab. If policymakers decide that these thin reeds will be the standard for stopping someone from entering the United States, then they need to change the process to enshrine that in the no-fly system. But it will make it much harder for people who aren’t threatening to enter, a move that will ripple out to affect diplomacy, security relationships (good luck entering the U.S. for a military-to-military contact program if, say, you’re a member of the Sunni Awakening in Iraq, since you had contacts with known extremists), international business and trade, and so on. Are we prepared for that?

And Kevin Drum poses the decision problem clearly, and warns about a rush to judgment:

In retrospect, terrorism dots always look easy to connect, but people rarely think about all the other similar dots. If the information we had on Abdulmutallab should have been enough to keep him off the flight to Detroit, then we’re also saying that that’s the level of information that should be sufficient to keep anyone off a flight to Detroit. Is that what we want?

Maybe. But it’s far from obvious after just a cursory glance. Public pressure is invaluable to keep the federal government honest, but it can also become a myopic feeding frenzy. The intelligence community plainly needs to account for itself here, and upon investigation we might decide that there really was a systemic breakdown. But it’s way too early to say that with any confidence.

This is a conventional problem for a class in decision analysis. Any given screen produces some mix of false positives – people who get screened out who weren’t in fact a danger, or who get treated for a disease they turn out not to have – and false negatives – people who didn’t get screened out though they were dangerous, or who didn’t get treated though they were sick. And any given screen has some set of costs: extra delays at the airport, or exposure to diagnostic X-rays.

There are some screens that are clearly sub-optimal and therefore wrong, in the sense that they yield more false positives and more false negatives than some alternative that costs less. Eliminating those losers gets you to the set of “efficient” (non-dominated) screens. Among the set of efficient systems, there are tradeoffs among false positives, false negatives, and cost. No actual system has zero false positives or zero false negatives, and at any given cost of screening requiring fewer false negatives will mean accepting more false positives.
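The dominance idea can be made concrete with a toy calculation. In the sketch below, the four screens and all of their numbers are invented purely for illustration; the point is only the filtering rule: a screen is eliminated if some alternative is no worse on every dimension and strictly better on at least one.

```python
# Toy illustration of dominated vs. efficient screens.
# Each hypothetical screen is (false_positives, false_negatives, cost_per_screen_usd);
# every figure here is made up for illustration only.
screens = {
    "metal detector only":  (2_000, 50, 5),
    "detector + pat-down":  (8_000, 10, 25),
    "full body scan":       (5_000, 10, 25),
    "scan + hand search":   (9_000,  2, 40),
}

def dominates(a, b):
    """Screen a dominates screen b if a is no worse on every
    dimension and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Keep only the non-dominated ("efficient") screens.
efficient = {
    name: s for name, s in screens.items()
    if not any(dominates(other, s) for other in screens.values() if other != s)
}
```

With these invented numbers, the pat-down option is a "loser": the body scan matches its false-negative count and cost while producing fewer false positives, so it drops out. The three survivors each trade off one dimension against another, which is exactly the set among which policy must choose.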

The question is one of ratios. Clearly, it’s worth keeping a thousand people off airplanes to avoid one mid-air explosion, but not worth keeping 100 million people off airplanes to avoid the same result.

So Ackerman’s question comes down to: How many people without bombs would a screen fine enough to catch the Underpants Bomber have kept out last year? My first-blush guess is that the answer would be in the dozens, not the tens of thousands, if you factor in a ticket bought for cash the day of the flight. If that’s right, then the screen as administered wasn’t tight enough.

But Ackerman’s analysis misses a key point: you’re not limited to one level of screen. A false positive on a mammogram is a bad thing, but its immediate result is an unnecessary biopsy, not an unnecessary mastectomy. In general, you want the first screen to be cheap to administer and very tight (“highly sensitive” in the technical jargon), accepting that it will produce a big crop of false positives, because the result of triggering that alarm is a follow-up test that can be much more expensive but is designed to be much less prone to false positives (“highly specific”).
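The cascade logic above can be put in arithmetic form. The sketch below assumes made-up sensitivity, specificity, and cost figures for each stage; nothing here comes from actual TSA practice. It shows why the combination works: the cheap sensitive screen keeps expected cost per passenger low, while the specific second test slashes the rate at which innocents are finally flagged.

```python
# Sketch of two-stage screening: a cheap, highly sensitive first screen
# whose positives go to an expensive, highly specific second test.
# All parameters are invented for illustration.
def cascade(prevalence, sens1, spec1, cost1, sens2, spec2, cost2):
    # Fraction of all passengers flagged by the first screen
    flagged = prevalence * sens1 + (1 - prevalence) * (1 - spec1)
    # Expected cost per passenger: everyone gets screen 1, flagged get screen 2
    cost = cost1 + flagged * cost2
    # A true threat is caught only if both screens flag him
    catch_rate = sens1 * sens2
    # An innocent is finally flagged only if BOTH stages err
    false_alarm = (1 - spec1) * (1 - spec2)
    return cost, catch_rate, false_alarm

# Assume one threat per 10 million passengers; a loose first screen
# (90% specific) and a tight second one (99.9% specific).
cost, catch, fa = cascade(1e-7, 0.99, 0.90, 1, 0.95, 0.999, 25)
```

Under these assumed numbers the expected cost is about $3.50 per passenger rather than $26, roughly 94% of true threats are caught, and only about one innocent in ten thousand is falsely flagged at the end of the cascade, versus one in ten after the first stage alone.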

There was no need to decide, just based on the information in hand, whether to let Mr. Abdulmutallab board the flight. All you needed to figure out was that he needed to have a body scan and a careful hand-luggage check before boarding. You might not want to do that to every passenger, but you’d be willing to do it to tens of millions of innocents to prevent one explosion.* Thought of that way, I’d say that the warning from Abdulmutallab père should have been enough, all by itself, to justify asking Abdulmutallab fils to step out of line and see the nice man in the booth.

Of course you’d want something much more substantial to put someone on an actual “no-fly” list; putting someone on that list is a drastic restriction of an internationally recognized human right, and should be restricted to people you don’t want on the plane even if they’re not carrying a bomb. But it shouldn’t take much information just to trigger an extra look.

* Put the expense to the government of the required check at $25, and the willingness-to-pay of an innocent passenger not to undergo it at $75; both seem generous to me, but substitute your own numbers if you disagree. So 10 million false-positive screens cost $1 billion. Saving 400 lives is worth something like $4 billion, and of course the ancillary costs of a successful terrorist attack of that scale would be at least an order of magnitude larger. Thus doing the scan unnecessarily 10 million times is much cheaper than failing to do it once when it was necessary.
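The footnote's arithmetic is simple enough to lay out explicitly, using the post's own figures (the per-life value is the $10 million debated in the comments):

```python
# The footnote's cost comparison, using the post's own numbers.
cost_to_govt = 25           # $ the government spends per extra check
wtp_to_avoid = 75           # $ an innocent passenger would pay to skip it
screens = 10_000_000        # unnecessary checks (false positives)
value_of_life = 10_000_000  # $ per statistical life, as debated below
lives = 400                 # passengers and crew on a fully loaded flight

false_positive_cost = screens * (cost_to_govt + wtp_to_avoid)
false_negative_cost = lives * value_of_life
```

The false positives total $1 billion; the single false negative, $4 billion before counting any of the ancillary costs of a successful attack. The comparison holds even if you quarrel with the individual numbers by a factor of two or three in either direction.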

Author: Mark Kleiman

Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches the methods of policy analysis as applied to drug abuse control and crime control policy, working out the implications of two principles: that swift and certain sanctions don't have to be severe to be effective, and that well-designed threats usually don't have to be carried out. Books: Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken); When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the "books of the year" by The Economist); Against Excess: Drug Policy for Results (Basic, 1993); Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989). UCLA Homepage. Curriculum Vitae. Contact: Markarkleiman-at-gmail.com

11 thoughts on “Decision theory and the Underpants Bomber”

  1. Good point–but implementing a "second screen" requires communicating information (of more complexity than "no-fly") to the TSA screeners, who are already swamped because of the security theater of scanning everyone and all baggage. (It is not clear if the additional screenings that are already done are driven by anything other than a random-number system–not that this is necessarily a bad thing).

    It might make more sense for the process at the terminal to consist of a step of checking names against a list, then putting 90% of the people into a line with no checking at all, 10% into one with

    the current checks–and then the same division, so 1% get the most intensive look. And try to make the

    sorting process better, rather than the screening one applied to everyone. Unfortunately, I'm not sure this kind of statistical approach would be deemed sufficient.

  2. Putting a dollar value on human life is obviously problematic and devoid of a "correct" answer. Mark obviously has no objection to trying, since he builds such a value into his analysis. The figure chosen, $10 million each, is several times higher than most analyses. (The additional 10-fold multiplier for terrorism deaths comes out of the wild blue yonder.) If we use the high value of $10 million per life, the historical record suggests that the average "cost" per flight segment of the risk of being killed by terrorism à la 9/11 or Lockerbie is well under a dollar per person. It is several times less than the still-small risk of being killed in a crash caused by something like weather issues or mechanical failure. The bottom line is that we do not tie ourselves in elaborate and demeaning knots with airline security because it is cost-effective in terms of lives saved. We do it because pressurized tubes of people flying through the air at high speed radiate a peculiar vulnerability, no matter what the statistics say. Airliners make an obvious target for a suicide bomber seeking a highly dramatic exit, or for any evildoer of whatever motive. The obsession is inevitable and understandable, and the issues are difficult, but dollars-per-life analysis is off the mark.

  3. Ken, this isn't about "putting a dollar value on human life." It's about putting a value on avoiding small risks of death. The $10 million figure is only slightly on the high side of the standard range used in benefit-cost analysis. (Viscusi estimated $7 million back in 2005, and there's been both inflation and real economic growth since then.) http://papers.ssrn.com/sol3/papers.cfm?abstract_i… International airline passengers are much wealthier than average, so they presumably put higher values on their own lives than average.

    The estimate is based on studies of the compensating differentials for high-risk occupations. It's only "several times higher" than the "lost-wages" analysis beloved of the tort defense bar.

    As to the costs of a successful airline strike, I'm not inflating the value-of-avoided-death figure; the value of avoiding those deaths, to the people suffering them, is independent of whether they die flying or getting hit by lightning. But the social and economic impact of a high-publicity terror incident would be profound; if you think the right answer is less than $40 billion, you can say why, but that additional value isn't essential to the structure of my argument.

  4. I accepted the $10 million figure for analysis. The $40 billion seems to me an order of magnitude mushier than the already-soft value-of-a-life calculation; I will wait for your breakdown before trying to respond as to its accuracy. I would note, however, that a very great deal indeed would depend on the circumstances. An event in the heart of a major city — such as the first World Trade Center plane if it stood alone as a separate incident — will have much greater collateral consequences than a plane going down in the ocean — think Air India Flight 182 in 1985. Concern about this amorphous "social and economic impact" may cover some of the same territory that I would identify as the inherent vulnerability of soaring through the air in a tube. I continue to believe that dollars-per-life analysis is largely off the point here. The irony is that even taking very full account of 9/11, Air India, and Lockerbie, and for good measure hypothesizing that the shoe bomber and the undies bomber had succeeded, air travel is statistically very safe, and the risks that do exist are mostly not from terrorism.

  5. Ken Doran is correct that flying is statistically very safe. The problem is that in the wake of another successful terrorist attack a large percentage of the American populace would refuse to fly, statistics be damned. That the “social and economic impact” would be largely irrational is irrelevant. It would still happen.

  6. Mark is vastly overstating the costs of putting a Nigerian citizen (not resident in the US) on a list that prevents him from flying to the US. There is no right–recognized or otherwise–to travel by air to the US.

  7. Someone on TV pointed out that airlines in Africa often prefer to be paid in cash because of the many scams they suffer from other forms of payment. I don't personally know how accurate that information is, but if it is accurate that blows the idea that paying for the ticket in cash is a risk factor out of the water.

    Since Nigeria has a reputation for scams and has more than one connection to the perpetrator, I suspect that the cash payment for the ticket is quite useless as an indicator of a possible terrorist.

  8. Not that it really matters when a discussion is this amorphous, but willingness-to-pay as a measure of cost to passengers almost certainly skews the cost figures down. If you went for ask rather than bid, you might get closer to the costs that are being imposed. Even the current security regime deters millions of trips a year, and an enhanced one (especially pseudo-random as the suggested one would be) would, I think, deter far more.

  9. The guy bought a one-way ticket, cash, and had no checked bags. What did he need to do to be searched, wear a sign saying I HAVE A BOMB IN MY PANTS? And ordinary people whose name is "Tom Smith" get searched if there's a Tom Smith on the list, but a guy named Abdulmutallab, whose own father reported him to be a terrorist, is waved on through?

Comments are closed.