Spencer Ackerman asks an important question about whether the failure to keep the Underpants Bomber off the flight to Detroit was really an intelligence failure, or only looks like one in hindsight:
The inputs are that the guy’s dad says he’s dangerous; he’s Nigerian; he might be in Yemen; and al-Qaeda in Yemen may be looking to use a Nigerian in a forthcoming attack. Is that really enough?
The answer to that question most certainly requires a policy decision, not an intelligence decision. The intelligence community is drinking from a fire hose of data, a lot of it much more specific than what was acquired on Abdulmutallab. If policymakers decide that these thin reeds will be the standard for stopping someone from entering the United States, then they need to change the process to enshrine that in the no-fly system. But it will make it much harder for people who aren’t threatening to enter, a move that will ripple out to affect diplomacy, security relationships (good luck entering the U.S. for a military-to-military contact program if, say, you’re a member of the Sunni Awakening in Iraq, since you had contacts with known extremists), international business and trade, and so on. Are we prepared for that?
In retrospect, terrorism dots always look easy to connect, but people rarely think about all the other similar dots. If the information we had on Abdulmutallab should have been enough to keep him off the flight to Detroit, then we’re also saying that that level of information should be sufficient to keep anyone off a flight to Detroit. Is that what we want?
Maybe. But it’s far from obvious after just a cursory glance. Public pressure is invaluable to keep the federal government honest, but it can also become a myopic feeding frenzy. The intelligence community plainly needs to account for itself here, and upon investigation we might decide that there really was a systemic breakdown. But it’s way too early to say that with any confidence.
This is a conventional problem for a class in decision analysis. Any given screen produces some mix of false positives (people who get screened out who weren’t in fact a danger, or who get treated for a disease they turn out not to have) and false negatives (people who didn’t get screened out who were dangerous, or who didn’t get treated and were sick). And any given screen has some set of costs: extra delays at the airport, say, or exposure to diagnostic X-rays.
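To make those terms concrete, here is a minimal sketch; the passenger counts, sensitivity, and specificity below are invented for illustration, not real figures:

```python
# A minimal sketch with invented numbers, just to make the terms concrete.
# "Sensitivity" is the share of genuine threats the screen flags;
# "specificity" is the share of harmless travelers it waves through.

def screen_outcomes(passengers, threats, sensitivity, specificity):
    """Return (true positives, false negatives, false positives) for one screen."""
    harmless = passengers - threats
    true_positives = threats * sensitivity            # threats correctly flagged
    false_negatives = threats * (1 - sensitivity)     # threats missed
    false_positives = harmless * (1 - specificity)    # harmless travelers flagged
    return true_positives, false_negatives, false_positives

# Hypothetical year of U.S.-bound flying: 100 million passengers, 10 real threats.
tp, fn, fp = screen_outcomes(passengers=100_000_000, threats=10,
                             sensitivity=0.9, specificity=0.999)
print(f"caught: {tp:.0f}, missed: {fn:.0f}, innocents flagged: {fp:,.0f}")
# Even a 99.9%-specific screen flags roughly 100,000 innocent travelers.
```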
There are some screens that are clearly sub-optimal and therefore wrong, in the sense that they yield more false positives and more false negatives than some alternative that costs less. Eliminating those losers gets you to the set of “efficient” (non-dominated) screens. Among the set of efficient systems, there are tradeoffs among false positives, false negatives, and cost. No actual system has zero false positives or zero false negatives, and at any given cost of screening, requiring fewer false negatives will mean accepting more false positives.
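The dominance test is mechanical enough to put in a few lines of code. Here is a sketch; the screens and their error rates and costs are made up purely for illustration:

```python
# Hypothetical screens: (false-positive rate, false-negative rate, cost per passenger).
# All numbers are invented for illustration.
screens = {
    "pat-down only":     (0.020, 0.30, 1.0),
    "metal detector":    (0.010, 0.20, 2.0),
    "detector plus dog": (0.010, 0.05, 8.0),
    "body scan":         (0.005, 0.02, 25.0),
    "body scan, rushed": (0.015, 0.04, 25.0),  # same cost, worse on both error rates
}

def dominates(a, b):
    """a dominates b: no worse on any dimension, strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

efficient = [name for name, s in screens.items()
             if not any(dominates(t, s) for other, t in screens.items() if other != name)]
print(efficient)
# ['pat-down only', 'metal detector', 'detector plus dog', 'body scan']
```

Only the rushed body scan drops out, since the well-run scan beats it on both error rates at the same cost; among the survivors, fewer misses always costs more money or more false alarms.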
The question is one of ratios. Clearly, it’s worth keeping a thousand people off airplanes to avoid one mid-air explosion, but not worth keeping 100 million people off airplanes to avoid the same result.
So Ackerman’s question comes down to: How many people without bombs would a screen fine enough to catch the Underpants Bomber have kept out last year? My first-blush guess is that the answer would be in the dozens, not the tens of thousands, if you factor in a ticket bought for cash the day of the flight. If that’s right, then the screen as administered wasn’t tight enough.
But Ackerman’s analysis misses a key point: you’re not limited to one level of screen. A false positive on a mammogram is a bad thing, but its immediate result is an unnecessary biopsy, not an unnecessary mastectomy. In general, you want the first screen to be cheap to administer and very tight (“highly sensitive” in the technical jargon), accepting that it will produce a big crop of false positives, because the result of triggering that alarm is a follow-up test that can be much more expensive but is designed to be much less prone to false positives (“highly specific”).
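The arithmetic of stacking a cheap, sensitive first screen in front of an expensive, specific second one might look like this sketch; every number in it is invented, chosen only to be in a plausible range:

```python
# A sketch of the two-stage logic, all numbers invented. Stage 1 is cheap and
# tuned to be highly sensitive (it misses almost no threats but flags many
# innocents); stage 2 is expensive but highly specific, and is applied only
# to the passengers stage 1 flags.

PASSENGERS = 100_000_000   # hypothetical year of U.S.-bound travel
THREATS = 10               # hypothetical number of genuine threats

stage1_sensitivity = 0.99  # share of genuine threats stage 1 flags
stage1_flag_rate = 0.001   # share of innocent passengers stage 1 flags
stage1_cost = 0.10         # dollars per passenger (watch-list check, etc.)

stage2_cost = 25.0         # dollars per body scan plus hand-luggage check

flagged_innocents = (PASSENGERS - THREATS) * stage1_flag_rate
flagged_threats = THREATS * stage1_sensitivity
total_cost = (PASSENGERS * stage1_cost
              + (flagged_innocents + flagged_threats) * stage2_cost)

print(f"innocents sent to secondary screening: {flagged_innocents:,.0f}")
print(f"threats reaching secondary screening: {flagged_threats:.1f} of {THREATS}")
print(f"total screening cost: ${total_cost / 1e6:.1f} million")
# Roughly 100,000 expensive checks instead of 100,000,000: stage 1's false
# positives are the price of concentrating the specific test where it matters.
```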
There was no need to decide, just based on the information in hand, whether to let Mr. Abdulmutallab board the flight. All you needed to figure out was that he needed to have a body scan and a careful hand-luggage check before boarding. You might not want to do that to every passenger, but you’d be willing to do it to tens of millions of innocents to prevent one explosion.* Thought of that way, I’d say that the warning from Abdulmutallab père should have been enough, all by itself, to justify asking Abdulmutallab fils to step out of line and see the nice man in the booth.
Of course you’d want something much more substantial to put someone on an actual “no-fly” list; putting someone on that list is a drastic restriction of an internationally recognized human right, and should be reserved for people you don’t want on the plane even if they’re not carrying a bomb. But it shouldn’t take much information just to trigger an extra look.
* Put the expense to the government of the required check at $25, and the willingness-to-pay of an innocent passenger to avoid it at $75; both seem generous to me, but substitute your own numbers if you disagree. At $100 each, 10 million false-positive screens cost $1 billion. Saving 400 lives is worth something like $4 billion, and of course the ancillary costs of a successful terrorist attack of that scale would be at least an order of magnitude larger. Thus doing the scan unnecessarily 10 million times is much cheaper than failing to do it once when it was necessary.
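Spelled out, the footnote’s arithmetic looks like this; the per-screen figures are the stipulations above, and the roughly $10 million per statistical life is what the $4 billion figure implies:

```python
# The footnote's arithmetic. The per-screen dollar figures are the post's own
# stipulations; the $10M value of a statistical life is what "$4 billion for
# 400 lives" implies.

govt_cost_per_screen = 25       # dollars, cost to the government per check
passenger_disutility = 75       # dollars an innocent would pay to skip the check
false_positive_screens = 10_000_000

cost_of_false_positives = false_positive_screens * (govt_cost_per_screen
                                                    + passenger_disutility)

lives_at_risk = 400
value_per_life = 10_000_000     # implied: $4 billion / 400 lives
benefit_of_one_catch = lives_at_risk * value_per_life

print(f"cost of 10M unnecessary screens: ${cost_of_false_positives / 1e9:.1f} billion")
print(f"value of the lives saved by one catch: ${benefit_of_one_catch / 1e9:.1f} billion")
# $1 billion in false-positive costs against $4 billion in lives alone, before
# counting the wider economic and political costs of a successful attack.
```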