Getting what you measure?

Incentives and the Tsarnaevs: You get what you measure, even in the counter-terrorism business.

A friend who spent years working for the Office of the Inspector General (OIG) in a giant federal department reflects on how the Tsarnaevs might have slipped through the cracks:

I wonder if law enforcement management and incentive systems contributed to Tamerlan Tsarnaev being discarded as a risk. At OIG, new cases are entered as “open and assigned” (O&A) and distributed randomly to investigators, who are then rewarded for “closing” the most O&As. Assembling a difficult case for prosecution counts the same as classifying a case as no-threat/no-action. Would such a system subtly encourage rapid closure of low-level cases (and cutting corners on the cases handed to the prosecutors)? Theoretically, random assignment evens out the workload within an office.

I think the FBI used a similar system, since OIG originally borrowed it from them. The FBI may have modified the original system, but OIG continues to use the original.

Note that we don’t know that the FBI made a mistake in this case, because we don’t know how many utterly harmless people the KGB tried to make trouble for. But the question is a sound one: What are the incentives facing officials who get such reports?

Author: Mark Kleiman

Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches the methods of policy analysis as applied to drug abuse control and crime control policy, working out the implications of two principles: that swift and certain sanctions don’t have to be severe to be effective, and that well-designed threats usually don’t have to be carried out.

Books: Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken); When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the “books of the year” by The Economist); Against Excess: Drug Policy for Results (Basic, 1993); Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989).

7 thoughts on “Getting what you measure?”

  1. You could give investigators more credit when they do the more difficult task of referring a case for prosecution. But then you’d be giving investigators an incentive to find people guilty, which would also be a problem.

  2. It’s worth mentioning that the FBI seems to have put all its resources into ginning up entrapment cases against marginal characters. It’s a good way to get convictions without ever having to do any investigative work.

    1. True dat. But they limit this to “terror” cases. I wouldn’t so much blame the FBI agents for this state of affairs as FBI management. “Terror” cases garner headlines for the agency, not the agents. And FBI management, of course, is responding to its own set of perverse incentives.

  3. Good points all around, the most important being that, as Mark says, we honestly don’t know at this point whether the FBI slipped up. It’s really easy to look back after the fact and say a mistake was made, but that isn’t necessarily so. And even if it was so, a mistake in one out of how many cases? It would make a difference whether it was one out of a hundred or one out of a couple of hundred thousand.

    I don’t know how they handle these reviews now. But going by what Mark’s friend and Ragout both point to, it is clear that any metric used in performance reviews is going to create incentives to maximize whatever it is. Metrics are both an administrative convenience and a quantified, “objective” standard intended to limit the effects of bias and lessen vulnerability to lawsuits. Given that, if it turns out that there is an effectiveness-of-review problem, maybe attention should be turned to the review process itself instead of performance metrics. Should more than one person be involved, for example by briefly talking each case through with a second person before making a decision (assuming they don’t do that now)? Saying even a couple of sentences out loud can often be helpful. Of course something like that would obviously invite congressional whining about redundancy and waste, but that shouldn’t be completely insurmountable for a creative bureaucratic mind.

    1. “Given that, if it turns out that there is an effectiveness-of-review problem, maybe attention should be turned to the review process itself instead of performance metrics.”

      This sounds right, and your two-heads-are-better-than-one suggestion seems simple enough that it might help somewhat. But incentivizing good, complex decision-making is a heck of a problem. (As Jimmy McNulty points out in the clip I linked, though, we’re always making decisions, whether we like it or not.)

    2. The problem, it seems, is that the choice of things an agent can do with a case is too limited. It’s obvious that you don’t want agents dithering endlessly on whether to close a file or refer for prosecution (that way lies featherbedding), but a note to revisit someone in six months or a year to see if they’ve done something actionable (as opposed to merely being pissed off at US foreign policy like millions of others) would be nice.

      How to do this in a usable way is not immediately clear. The situation reminds me a lot of one once described to me by an airline mechanic, who had only the options to mark a part safe for flight or not safe for flight. “Getting worn but still within tolerance; replace at next base with facilities” was not allowed, so a mechanic who spotted something marginal had the choice between grounding a plane immediately or hoping that the next mechanic saw the same thing they did. Which is bad, but try codifying some other version in a safe and rigorous form.

      1. I’ll second this and point to my linked “The Wire” clip above: it would benefit us all to quit ignoring the fact that people make judgment calls all the time! Denying them that option on paper doesn’t mean they don’t have to make them in real life.
        (The airline mechanic example you cite is a good one.)
