Metrics vs. metrics

The federal government requires many “metrics” from the agencies it funds. Many of these data are costly to collect and don’t do much good.

Over at Ezra Klein’s blog, Senator Kent Conrad is quoted as saying the following about job training:

There were over 40 different job training programs, with very little coordination between them, and different definitions of who was eligible. And there were almost no metrics on any of them.

This attracted the ire of one reader, who responds, in part:

I am a grant-writer and program manager at a nonprofit that receives many federal grants. WE LIVE AND DIE BY METRICS! Almost to the point, in my opinion, of spending too much money for compliance and performance-measurement systems, and not enough money on the actual training. In rough terms, we hire one full-time compliance and performance staff person for every 10 staff who actually work with people. All of the federal job training programs have performance metrics. In fact, many have started using the so-called Common Measures, a simplified, standardized system for measuring outcomes across different agencies…

Different agencies use additional measures, and yes, those get complicated. And I’m not saying these systems are perfect. But the federal government has become increasingly better, ever since the passage of GPRA, at using metrics. It drives me a little nuts to spend so much time out here in the field collecting, analyzing, and reporting performance metrics, and then to have a politician claim that “there were almost no metrics on any of them.” No, simply not true. They’re just ignorant of the data that is available….

One last note, in case you really want to dive into this issue: In some programs, future funding gets tied to performance, creating an unhealthy obsession with the metrics. For example, programs start “creaming,” or accepting only clients that are somewhat more likely to succeed. … Grantees like my organization are trying to serve people who are most in need, and we’re caught between these daunting skill deficiencies in our clients and the rigorous performance standards we must meet in order to keep our jobs. For someone like Conrad to claim that there are no metrics — he misses the point entirely.

Anyone who has participated in program evaluation finds much to embrace in both perspectives. Indeed, there is really no contradiction between Conrad’s and the anonymous letter writer’s perspectives. They are using the word “metric” in two different ways.

The federal government and other funders indeed require an incredible profusion of program activity data. These data are useful to document that you’ve actually spent their money to deliver services, and to characterize the people you have served. Such performance data (metrics, if you will) are collected in nice binders and placed on the shelf, where they generally reside, blissfully undisturbed. I’ve produced a few of these binders myself.

These binders don’t get much use because they can’t really tell policymakers such as Senator Conrad what’s actually working and how we should target (say) job training resources to do the most good. (Indeed, there are probably more good evaluation data regarding job training than in other important areas such as crime control.) Funders still spend impressive amounts of blood and treasure collecting these data, sometimes imposing perverse incentives or excessive burdens on service providers along the way.

It’s frustrating that these same funders rarely support or finance the kind of rigorous program evaluation that might actually answer these questions, and thereby do some good.

Postscript: Yeah, I just outed myself. I was initially reluctant to admit that I am a producer of said binders….

Author: Harold Pollack

Harold Pollack is Helen Ross Professor of Social Service Administration at the University of Chicago. He has served on three expert committees of the National Academies of Science. His recent research appears in such journals as Addiction, Journal of the American Medical Association, and American Journal of Public Health. He writes regularly on HIV prevention, crime and drug policy, health reform, and disability policy for the American Prospect and other news outlets. His essay, "Lessons from an Emergency Room Nightmare," was selected for the collection The Best American Medical Writing, 2009. He recently participated, with zero critical acclaim, in the University of Chicago's annual Latke-Hamentaschen debate.

2 thoughts on “Metrics vs. metrics”

  1. We refer to this as a DRIP situation: Data Rich, Information Poor. Unfortunately, turning that data into information is a nontrivial activity that is generally not funded. I spent a year trying to take the data collected in Medicaid and make information from it. I was somewhat successful, but not nearly as successful as I wanted to be.

    It's not simply the volume of data, it's having someone around to ask the right questions and someone else to help navigate the mutually incompatible data formats.

  2. Politicians like Conrad should not be assumed to be arguing in good faith. You can always attack a programme for insufficient evidence of effectiveness; and the targets (honest managers) cooperate as they conscientiously admit that the evidence is indeed imperfect, flawed by Goodhart's Law, etc. The demand for more and more evidence is a clever way of smothering any programme whatever. It's never applied to dubiously effective programmes the politicians support, like air passenger screening. Conrad should be challenged: what fair and broadly applied tests of effectiveness would make you support these programmes?
