More Faith-Based Fudge from Charles Colson

Charles Colson pretends to misunderstand * my criticism in Slate * of his claim that his Bible-centered prison program reduced recidivism. In that essay, and in a follow-up published in this space * several days before Colson’s response, I pointed out that studying only the successful completers of a program does not allow a valid inference that the program actually worked, as opposed to merely “cherry-picking” those who would have succeeded anyway.

The methodological point, though a little complex when explained in words, is no more controversial among those who do empirical social science than the fact that the Earth orbits the Sun is controversial among astronomers.

I am reasonably hopeful that anyone who carefully reads what I had to say will follow its logic and not need to rely on any external authority. Anyone with the energy to do so can look up “selection effects” in the index of any textbook on social-science research methods. The fact that Colson dances around is that the dropouts from IFI did much worse than the control group. If the graduates had done better than the controls, and the dropouts no worse, then it would be reasonable to attribute the gains among the graduates to the effects of the program. But unless IFI somehow damaged the people who started it but did not complete it, the fact that the non-graduates did much worse, and the group as a whole no better, than the controls suggests that a selection effect was at work: the program screened out the bad risks, making the graduates look artificially good.

[Another issue I didn’t raise before becomes relevant because of Mr. Colson’s assertion that IFI rescued chronic recidivists from lives of crime. The return-to-prison rate among the control group was only 20%, compared to the 50-60% found in most studies of prison releasees. That’s consistent with the screening criteria, which called for prisoners who were otherwise assigned to minimum security. Obviously, this was a fairly light-duty group of offenders in the first place.]

But the technical details of this sort of argument are always frustrating for non-experts to try to follow. After all, at first blush, both Colson’s verbal formulation of the problem (No program works for those who don’t stick with it, so studying those who complete the program is not only legitimate but virtually inevitable) and mine (A prison program that counts only those who get jobs as having “completed” it will always look good, because getting a job is a good predictor of staying out of trouble, so you can’t tell from studying completers only whether the program actually worked) seem perfectly reasonable.
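For readers who find a concrete demonstration easier to follow than either verbal formulation, here is a toy simulation of the screening dynamic described above. The numbers are entirely hypothetical (not IFI data): every subject gets an underlying recidivism risk, the “program” has no causal effect at all, and high-risk participants are simply more likely to drop out before completing it. The graduates then look much better than the controls even though the program did nothing.

```python
import random

random.seed(1)
N = 100_000

def recidivates(risk):
    """One subject's outcome: reoffend with probability equal to their risk."""
    return random.random() < risk

# Control group: risks drawn uniformly, no program at all.
control = [recidivates(random.random()) for _ in range(N)]

# Treated group: same risk distribution, same outcome model (the program
# has NO effect on behavior), but low-risk people are more likely to
# stick with the program to completion.
grads, dropouts = [], []
for _ in range(N):
    risk = random.random()
    outcome = recidivates(risk)          # identical outcome model to controls
    completed = random.random() > risk   # low-risk subjects tend to finish
    (grads if completed else dropouts).append(outcome)

def rate(xs):
    return sum(xs) / len(xs)

print(f"control:     {rate(control):.2f}")            # ~0.50
print(f"graduates:   {rate(grads):.2f}")              # ~0.33 — looks great
print(f"dropouts:    {rate(dropouts):.2f}")           # ~0.67 — much worse
print(f"all treated: {rate(grads + dropouts):.2f}")   # ~0.50 — no effect
```

Counting only the completers makes a do-nothing program appear to cut recidivism by a third, while the treated group as a whole does exactly as well as the controls — precisely the pattern at issue here.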

Those too rushed to try to wrap their heads around the mathematics of selection bias might want to consider some of the external evidence that suggests that, in this instance, one ought to believe me and not Mr. Colson (other than the fact that his response never addresses my careful analysis of the methodological point):

1. This is what I do for a living, and getting it wrong would be a professional disgrace. Mr. Colson’s occupation does not involve expertise in empirical methods.

2. Mr. Colson mentions a study * by Byron Johnson of the University of Pennsylvania. But, although Dr. Johnson’s study was paid for by Colson’s organization, Mr. Colson does not quote Dr. Johnson as agreeing with him or disagreeing with me on the issue on which we differ. Indeed, the claim that the Prison Fellowship made, and that I criticized, is not made in Dr. Johnson’s study. (Nor has Dr. Johnson responded to my inquiries on the subject, which started more than a month ago.)

3. My essay in Slate quoted John DiIulio, a former board member of Mr. Colson’s organization and the founder of the center at Penn where Dr. Johnson’s study was done, as agreeing that the results Dr. Johnson reports do not support the claim Mr. Colson makes, though Prof. DiIulio adds that no one study can show conclusively that a program worked or didn’t work. Surely Mr. Colson doesn’t doubt that Prof. DiIulio (who has been tenured at both Princeton and Penn) understands the research question involved, and presumably he wouldn’t include Prof. DiIulio — who ran the White House faith-based programs office at the beginning of the current administration — with me in his category of “people whose objective is to score points against the President.”

4. I can also report — though perhaps Mr. Colson would disbelieve me — that of the large volume of email I received after my essay was published, the tiny fraction that came from people with professional training in the social sciences was uniformly supportive. (One correspondent, a teacher, said that he planned to use my essay as a case study for a course in research methods.)

5. Moreover, while the contents of my email in-box are not publicly verifiable, the contents of the Blogosphere are. Slate is widely read among bloggers, and several, including Eugene Volokh * and Kevin Drum, linked to it. That means that many people competent to criticize my analysis were aware of it. I would almost certainly know if any criticism had been posted, and as far as I am aware none has been.

A number of my correspondents asserted that concentrating on the statistics ignores the human element, and the spiritual element, of the process of conversion. No doubt that is true. But the claim made by Mr. Colson’s group, and by the White House, was that the analysis done at Penn had statistically demonstrated the efficacy of IFI in reducing recidivism. My essay addressed the merits of that claim, and not the very different question of the value of Christian missionary efforts, whether in prison or elsewhere. Having made a claim about what the numbers show, Mr. Colson can reasonably be held to scientific and not faith-based standards of evidence.

Therefore I claim that an unbiased observer ought to believe that, in this instance, I am right and Mr. Colson wrong. And if so, then Mr. Colson must now be engaging in deliberate deception, though he might not have been when his organization first claimed statistical “success” for its program. Once his claim had been challenged, Mr. Colson could have known the truth of the matter, if he wanted to, by asking people he knows and trusts who are competent to judge.

It should not be necessary to remind Mr. Colson that both knowing the truth and speaking the truth are activities highly spoken of in Scripture.

Update: I announce my low-cost, guaranteed-to-work recidivism-prevention program. [*]

Author: Mark Kleiman

Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches the methods of policy analysis as applied to drug abuse control and crime control policy, working out the implications of two principles: that swift and certain sanctions don't have to be severe to be effective, and that well-designed threats usually don't have to be carried out.

Books:
Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken)
When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the "books of the year" by The Economist)
Against Excess: Drug Policy for Results (Basic, 1993)
Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989)

UCLA Homepage
Curriculum Vitae
Contact:
