Keeping guns away from kids

I have in the past expressed the view that the number of guns in the hands of law-abiding citizens is unlikely to influence the number of crimes committed with guns. Honesty therefore compels me to report on some new research by Phil Cook and Jens Ludwig showing that the frequency with which minors carry guns, threaten people with guns, and are themselves threatened with guns seems to depend in part on the prevalence of adult gun ownership in the county, even after controlling for other variables that might connect the two phenomena (such as the overall crime rate).

So if gun control measures could reduce the prevalence of adult gun ownership, they might reduce the incidence of deadly violence among kids. However, that’s a big “if”: the evidence that gun control measures can influence gun possession rates is sparse at best.

I’m left thinking that person-specific gun control policies are likely to be more effective, as well as more politically palatable, than more sweeping attempts to reduce gun ownership. But it now seems hard to deny that having lots of guns around in the hands of adults makes them more available to children.

Is deceptive spam fraud?

Spam is annoying. But a spam filter will cut down on the volume substantially, and deleting what gets through the filter is usually not an outrageous burden. If I get an email ad for breast enhancement and the subject line is BIGGER BOOBS TODAY, and I’m satisfied with my current cup size, I can delete the message without attending to it for more than a fraction of a second. The same is true when relatives of dead tyrants offer to split the Swiss bank account with me.

Maybe there needs to be a legally enforceable do-not-spam list, or some non-legal approach managed by the consumers’ ISPs (e.g., a tiny per-message charge) to discourage bulk spamming, but it’s a reasonably manageable problem most of the time. (It’s less manageable when I’m on the road, paying by the minute to connect from a hotel phone, and using Webmail, which doesn’t implement filters, over a slow connection.)

Moreover, while the messages are annoying there’s at least an argument that the senders are within what ought to be their rights in asking me whether I want to do business with them.

But a message whose subject line is “Order Confirmation” or “Responding to your request” or “Your phone call from last week” or whose “From:” address is sysadmin@ucla.edu or Microsoft.com is a different proposition. Those messages look as if they can’t be safely ignored. (Well, the sysadmin and Microsoft ones don’t appear that way to me now, but they did the first couple of times.) Even when I know most of them are spam, I can’t just bulk-delete them without sometimes missing a message I wanted to see.

So I wind up opening them before deleting, which takes time: and time, as the old saying has it, is what life’s made of. The senders of those emails are fooling me into spending my irreplaceable time, and attention, on their worse-than-worthless messages. (So are the bulk mailers who send their pitches disguised as checks; fortunately, most of them are too cheap to use first-class mail, and the bulk-mail-rate legend where the postage ought to be is a dead giveaway; no one sends real checks, or real bills, by bulk mail.)

Here, then, is my question: If time has value, then why doesn’t fooling someone out of some of his time constitute fraud? The loss to any one victim from any one email is de minimis, but the aggregate losses to an individual over a year aren’t, nor are the aggregate losses to all the recipients of a mass false-cover spam.

I’m told that under California law it’s only fraud if there’s a loss of personal property. (False-cover spam might still be a deceptive business practice under Cal. Business and Professions Code 17200.) Federal law, by contrast, covers such intangible losses as depriving the public of the honest services of its employees, but the basic mail fraud statute talks of schemes “to obtain money or property.”

Still, assuming that the current definition of fraud doesn’t extend to defrauding people of their time, why shouldn’t the definition be changed?

The Decline of Prosecutorial Ethics

When DNA identification became available for forensic use, the police and prosecutors loved it, while the defense bar kicked and struggled to keep it out of court as unreliable. I remember my disgust at the Luddism, or pretended Luddism in the interest of keeping guilty people on the street, of the ACLU and the National Association of Criminal Defense Lawyers.

Update and (probable) correction A reader more expert in these matters than I recalls things differently. In his version of the story, which is presumably more accurate than mine, the defense bar’s objections about validity were well-supported factually, and the result was to impose scientifically defensible standards both about how the evidence was to be gathered and analyzed and what testimony could be offered based on it. I’ve asked him for documentation, but my reader is someone I’ve known for years, and I’ve never known him to talk through his hat, so I offer a (provisional) apology to the ACLU and NACDL.

Fortunately, the prosecutors won and the defense bar lost, and the use of DNA evidence is now routine. Naturally, just as the civil-liberties types asserted at the time, some of what comes in to court as DNA forensics turns out to be garbage science offered by incompetent or unduly complaisant technicians, but on balance DNA evidence has been bad for the guilty and good for the innocent.

As a result, some of those innocents, who had previously been convicted, are now being sprung, though in almost all cases with no, or negligible, compensation for the wrong done them.

[The high rate of demonstrated innocence in some categories of cases, especially stranger rapes, ought to have led to more soul-searching than it has about the reliability of the eyewitness identifications and jailhouse-snitch testimony that almost invariably “proved” that someone was guilty whom we now know, more or less for certain, wasn’t.

[It’s my guess that about 2% of the people sent to prison actually didn’t do the act they were convicted of having done. That’s about 10,000 people a year. Now a 98% specificity rate in a process run by human beings isn’t too shabby, but that’s still a lot of ruined lives. There’s always some sort of tradeoff between crime prevention and accuracy; the only way to never punish an innocent person is never to punish anyone at all. But we could keep the prisons full if we only imprisoned those actually known beyond reasonable doubt to be guilty, and in my view the crime-control losses incident to tightening up somewhat on the standard of proof would be slight.

[One way to think about it is that we can take some of the gains from new technology in the form of reduced risks of convicting the innocent, even as we take most of them in the form of improved chances of catching the guilty. In some instances, such as photo lineups, current practices are known to produce more identifications than alternatives, with the difference consisting almost entirely of false positives. [*] Where the police won’t make the necessary changes, it’s up to the prosecutors to pressure them by announcing in advance that after some date they won’t offer evidence not gathered in the most reliable way.]

Now that the positions are reversed, it’s the defense bar yelling for the right to reopen old cases on new evidence, and the prosecutors — in most cases, prosecutors who offered inferior-quality biological forensics at trial and used them to help get convictions — screaming “finality of verdict” and fighting as hard as they can to keep innocent people locked up. [*]

What most non-participants don’t understand about the criminal law is that an appeal isn’t supposed to be a fresh review of the evidence: it’s almost exclusively about errors made at trial. As a matter of law, the fact that someone convicted in due form is factually innocent of the charge is not, in general, a reason to let him out of prison. (Justices Scalia and Thomas have argued [*] that the execution of a factually innocent person would not constitute a Constitutional violation.) So the ability of demonstrably innocent people behind bars to have their cases reopened depends on state law and state rules of criminal procedure, and the prosecutors have had considerable success in opposing such attempts both in the legislatures and in the courts. (Most horribly, some states explicitly allow police to destroy the evidence after some period; the proof of innocence in some cases may be literally going down the drain as I write.)

While I thought the defense bar’s earlier position outrageous, the prosecution’s current position is incalculably worse. It should go without saying that keeping innocent people in prison is not a means of crime control. Of course, any organization hates to admit error, but this reflects something uglier: the degeneration of the traditional prosecutor’s ethic, which held that “the government wins its case whenever justice is done,” into a merely adversarial, notches-on-the-belt mentality. No doubt the rightward swing on criminal justice issues, combined with the fact that District Attorneys are elected, has something to do with it. But that doesn’t make it one whit less disgraceful. No self-respecting prosecutor should ever tell a court, “We think it’s not quite certain that this person is innocent, so we propose to keep him in prison.” (What’s really amazing is that the victims and their families as often as not want to keep the matter “closed,” as if having the wrong person punished were a pretty good substitute for having the right person punished.)

In addition to being disgusting, prosecutorial stubbornness about freeing the wrongly convicted sends the rest of us a really bad signal about the ethical standards prosecutors operate under in other domains. In the post-9-11 world, there is a genuine need — not as big a need, perhaps, as John Ashcroft would have you believe, but a genuine need nonetheless — to rethink some aspects of the criminal process in terrorist-related cases. Inevitably, that means giving prosecutors more power, and thus putting more trust in them. This is, therefore, an especially bad time for them to be acting in an untrustworthy fashion.

I’d like to hear some career prosecutors — a group that includes some of the finest human beings I have ever met — speak out more loudly on this topic.

Update: More here [*] from TalkLeft.

A Breakthrough in Reducing Recidivism

I continue to get email from supporters of Charles Colson, who keep explaining to me (clearly I’m a slow learner) that it’s perfectly OK to define the “graduation” requirements of a program to include lots of accomplishments that strongly correlate with not committing any more crimes, and then use the fact that “graduates,” so defined, don’t commit many crimes as evidence that the program was a success.

If so, then I have great news to report. I have developed a low-cost program that is 100% guaranteed to reduce recidivism.

Here’s the program: We’ll take a group of people getting out of prison and instruct them (1) to take a Vitamin E capsule every day and (2) not to commit any crimes. We’ll call those who follow the instructions for two years (they tell us they’re taking their vitamins, and that they haven’t committed any crimes, and they haven’t been arrested) “graduates,” and the others (the group consisting of anyone who says he’s stopped taking his vitamins, or admits committing a crime, or has been arrested) “dropouts.” Then we’ll follow the graduates, and the control group, for another year, and see which group gets arrested and re-imprisoned more often.

Bet you anything you like that our graduates do better than the controls, better even than the graduates of IFI. Because Vitamin E and a warning to stay out of trouble constitutes an effective program? No. Because not getting arrested for two years is a very good predictor of not getting arrested in the third year, just as holding a job and belonging to a church are very good predictors of not getting arrested.

How would we know that the Vitamin E cure was “working” merely through selection effects, rather than having some real impact? Because we would find that the dropout group had a much higher re-offense rate than the controls, just like the dropout group in the IFI study. That’s the telltale sign of “cherry-picking.”

And that’s why an honest evaluation has to follow the drop-outs as well as the graduates. It’s not that we expect the drop-outs to improve; but when we find that the dropout group actually does worse than the control group, then we have to ask whether some or all of the reported difference between graduates and controls reflected selection effects rather than any actual impact of the program.
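For readers who would rather see the mechanism run than described, here is a minimal simulation of the Vitamin E experiment. Everything in it is an illustrative assumption rather than data from any study: each releasee gets a fixed latent arrest risk, and the "program" does nothing at all by construction.

```python
# A minimal sketch of the Vitamin E thought experiment. All numbers are
# illustrative assumptions; the "treatment" has no effect by construction.
import random

random.seed(1)

def draw_risk():
    # Assumed latent per-year arrest risk, fixed for each person.
    return random.uniform(0.05, 0.60)

def arrested(p):
    return random.random() < p

n = 10_000

# Control group: no program; just observe the follow-up (third) year.
control_year3 = [arrested(draw_risk()) for _ in range(n)]

# Program group: statistically identical people. "Graduates" are simply
# those who happened not to be arrested during the two program years.
grad_year3, drop_year3 = [], []
for _ in range(n):
    p = draw_risk()
    graduated = not arrested(p) and not arrested(p)  # clean in years 1-2
    (grad_year3 if graduated else drop_year3).append(arrested(p))

rate = lambda xs: sum(xs) / len(xs)
print(f"controls : {rate(control_year3):.1%} arrested in year 3")
print(f"graduates: {rate(grad_year3):.1%}")  # well below the controls
print(f"dropouts : {rate(drop_year3):.1%}")  # well above the controls
```

Run it and the "graduates" handily beat the controls while the dropouts do much worse, the cherry-picking signature just described, even though the treatment is inert.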

Previous post, with links, here. [*]

More Faith-Based Fudge from Charles Colson

Charles Colson pretends to misunderstand * my criticism in Slate * of his claim that his Bible-centered prison program reduced recidivism. In that essay, and in a follow-up published in this space * several days before Colson’s response, I pointed out that studying only the successful completers of a program does not allow a valid inference that the program actually worked, as opposed to merely “cherry-picking” those who would have succeeded anyway.

The methodological point, though a little complex when explained in words, is no more controversial among those who do empirical social science than the fact that the Earth orbits the Sun is controversial among astronomers.

I am reasonably hopeful that anyone who carefully reads what I had to say will follow its logic and not need to rely on any external authority. Anyone with the energy to do so can look up “selection effects” in the index of any textbook on social-science research methods. The fact that Colson dances around is that the dropouts from IFI did much worse than the control group. If the graduates had done better than the controls, and the dropouts no worse, then it would be reasonable to attribute the gains among the graduates to the effects of the program. But unless IFI somehow damaged the people who started it but did not complete it, then the fact that the non-graduates did much worse, and the group as a whole no better, than the controls suggests that a selection effect was at work: the program screened out the bad risks, making the graduates look artificially good.

[Another issue I didn’t raise before becomes relevant because of Mr. Colson’s assertion that IFI rescued chronic recidivists from lives of crime. The return-to-prison rate among the control group was only 20%, compared to the 50-60% found in most studies of prison releasees. That’s consistent with the screening criteria, which called for prisoners who were otherwise assigned to minimum security. Obviously, this was a fairly light-duty group of offenders in the first place.]

But the technical details of this sort of argument are always frustrating for non-experts to try to follow. After all, at first blush, both Colson’s verbal formulation of the problem (No program works for those who don’t stick with it, so studying those who complete the program is not only legitimate but virtually inevitable) and mine (A prison program that counts only those who get jobs as having “completed” it will always look good, because getting a job is a good predictor of staying out of trouble, so you can’t tell from studying completers only whether the program actually worked) seem perfectly reasonable.

Those too rushed to try to wrap their heads around the mathematics of selection bias might want to consider some of the external evidence that suggests that, in this instance, one ought to believe me and not Mr. Colson (other than the fact that his response never addresses my careful analysis of the methodological point):

1. This is what I do for a living, and getting it wrong would be a professional disgrace. Mr. Colson’s occupation does not involve expertise in empirical methods.

2. Mr. Colson mentions a study * by Byron Johnson of the University of Pennsylvania. But, although Dr. Johnson’s study was paid for by Colson’s organization, Mr. Colson does not quote Dr. Johnson as agreeing with him or disagreeing with me on the issue on which we differ. Indeed, the claim that the Prison Fellowship made, and that I criticized, is not made in Dr. Johnson’s study. (Nor has Dr. Johnson responded to my inquiries on the subject, which started more than a month ago.)

3. My essay in Slate quoted John DiIulio, a former board member of Mr. Colson’s organization and the founder of the center at Penn where Dr. Johnson’s study was done, as agreeing that the results Dr. Johnson reports do not support the claim Mr. Colson makes, though Prof. DiIulio adds that no one study can show conclusively that a program worked or didn’t work. Surely Mr. Colson doesn’t doubt that Prof. DiIulio (who has been tenured at both Princeton and Penn) understands the research question involved, and presumably he wouldn’t include Prof. DiIulio — who ran the White House faith-based programs office at the beginning of the current administration — with me in his category of “people whose objective is to score points against the President.”

4. I can also report — though perhaps Mr. Colson would disbelieve me — that of the large volume of email I received after my essay was published, the tiny fraction that came from people with professional training in the social sciences was uniformly supportive. (One correspondent, a teacher, said that he planned to use my essay as a case study for a course in research methods.)

5. Moreover, while the contents of my email in-box are not publicly verifiable, the contents of the Blogosphere are. Slate is widely read among bloggers, and several, including Eugene Volokh * and Kevin Drum, linked to it. That means that many people competent to criticize my analysis were aware of it. I would almost certainly know if any criticism had been posted, and as far as I am aware none has been.

A number of my correspondents asserted that concentrating on the statistics ignores the human element, and the spiritual element, of the process of conversion. No doubt that is true. But the claim made by Mr. Colson’s group, and by the White House, was that the analysis done at Penn had statistically demonstrated the efficacy of IFI in reducing recidivism. My essay addressed the merits of that claim, and not the very different question of the value of Christian missionary efforts, whether in prison or elsewhere. Having made a claim about what the numbers show, Mr. Colson can reasonably be held to scientific and not faith-based standards of evidence.

Therefore I claim that an unbiased observer ought to believe that, in this instance, I am right and Mr. Colson wrong. And if so, then Mr. Colson must now be engaging in deliberate deception, though he might not have been when his organization first claimed statistical “success” for its program. Once his claim had been challenged, Mr. Colson could have known the truth of the matter, if he wanted to, by asking people he knows and trusts who are competent to judge.

It should not be necessary to remind Mr. Colson that both knowing the truth and speaking the truth are activities highly spoken of in Scripture.

Update I announce my low-cost, guaranteed-to-work recidivism-prevention program. [*]

Chuck Colson and the Starfish Principle

Before getting down to serious business regarding my Slate essay (*), three corrections on factual points:

1. The original version described the program as “fundamentalist.” It turns out that the Prison Fellowship is on the “evangelical” side of the fundamentalist/evangelical divide, a distinction I was aware of but don’t quite fully understand. Many of those who call themselves “evangelicals” regard “fundamentalist” as a term of abuse. I should have been more careful, and I apologize to anyone who was offended. The error has been corrected on Slate.

2. The document I quoted about faith appears in the King James Version as “The Epistle of the Apostle Paul to the Hebrews.” But apparently modern scholars, and more contemporary Bible translations, reject that traditional attribution. I’m happy to be corrected on that point.

3. The difference between the experimentals and the controls isn’t statistically significant, so saying that the experimentals did “somewhat worse” can’t really be supported. All one can say with confidence is that the Penn study does not provide support for the claim made by PF and the White House that IFI reduced crime among its participants.

Most of the correspondence from my Slate essay that wasn’t merely vituperative concerned the issue of selection bias. The objections came in two forms, equivalent statistically but not emotionally.

Several of my pen-pals used the parable of the starfish: On a beach where several thousand starfish lie stranded after a storm, a little boy is picking them up, one by one, and throwing them back into the water. A grown-up says to him, “What you’re doing is very nice, but it can’t possibly make a difference to all these starfish.” The boy nods, picks up another starfish, throws it into the ocean, and says, “Made a difference to that one.” If IFI helped some prisoners, why criticize it for not helping other prisoners?

The other form of the same objection is that, since no treatment works on those who don’t get it, it’s logical to measure results on completers only rather than all attempters. (One of those making this objection was, rather frighteningly, an MD engaged in cardiology research.) Actually, the medical analogy, which several of my correspondents invoked, is a good one, and perhaps a numerical example will help:

Imagine a disease, and a proposed treatment for that disease. We want to know whether the treatment works. What experiment should we do, and how should we interpret the results?

Take 2000 people with the disease. Randomly select 1000 of them as “experimentals,” leaving the other 1000 as “controls.” The experimentals get offered the treatment; the controls we just observe.

Now say that, of the 1000 controls, 100 get better. That’s a recovery rate of 10%. That’s the target the treatment has to beat to convince us that it works.

Assume that half of the experimentals accept the treatment and follow through to the end. So we have 500 “completers” (or “graduates,” in the IFI context). The other 500 are “drop-outs.”

Now imagine that 75 of the 500 completers recover. That’s a recovery rate of 15%. The treatment worked!

Wait. Not so fast. We need to look at the dropouts. If 50 of them recovered, the same rate as the control group, then we can say the treatment effect was real: 125 of the experimentals, but only 100 of the controls, got better. But what if only 25 of the drop-outs recovered? Then what would we say?

If we look at the whole group of 1000 people who were offered treatment, 100, or 10% of them, recovered, the same as in the control group. So being offered the treatment didn’t do anything to improve the recovery rate. Something’s wrong here. We’d have to say that something caused the people in the experimental group who were more likely to recover to also stick with the treatment. (Perhaps people who start to feel better have more energy to continue.)
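To make the bookkeeping explicit, here is the same arithmetic in a few lines of code; the figures are just those of the hypothetical above.

```python
# The worked example above, analyzed both ways.
control_rate   = 100 / 1000        # 10% recover untreated
completer_rate = 75 / 500          # 15%: the flattering comparison
dropout_rate   = 25 / 500          # 5%: the half of the story that gets ignored
itt_rate       = (75 + 25) / 1000  # "intention-to-treat": everyone offered

print(f"controls          : {control_rate:.0%}")
print(f"completers only   : {completer_rate:.0%}")
print(f"dropouts only     : {dropout_rate:.0%}")
print(f"intention-to-treat: {itt_rate:.0%}")  # 10%, same as the controls
```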

It’s not that we’re blaming the treatment for the bad outcomes of the dropouts: it’s just that we’re noticing that its seemingly higher cure rate came from cherry-picking its participants rather than actually curing them.

The only other possibility, assuming that the original randomization succeeded in producing similar groups, is that the treatment helped some people and actually hurt others. That’s possible in the medical context. I suppose it’s conceivable that IFI actually made some of its participants worse, but it’s not obvious how that would happen. Selection effects are a much more likely explanation for the actual pattern of results. Anyway, how happy would we be with a program that hurt more people than it helped?

[One sophisticated correspondent suggested that the volunteers for IFI might have included a higher proportion of manipulative inmates, and that therefore the two groups weren’t really matched. That could be true, though at first guess you’d think that people who volunteered would be disproportionately those who really wanted to turn their lives around. But at best that “negative selection effect” theory means that we’re not sure the program failed; as mere speculation, it can’t justify a claim that the program succeeded. The same applies to claims that it might lead to better outcomes after the study period, or to gains other than reductions in crime. They’re all conceivable, but there’s no evidence for them, and the claim made was that the program was a proven method of crime control.]

If IFI had succeeded according to “just-one-starfish” rules — helping some while not hurting others — it could reasonably claim success. But it didn’t, according to the numbers its advocates put out.

Several people asked, rather huffily, if UCLA uses all its matriculants, rather than only its graduates, when it advertises the success rates of its students. Good question. I probably don’t want to know the answer.

Many of the extreme claims made for the job-market benefits of higher education, and especially of elite higher education, are merely applications of selection bias. (One of my colleagues at Harvard used to claim that the institution’s operating principle was to select the best, get out of the way while they educated one another, and then claim credit for their accomplishments.) Scholars who have worked hard to overcome the selection-bias problem report that, even correcting for that, higher education still has a respectable rate of return in financial terms.

I’m still skeptical, because some of those increased earnings for graduates presumably come at the expense of the people they beat out for jobs; the social rate of return must therefore be less than the private rate, and it’s conceivable that the marginal social rate of return to higher education, in purely financial terms, is negative. Whether the non-financial benefits of higher education are large enough to compensate would be a different, and even harder, question.

But, yes, an honest study of the effects of higher education would have to look at drop-outs as well as graduates.

I was also accused of “hypocrisy” for liking literacy programs despite the absence of true random-assignment studies. There are good logical reasons to think that boosting reading scores improves job-market opportunity, and job-market success reduces recidivism. There is no doubt that adults with low reading scores can be taught to read better at relatively low expense. So I have fairly good confidence that prison literacy programs work, and are cost-effective compared to other means of crime control.

But note that my conclusion was not that we should launch a massive program of prison education, but that we ought to run the experiment. If it came out negative, on a true random design including the dropouts, I’d be disappointed and a little surprised, but I’d have to say either “I give up” or “Back to the drawing-board.” I wouldn’t tell fairy-tales about how it really worked, if you only look at the people it worked for.

Some of my readers wanted to take this as a matter of perspective, or of opinion. Sorry. It’s not. It’s a matter of black-letter statistical method, something that could be on the exam in any first-year methods course. Each of us gets to choose a viewpoint, but we all have to work from the same facts.

Prison literacy programs as crime control

Since crime competes with licit work for the time of ex-offenders released from prison, and since literacy contributes statistically both to the chance of getting a job and to the wages available, it stands to reason that improving the reading skills of inmates would tend to reduce their recidivism. That appears to be the case; graduates of prison literacy programs are about 20% less likely than otherwise similar non-graduates to return to prison.

Since the cost of a typical prison literacy program is only about $1000 per inmate, if that difference reflected a real program effect prison literacy programs would be among the most cost-effective crime-fighting techniques, preventing serious crimes at a cost of about $2000 per crime averted. Preventing crime by just building more prisons costs at least half again as much.

[I learned about this from my students Audrey Bazos and Jessica Hausman, who wrote a prize-winning master’s project on the topic.*]

Moreover, even a modest reduction in reincarceration, far smaller than the published estimates, would make literacy programs better than a break-even proposition in purely fiscal terms. The cost saving to a typical state from avoiding a two-year prison term is between $40,000 and $50,000; a program that costs $1000 only needs to reduce the recidivism rate by two or three percentage points to pay for itself in reduced reincarceration costs alone.
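The break-even arithmetic is short enough to check; these are the round numbers from the paragraphs above, not new data.

```python
# Back-of-the-envelope break-even for a prison literacy program.
program_cost   = 1_000   # dollars per inmate, typical program cost
prison_savings = 45_000  # midpoint of the $40,000-$50,000 two-year-term saving

breakeven = program_cost / prison_savings
print(f"break-even drop in reincarceration: {breakeven:.1%}")  # about 2.2 points
```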

But in the absence of a true random-assignment experiment, it’s hard to know how much of the measured difference in recidivism between program graduates and other inmates reflects the factors that lead some inmates, but not others, to enter literacy programs and stick with them, rather than the effects of the programs themselves.

An experimental test of this idea would be conceptually simple and operationally manageable. Identify a group of, say, 400 serious offenders within a few months of scheduled prison release. Randomly identify half of them, and use some combination of program availability, persuasion, and incentives to attempt to raise the participation of that group, but not of the control group, in literacy training. Use automated criminal history systems to track reincarceration, and compare the two groups: not just the graduates of the literacy program, but everyone in the group randomly selected for aggressive “marketing” of literacy training.
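In code, the analysis at the end of such an experiment would be no more complicated than the sketch below. The outcome rates inside it are invented purely for illustration (a 40% baseline return rate and an assumed modest effect of the offer); in the real study they would come from the criminal-history records.

```python
# A sketch of the proposed random-assignment design. The rates inside
# reincarcerated() are assumptions for illustration only.
import random

random.seed(0)

N = 400
ids = list(range(N))
random.shuffle(ids)
offered, control = set(ids[:N // 2]), set(ids[N // 2:])

def reincarcerated(person):
    # Assumption: the offer of literacy training cuts the return rate
    # across the whole offered group from 40% to 34%.
    return random.random() < (0.34 if person in offered else 0.40)

offered_rate = sum(reincarcerated(p) for p in offered) / len(offered)
control_rate = sum(reincarcerated(p) for p in control) / len(control)
print(f"offered literacy training: {offered_rate:.1%} back in prison")
print(f"controls                 : {control_rate:.1%} back in prison")
# The comparison is everyone randomly selected for the offer versus the
# controls, not graduates versus controls: intention-to-treat again.
```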

For example: in California prisons, educational programs and work programs are scheduled for the same time slots. Prisoners who don’t get money from their families or friends — who tend to be the poorest, most socially disconnected, and least literate, and who therefore have among the highest recidivism rates — need to work for canteen money, and therefore can’t take reading classes. By selecting a group of such inmates, dividing it randomly in half, and offering to pay the inmates in one half the pennies per hour they would otherwise be earning in prison jobs to learn to read instead, one could boost participation in the “experimental” group and not in the “control” group, without the moral onus of assigning anyone to a no-treatment condition.

Even if operational or ethical concerns made random assignment infeasible, it would be possible to design interventions to raise the rate of literacy-training participation in one or a few prisons, and use as controls inmates of other, similar prisons or releasees from those same prisons in the period before the program started.

With somewhat greater difficulty and expense, the outcome measures could be expanded past reincarceration to cover workplace and family functioning, physical and psychological health, and self-reported crime.

One reason for the rather sad performance of most in-prison rehabilitation efforts is that they focus on behavioral change. In creating behavioral change, environment matters: talking to someone about not using drugs when he’s in prison may have little effect on his drug use when he’s out of prison. Because literacy is a set of skills rather than a set of behaviors, it seems likely to be much more portable.

It is a sad fact about American politics that “crime” as a political issue has very little to do with figuring out ways to actually reduce victimization or the criminal riskiness of various social environments. Even if it were true, as I think it probably is, that putting a million dollars into prison literacy programs would prevent substantially more crimes than putting that same million dollars into locking up another forty inmates for a year, a politician who preferred literacy programs to prison construction would bear the politically devastating label “soft on crime.”

[Mitch Daniels, before he started running for governor of Indiana, when he was still helping GWB wreck the fiscal stability of the federal government as his first OMB Director, specified the teensy program of Federal aid to state prison literacy programs as one of the things that ought to be cut to restore balance to the budget. (*) Anyone who can close a twelve-figure deficit by cutting a seven-figure program shouldn’t be wasting his time in the public service; he should be out there in the private sector running Ponzi schemes. But of course the point of his proposal wasn’t to save money, but to indicate his dedication to the holy cause of making bad people suffer.]

So it would be fatuous to imagine that just showing that funding prison literacy programs is cost-effective crime control will magically make those programs popular.

Still, there is some benefit to knowing whether prison literacy is as good a deal socially as it appears to be. Anyone who knows of a corrections official who might let his or her institution be used as an experimental venue will earn my gratitude by putting me in touch with that person.

Latest research from Professor Harold Hill

Here’s a sure-fire method for producing a “successful” program: measure your successes, and ignore your failures. Works every time. What’s astonishing is how easy it is to get some academic to write it up, how willing the newspapers are to report the resulting “study” as if it contained actual information, and how many politicians will then cite your “success” as scientifically documented fact.

The latest incarnation of this particular confidence trick is from Chuck Colson’s Prison Fellowship, which, thanks to the generosity of then-Governor George W. Bush, runs its own prison (for born-again Christians only) in Texas, and now has similar programs in Iowa, Minnesota, and Kansas. A report from the University of Pennsylvania’s Center for Research on Religion and Urban Civil Society found that graduates of the program (called InnerChange) were only half as likely as matched controls to return to prison. Or so we are told in a press release from the Prison Fellowship. The White House gave Colson a nice photo-op with Bush, and Ari Fleischer said “This is an initiative that the President believes very deeply in to help reduce recidivism in our federal prisons and prisons everywhere.” Religion News Service picked up the report, and of course the editorial page of the Wall Street Journal is also enthusiastic, taking its obligatory swipe at “liberals” who want to keep God from rescuing sinners.

Here’s the way the study worked. The researchers took a group of 177 prisoners who entered the InnerChange program, and then selected the records of a group of other inmates who met the selection criteria but didn’t enter. The comparison group was selected to match the program entrants on race, age, offense type, and something called the “salient factor score” (SFS), a standard measure of recidivism risk. Then the post-release criminal behavior of the graduates of the InnerChange program was compared to that of the matched controls.

Veeeeeeerrrrrrrrryyyyyyy zzzzzzzientifick, nicht wahr?

But completely bogus. Not only were the entrants to the program a self-selected group, which means that in some important ways (such as a desire to change their lives) they weren’t actually matched to the comparison group, but it was only the graduates — 75 of the 177 entrants — who showed better behavior than the pseudo-control group. Comparing all of the entrants (including those who dropped out, were kicked out, or got early parole) to all of the comparison group, the difference in recidivism reverses: the InnerChange group was slightly more likely to be rearrested (36.2% versus 35%) and noticeably more likely to actually go back to prison (24.3% versus 20.3%).
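Taking the reported figures at face value (177 entrants, 75 graduates, overall reimprisonment of 24.3% against the controls’ 20.3%, and the press release’s claim that graduates were half as likely as controls to go back), a few lines of arithmetic show how badly the dropouts must have done. The derivation is mine; the inputs are theirs.

```python
# What the published numbers imply about the dropouts.
entrants, graduates = 177, 75
overall_rate = 0.243             # all entrants reimprisoned
control_rate = 0.203             # matched comparison group
grad_rate    = control_rate / 2  # the "half as likely" claim, at face value

reimprisoned_total = overall_rate * entrants   # about 43 people
reimprisoned_grads = grad_rate * graduates     # about 8 of them
dropout_rate = (reimprisoned_total - reimprisoned_grads) / (entrants - graduates)
print(f"implied dropout reimprisonment rate: {dropout_rate:.1%}")  # roughly 35%
```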

In other words, those who succeed, succeed, while those who fail are likely to fail. Whodathunkit?

Don’t get the impression that the Prison Fellowship is unusual in hyping its numbers this way. Most of the drug treatment literature (the stuff the people wearing the “Treatment Works” buttons keep shoving at you) works the same way, as a National Academy study of a couple of years ago rather rudely pointed out.

The self-selection problem is a really hard one for social scientists to get around: as an ethical matter, you can’t randomly assign people to receive different treatments without getting their informed consent in advance. Anyway, it’s quite plausible that even a good program will only work for the people who want it, so there’s no point in randomly assigning people who just aren’t interested.

If there are more volunteers for a given treatment than there are program slots, then you can invite people to volunteer and tell them up front that there will be a lottery to get in. But sometimes that won’t work, and you just have to match on the observables, hope the resulting distortion isn’t too great, and tell your readers to be cautious in interpreting your results.

But there’s no excuse for cherry-picking by comparing those who make it through a program with a group matched to all of those entering the program. That’s just cheating. The only legitimate way to analyze the data is to keep everyone selected for the program in the study, regardless of how long they stay in the program. (This approach is called “intention-to-treat” analysis, a carry-over from its roots in medical-outcomes research.)

Nor is there any excuse for reporters regurgitating this pap without checking with the people who know better. (Finding someone who hates the program on ideological grounds to describe the findings as “junk science,” as the Religion News Service did, doesn’t count.)

“So how,” I hear you ask, “does anyone get away with this shell game?” The answer is the same as for any sort of bamboozlement: it only works on people who, at some level, want what you’re trying to convince them of to be true. As Machiavelli didn’t quite say (but one of his translators said in his name), “men are so simple, and so driven by their needs, that whoever wishes to deceive will find another who wishes to be deceived” (The Prince, Chapter XVIII).

Manuel, the Cabellian anti-hero, put it more succinctly, in the Latin proverb he borrowed as the motto of his house: Mundus vult decipi: the world wants to be deceived.

Update Just for a change, I have a constructive suggestion for something you can do about it.