Depersonalized Admissions Processes?

Over at TNR’s The Plank blog, Jason Zengerle suspects a link between the Virginia Tech shooting and the university’s depersonalized admissions process. Drawing on an article in the WaPo, he argues that “the Virginia Tech massacre provides another good argument in favor of de-depersonalizing the college admissions process.” I find this a baffling argument, one that rests on a number of hidden assumptions:

a) Interviews and letters of recommendation would have identified the shooter’s unhinged psyche.

b) The process that did so would not have produced a large number of false positives, leading people who weren’t going to do anything worse than sulk in the back of class to be rejected.

c) Even setting (b) aside, the argument assumes that the shooter wouldn’t simply have received admission to some other university, where he would have done the same thing.

It may very well be that there are some identifiable markers of a tendency toward this sort of violence. My guess is that, as (b) suggests, the problem with these indicators is that they are overbroad–the same factor that might have identified the shooter would also have flagged a large number of people who posed no danger to anyone. But the bigger point is that this only really works if every academic institution does the same thing, all the way down to the VA community college system. Even in the best case, hiring a ton of admissions officers would only have meant that the shooter killed people at VCU or Albemarle Community College rather than at Virginia Tech. Maintaining a large admissions apparatus may make sense for a quasi-privatized institution like UVA–just a step down from there, the costs rapidly become prohibitive, without adding much if any value.

If anything might have prevented this, it would have been closer supervision of students once they were on campus–trying to stop this by more careful selection of students is an obvious non-solution.

UPDATE

I received a very smart response from one of our top-flight readers earlier today. I think our blog community will find it of interest:

The problem you identify—the “indicators are overbroad”—seems to be a pervasive and important one in public policy, but I never hear people talk about it in consistent terms. In nuclear physics, we always talk about this in terms of “signal efficiency” and “background rejection” of a “cut”. You’ve got some mixed dataset (say: the hugely-common background muons, and the rare signal neutrinos, in a neutrino detector), and you want to make sure that the muon events don’t make it into the neutrino machinery. The two parameters you study are the “background rejection efficiency”, and the “signal acceptance”; you try to find some proxy-variable which will give you a “yes/no” answer on whether to sort any given event into the muon machinery or the neutrino machinery. And you want to choose that proxy in such a way that you maximize the signal acceptance *and* the background rejection.

If there’s one thing I wish I could communicate to policymakers, it’s the following idea: In the absence of extra information, you can’t improve the rejection without hurting the acceptance. This is why you can’t design a tight but hassle-free airline-security checkpoint, or a free-rider-less welfare system, or a fair-to-everyone Alternative Minimum Tax, or a fair/safe way of handling mentally ill students. They’re all systems based on some continuous variable (“suspiciousness”, “deservingness”, “income”, and “dangerousness”), which is a crude proxy for the real issue at hand (“wants to crash planes”, “ought to be working”, “overexploits deductions”, and “will hurt self/others”). Any time you take action based on the proxy, you’ll make *both* kinds of mistakes; you’ll have non-100% rejection and non-100% efficiency. For example, at an airport checkpoint, sometimes a scruffy backpacker will trigger a higher degree of suspiciousness than a well-groomed terrorist. Therefore, telling me horror stories about “this backpacker got super-harassed” and “this terrorist strolled right through” tells me *nothing* about whether our security is too high or too low; it doesn’t even tell me that the screeners screwed up. It only tells me that the current “suspiciousness” criterion isn’t perfect. It tells me that “real terrorists” have a range of “suspiciousnesses”; that “non-terrorists” also have a range; and that these ranges overlap. Nevertheless, these sorts of anecdotes seem to drive a lot of public policy choices, at least at the level I can see.

I don’t think that the public even has reasonable language in which to discuss these effects; I am curious to know how policymakers address them. I know epidemiologists use “errors of the first kind” and “errors of the second kind” and such, but no one will ever plow through a newspaper article or a blog post peppered with those terms. The high-energy physics language of “cuts” and “efficiencies” isn’t the best option, either. Is it a lost cause to hope that the public could think about issues this way? Do we just need better language and framing?
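To make the reader’s trade-off concrete, here is a minimal numerical sketch. It is my own illustration, not the reader’s: the Gaussian “suspiciousness” scores, their parameters, and the population sizes are all invented for the example. Sweeping the cut shows that, with no extra information, any gain in background rejection comes at the cost of signal acceptance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: a continuous "suspiciousness" score for two populations.
# The background (harmless people) and the signal (genuinely dangerous people) overlap.
background = rng.normal(loc=0.0, scale=1.0, size=100_000)   # harmless
signal     = rng.normal(loc=1.5, scale=1.0, size=1_000)     # dangerous

for cut in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]:
    signal_acceptance    = np.mean(signal > cut)        # dangerous cases we actually flag
    background_rejection = np.mean(background <= cut)   # harmless cases we correctly pass
    print(f"cut={cut:4.1f}  signal acceptance={signal_acceptance:5.1%}  "
          f"background rejection={background_rejection:5.1%}")
```

Raising the cut improves background rejection but lowers signal acceptance, and vice versa; only a better proxy variable–extra information–moves both in the right direction.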

DOUBLE REVERSE UPDATE

The great, sainted, all-knowing, all-seeing Andy Sabl responds to my post (and the update above) thus:

You’re right about admissions procedures. Want to get rid of similar massacres? Ban similar guns. The Economist made, as it always makes, a very simple argument: there are people just as crazy in Britain, but no massacres, because no personal guns. (It used to be that one was many times, maybe three times, as likely to get into a fight in a bar in Britain as in the U.S.–and about ten times as likely to get killed in the U.S., because barfighters in the U.K. had knives at most.)

On the bigger point: your physicist friend is right on the analysis but seems overly pessimistic regarding pedagogical frames. What he’s talking about is of course “Type I” vs. “Type II” errors (not to mention the common “Type III” error, which is forgetting which of those is which), which are mentioned in every intro probability and statistics course, often with the rather apt (because counter-intuitive) case of why testing a low-risk population for AIDS is an awful idea. (The physicist’s solution of “extra information,” known in statistics as “increase the sample size,” is also commonly taught, though I don’t think it applies to AIDS testing.)

This case brings up probably the most common and most compelling frame, because it lies within people’s everyday experience: “false positives” vs. “false negatives” in medical tests.

Granted, most people have never taken statistics, and I’m sure students who hated statistics are overrepresented among those who become journalists. Still, the frames are there if anyone cares to use them. (“If you think we’re good at profiling terrorists, consider this: would we want to use a pregnancy test that was right as much as 90 percent of the time and the other 10 percent was wrong *either way*? No? Then why trust such things regarding terrorism, when the stakes are even higher?”)
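Andy’s pregnancy-test framing invites a back-of-the-envelope Bayes calculation. The 90 percent figure is his; the prevalences below are my own illustrative guesses, chosen only to show how a rare condition turns an “accurate” test into a false-positive machine:

```python
# Back-of-the-envelope base-rate arithmetic (illustrative numbers only).
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(actually positive | test says positive), by Bayes' rule."""
    true_pos  = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A pregnancy test that's right 90% of the time, used where (say)
# 1 in 5 people tested really is pregnant:
print(positive_predictive_value(0.20, 0.90, 0.90))   # ~0.69

# The same 90%-accurate screen hunting for terrorists, who might be
# (say) 1 in 100,000 of the screened population:
print(positive_predictive_value(1e-5, 0.90, 0.90))   # ~0.00009
```

When the thing you’re screening for is rare, nearly every positive is a false alarm, which is exactly why screening a low-risk population–for AIDS or for terrorism–goes wrong even when the test itself sounds impressively accurate.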

Author: Steven M. Teles

Steven Teles is a Visiting Fellow at the Yale Center for the Study of American Politics. He is the author of Whose Welfare? AFDC and Elite Politics (University Press of Kansas), and co-editor of Ethnicity, Social Mobility and Public Policy (Cambridge). He is currently completing a book on the evolution of the conservative legal movement, co-editing a book on conservatism and American Political Development, and beginning a project on integrating political analysis into policy analysis. He has also written journal articles and book chapters on international free market think tanks, normative issues in policy analysis, pensions and affirmative action policy in Britain, US-China policy and federalism. He has taught at Brandeis, Boston University, Holy Cross, and Hamilton colleges, and been a research fellow at Harvard, Princeton and the University of London.