Knowledge is hard; is prejudice better?

Social-scientific knowledge is hard to come by; that doesn’t mean that the “common-sense” prejudice embedded in existing institutions somehow incorporates superior wisdom.

Jim Manzi points out that social-scientific knowledge is hard to come by: regression analyses often aren’t supported by controlled trials, and even controlled-trial results require replication. From this he concludes:

We should be very skeptical of claims for the effectiveness of new, counterintuitive programs and policies, and we should be reluctant to trump the trial-and-error process of social evolution in matters of economics or social policy … we need to keep stumbling forward with trial-and-error learning as best we can.

I’m sorry, but this is incoherent. What is this magical “trial-and-error process” that does what scientific inquiry can’t do? On what basis are we to determine whether a given trial led to successful or unsuccessful results? Uncontrolled before-and-after analysis, with its vulnerability to regression toward the mean? And where is the mystical “social evolution” that somehow leads fit policies to survive while killing off the unfit?
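Regression toward the mean, mentioned above, is worth pausing on: sites or programs selected precisely because they just posted bad numbers will tend to post better numbers next period even if nothing at all is done. A minimal simulation (illustrative only; the site counts, rates, and noise levels here are invented for the example) makes the point:

```python
import random

random.seed(0)

# Each "site" has a stable true crime rate plus year-to-year noise.
true_rates = [random.gauss(100, 10) for _ in range(1000)]
year1 = [t + random.gauss(0, 20) for t in true_rates]
year2 = [t + random.gauss(0, 20) for t in true_rates]  # no intervention anywhere

# Select the 100 worst-looking sites in year 1 -- exactly the ones an
# uncontrolled before-and-after evaluation would "treat".
worst = sorted(range(1000), key=lambda i: year1[i], reverse=True)[:100]

before = sum(year1[i] for i in worst) / len(worst)
after = sum(year2[i] for i in worst) / len(worst)

# The selected sites "improve" even though nothing was done to them.
print(f"mean before: {before:.1f}  mean after: {after:.1f}")
```

A before-and-after comparison would credit the “trial” with the entire drop; a control group of equally bad-looking untreated sites would show the same drop and expose it as noise.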

Without any social-scientific basis at all (unless you count Gary Becker’s speculations) we managed to expand incarceration by 500 percent between 1975 and the present. Is that fact – the resultant of a complicated interplay of political, bureaucratic, and professional forces – to be accepted as evidence that mass incarceration is a good policy, and the “counter-intuitive” finding that, past a given point, expanding incarceration tends, on balance, to increase crime be ignored because it’s merely social science? Should the widespread belief, implemented in policy, that only formal treatment cures substance abuse cause us to ignore the evidence to the contrary provided by both naturalistic studies and the finding of the HOPE randomized controlled trial that consistent sanctions can reliably extinguish drug-using behavior even among chronic criminally-active substance abusers?

For some reason he doesn’t specify, Manzi regards negative trial results as dispositive evidence that social innovators are silly people who don’t understand “causal density.” So he accepts – as well he should – the “counter-intuitive” result that juvenile boot camps were a bad idea. But why are those negative results so much more impressive than the finding that raising offenders’ reading scores tends to reduce their future criminality?

Surely Manzi is right to call for methodological humility and catholicism; social knowledge does not begin and end with regressions and controlled trials. But the notion that prejudices embedded in policies reflect some sort of evolutionary result, and therefore deserve our respect when they conflict with the results of careful study, really can’t be taken seriously.

Author: Mark Kleiman

Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches the methods of policy analysis as applied to drug abuse control and crime control policy, working out the implications of two principles: that swift and certain sanctions don't have to be severe to be effective, and that well-designed threats usually don't have to be carried out. Books: Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken); When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the "books of the year" by The Economist); Against Excess: Drug Policy for Results (Basic, 1993); Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989). Contact: Markarkleiman-at-gmail.com

15 thoughts on “Knowledge is hard; is prejudice better?”

  1. I'm not sure if Mark and Manzi aren't talking past each other.

    I can read Manzi's quote as saying one of two things. It could mean "Burkean skepticism is inconsistent with social science." This is wrong, for all the reasons Mark says it is. But it could also mean: "We should have Burkean skepticism toward our social-science learning." This is a reasonable statement, and I'm not sure it contradicts anything Mark said. The prejudices of–say–a Bismarck are quite likely to be better than the social science of his day. Burke's prejudices were certainly better than the product of the philosophes. Not to mention Roosevelt's, which were far better than those of the cream of the economics profession of his day.

    Since I believe that Jim Manzi is an honest and intelligent man (albeit often a misguided one), I prefer the more charitable interpretation. (But I still don't understand the point he was trying to make with Capital One.)

  2. My interpretation of Manzi's conclusions is that, at best, bold "scientific" approaches to social science (and policy) research barely outperform good old-fashioned, long-term, trial-and-error social science/policy improvements (when they're actually improvements). Scientifically-oriented models (such as Rational Choice models/theories) can help us to better understand the hyper-complex modern world, but usually only a slice of it at a time, and we're still left with immense blind spots any time we devise a policy approach.

    The main difference is that social SCIENCE is power-hungry and tends to make outsized promises on which it cannot deliver. At its worst, it potentially leads to scientistic rationalizations for tragic ends-justify-the-means human-engineering projects; but usually it just delivers benign but ineffective results at very high cost. The underlying point Manzi is suggesting (or hinting at), beyond the limitations of science/experimentation in the "softer" sciences and particularly the social sciences, is that we can carry out successful "moon shot" projects when it comes to the big "harder" scientific challenges, like building an A-bomb, sending a man to the moon, the Brooklyn Bridge, even medical breakthroughs, etc. But we can neither overcome the limitations Manzi described nor accelerate human evolution, which from our perspective proceeds at a glacial pace.

    No, the "trial and error" process is not magical, as you suggest. Sorry to break it to the "Reality Based Community," but there are no magical solutions.

  3. Patrick, maybe you can make clear to me what non-scientific approach allows you to tell which innovations succeeded and which failed. The list of things known to "common sense" but scientifically demonstrated to be false is really impressively large.

  4. "Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs.

    "The missing ingredient is controlled experimentation, which is what allows science positively to settle certain kinds of debates."

    No, the missing ingredient is that social problems are exponentially more complex than those of the hard sciences. The number of factors involved is enormous, the studies take forever, and even then the policy responses must be carried out correctly, all while scrambling for funding and dodging the politics of it all. But the fact remains that there are any number of wildly successful social programs that are evidence-based and driven by strong theoretical frameworks built on decades of solid research. Social transformation is one of the most difficult problems to solve!

    Putting a man on the moon is child's play when compared with ending generational poverty, closing the achievement gap, gender equality in the workforce, or bringing meaningful reform to the criminal justice system.

  5. Mark, it's probably easier if we arbitrarily break it into two topics for sake of discussion . . .

    1. Policy/program planning, analysis, evaluation, testing: Manzi was not suggesting (and I wouldn't either) that we should, in the year 2010, revert to "non-scientific" approaches to analysis/evaluation, as if it were a matter of scientific v. non-scientific. Even if/when we understand the profound limitations of what social-scientific research can generate, we benefit by including a variety of approaches and methods, including methods informed by the "harder" sciences and by approaching the analysis with certain mindsets, disciplines, etc., for which we are indebted to the pursuit of scientific truth.

    However, the more overzealous social SCIENTISTS will sometimes veer toward making scientistic claims about their models, which are supposed to be able to predict all sorts of things they cannot really predict. In reality, when the models fail to deliver on their outsized promises, we still have to grind things out the old-fashioned way: yes, use models, but also conduct a range of quantitative and qualitative analyses, rely on analysts' and policymakers' intuitions, listen to feedback from stakeholders, place the policy analysis within historical context, etc.

    After all that, we'll still have serious doubts about what did and didn't work (assuming that we are keeping things in proper perspective), but that's the point: we need to maintain a healthy modesty about what policy analysis/evaluation can actually accomplish. Maybe then we'll avoid revamping policy instruments based on the outcomes of one allegedly innovative social "scientific" study that was anything but conclusive. Maybe we'll be more sensible about how the high-tech models fit into the overall analytical toolkit. And, most importantly, maybe we'll be a little less apt to engage in social engineering.

    2. Which brings us to the other topic. Rather than rehashing what I wrote in comment #2 above, I'll quickly reference Eli's comment #4 above: "Putting a man on the moon is child’s play when compared with ending generational poverty, closing the achievement gap, gender equality in the workforce, or bringing meaningful reform to the criminal justice system." I agree with Eli, and yet I get the sense that Eli is not being ironic about it; i.e., even though he realizes that social engineering is far, far more difficult to pull off than, say, sending a man to Mars (we already did the moon), he's still going to go ahead and try to close the achievement gap and create gender equity in the workplace, even if the subjects are not fully interested in cooperating in the experiment.

  6. Quick follow-up: in my points about the limitations of social scientific research, I ended up concentrating too much on models, whereas Manzi's article deals more with social scientific experiments.

    But the same principle applies: the dream that social scientific models and/or experiments could overcome, bypass, etc., the age-old limitations is not much closer to being realized than it was 20 – 30 years ago. By all means, let's keep trying to improve those models and experiments, and learn from these approaches, but with full appreciation that the human condition simply does not lend itself to short cuts or universalized theories.

  7. Mark,

    "Without any social-scientific basis at all (unless you count Gary Becker’s speculations) we managed to expand incarceration by 500 percent between 1975 and the present".

    Do you mean that in the 1970s and 1980s there was no empirical work following up on Becker's deterrence theory or on Avi-Itzhak and Shinnar's incapacitation model? Or that the empirical work on deterrence usually didn't find big negative elasticities, and the empirical work on incapacitation didn't find big effects? Maybe you are claiming that although such work did exist, and provided a social-scientific argument for mass incarceration, reviews by the National Academies later came along and correctly found the work flawed. But that's not an objection to Manzi's argument. It is his argument.

    You also say that Manzi's belief in a "trial and error process" of social evolution is "incoherent." Hayek, Lindblom, and even Mill disagree. Maybe they're wrong, but it seems a little rich to say the thought "really can’t be taken seriously".

  8. Patrick, I think it should also be noted that social research and corresponding policy are relatively new phenomena. The government has never been very interested in social engineering because (a) it didn't think it was necessary and (b) it didn't think it had the means. There was a dramatic shift half a century ago with the civil rights movement and a building accumulation of social research, and we've been making serious attempts at both ever since.

    However, there have been many failures along the way. Things were assumed that should not have been. Welfare and multi-story housing projects are just two that come to mind. The intention was good (equitable housing and alleviation of poverty), but the understandable reaction to racism and inequality blinded us to the reality that just giving people the financial means doesn't equate to giving them the social means.

    I'm reminded of an important study that was done – with Great Society funding – that looked at vocabulary and cognitive development among various SES groups (Hart & Risley). It was prepared in the '70s, took place in the '80s, and finally published in the '90s. What it showed, with very specific data, was how powerful the parenting effect was in early childhood. Throughout this time, Head Start was in operation – starting, I think, in 1965. Interestingly, though, there was little good data on child cognition and language development in relation to parent SES. So you had policy and research developing in tandem.

    Today, there are programs that take advantage of this and other research, very evidence-based, that show incredible results. Yet still, as a matter of public awareness, we seem to be in the dark ages with regard to human, and certainly child, development. This severely hamstrings social policy. Many people, unaware of the research and going by either ideology or common sense, just sort of make up answers to social questions.

    Sometimes trial-and-error is all you can do, but it seems more often than not that what we're really dealing with is ideological opposition disguised as skepticism. There are just too many gut-level emotional responses based in (often unconscious) philosophical assumptions that have never been anywhere near scientific data, and yet are "clung" to with unmatched fervor.

  9. Eli, excellent comments, even if we're coming from different perspectives. One quick point/question in response to your last paragraph:

    Even if we assume that ideological opposition and/or gut-level emotional responses effectively stifle a large number and variety of potential policy innovations generated from social-scientific research, wouldn't we expect to see a certain number (albeit relatively low) of model-driven or experimentally based innovations that manage to break through the filters and, by virtue of being truly transcendent solutions to seemingly intractable problems, ultimately rise to the top? Perhaps you have an example or two to cite. Manzi had a tough time finding ANY good examples (he claims to have found one halfway-decent example in criminology).

    In a social democracy, the temptation to heed the siren's call often leads to counterproductive and costly interventions. These interventions tend to be relatively benign; they are almost always well-intentioned (although they end up creating, layer upon layer, spider webs of unintended consequences); they help us to better understand the world even as they fail; and the general public usually ends up demanding the policy instruments generated from these interventions, especially once they're already in place. However, history is replete with examples of the siren's temptation leading all the way to a violent crashing against the rocks. I don't agree with you that the American system has been slow to get engaged in social engineering – although maybe so in comparison to other social democracies – but, to torture another metaphor, I also don't wish to throw the baby out with the bathwater.

    I'd never suggest that social scientists should aim low or stop trying to make the world a better place. However, I do think that a very serious, deep reflection on why it would be vastly more feasible to send a man to Mars than it would be to end homelessness in the U.S. is long overdue, and not because we should launch a Mars project (we probably shouldn't). Sure, every 10 years or so, social scientists revive this discussion for about as long as it takes to drink a cup of coffee, and most of them even say many of the right things, but then it's back to business.

  10. Patrick,

    I don't know who you are attacking. If you are attacking social scientists for being over-prone to what Oakeshott called "rationalism," I'm with you.

    But it's easy to catch a snail, and not very significant. The big game is actual policy, as implemented by folk with real-world power. You see a lot less rationalism here. And, fwiw, most of the rationalism of the last 30 years has been on the Right: i.e., Mister Market knows all.

    Eli,

    Social engineering on a grand scale has been around since the birth of the Republic. The Bank of the United States. Canal privileges. Incarceration as a punishment. Universal male suffrage. Immigration policy. Public education. Railroad subventions (including grants of eminent domain powers to private corporations.) Slavery, and anti-slavery. Laissez-faire. The Progressive achievements: the FTC, the Federal Reserve, and railroad regulation. (Not to mention whatever state-level Progressivism that survived Lochner.) The women's vote. Prohibition. Some successes; some failures; some mixed results.

    And this is all pre-New Deal stuff: mostly pre-Sixteenth Amendment. All of it was influenced by the social science of the day; some was even driven by it.

  11. Social science has discovered a lot. Unfortunately, it mostly turns out to be knowledge that most social scientists didn't care to know, such as:

    – IQ matters

    – Race matters

    – Sex matters

    – Class matters

    – Discipline matters

    – Genes matter

    and so forth and so on.
