What would happen if we started to hold corrections officials — prison wardens and the managers of probation agencies — accountable for the proportion of their alumni or clients who commit new crimes? I speculate on that question in the latest issue of Blueprint. The piece refers to Dukenfield’s Law; more about that here.


Sasha Volokh tends to agree, though with more stress on the virtues of privatization. His article lays out the requirements of a good accountability system, but without directly addressing the question of how to get one adopted in the face of contractors’ political power. (He does point out that public-employee unions — in this case the correctional officers — can pose a comparably serious problem.)

Now that we have some Very Rich People apparently bound for prison, I wonder if anyone has addressed the question of what to do when an inmate (or his friends and relations) acquires a big block of stock in the prison company? It’s not far-fetched, when you consider that a major motivation for membership on hospital boards of directors is reported to be the special treatment board membership ensures.

End of the Drug War?


William Burton posts a moving account of his father’s struggle with alcohol, and concludes that, as awful as alcoholism was for his father and the family, it would have been much worse had alcohol been illegal and expensive. From this no doubt correct premise, Burton draws the (I think) invalid inference that the drug laws should be repealed. “It’s time to treat drug abuse like what it is, a personal and family tragedy that isn’t a criminal justice issue.”

The missing premise in that inference is that everyone who would be addicted to (e.g.) cocaine, if cocaine were legally available, is addicted (to cocaine or something else) now. But that hardly seems plausible. The United States has perhaps two million heavy cocaine users, and perhaps 15 million people with drinking problems. There’s no reason to think that alcohol is either more fun than cocaine or more addictive. That suggests to me that making a drug legally available, or at least making it commercially available, is a population-level risk factor for addiction.
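The comparison behind that paragraph is easy to make explicit. A minimal sketch, using the rough figures cited above plus an assumed adult population of about 200 million (the population figure is my assumption, supplied only for scale):

```python
# Back-of-the-envelope check of the population-level argument above.
# The user counts are the rough estimates cited in the text; the
# adult-population figure is an assumption for illustration.

US_ADULTS = 200_000_000          # assumed, for scale
HEAVY_COCAINE_USERS = 2_000_000  # cited estimate (cocaine: illegal)
PROBLEM_DRINKERS = 15_000_000    # cited estimate (alcohol: legal, commercial)

cocaine_rate = HEAVY_COCAINE_USERS / US_ADULTS
alcohol_rate = PROBLEM_DRINKERS / US_ADULTS

print(f"heavy cocaine use: {cocaine_rate:.1%} of adults")
print(f"problem drinking:  {alcohol_rate:.1%} of adults")
print(f"ratio (alcohol/cocaine): {alcohol_rate / cocaine_rate:.1f}x")
```

On these numbers the problem-use rate for the legal drug is several times that of the illegal one, which is the sense in which legal (and especially commercial) availability looks like a population-level risk factor.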

The task of reducing the damage done by our current drug laws and related policies requires detailed analysis on a drug-by-drug, policy-by-policy basis. The slogan “End the drug war” is no more likely to be a useful guide to action than the slogan “A drug-free society.”

Here’s a brief general statement of principles that might guide more sensible drug policies.


William Burton provides a lengthy, thoughtful response, filling in the two key premises: that the overall level of addiction is more or less fixed, with laws and other social conditions simply determining which drugs will be abused, and that the external costs of prohibition are much larger than the external costs of drug abuse itself. Granted these premises, the conclusion that the drug laws should be repealed follows; Burton proposes a non-commercial “state store” model as opposed to commercialization, which would tend to reduce the damage somewhat.

I don’t see any strong evidence in favor of the “natural level of addiction” theory. Moreover, while substitution among drugs is an important fact that tends to reduce the value of drug controls, complementarity is an equally important fact. Increased availability of cocaine would tend to increase the rate of alcohol abuse. As to external costs, alcohol is involved in about half the homicides and about a third of the highway fatalities in this country. And its legality doesn’t even keep the criminal justice system out of the problem: ignoring crimes committed under the influence, violations of the alcohol laws — mostly drunken driving and drunk-and-disorderly — account for many more arrests than violations of all the controlled substances laws combined, though not as much prison time.


My note on the ethics of debating foreign policy drew lots of comment (e.g., from Max Sawicky), but every time I got ready to reply some new event intervened. First the Iraqis said they’d let the inspectors in, which made the strategy of threatening war look good. Then the Bushies rejected the offer out of hand, which made it look as if they were more interested in going to war than in using the threat of war to strip Iraq of its weapons of mass destruction and its capacity to produce more.

Then the Iraqis “clarified” their position almost to death, for example by claiming an exemption for SH’s presidential compounds, which apparently means a lot more than the equivalent of the White House plus Camp David. On the one hand, that tended to confirm the wisdom of the initial Administration reaction to the offer; on the other, it suggested that that reaction might have been a diplomatic mistake, since it allowed Iraq to deflect condemnation for its weaseling with the reasonable argument that the Bushies’ bloodlust was so intense that war was inevitable whatever Iraq did or didn’t agree to. (If there’s going to be a war anyway, why should Iraq open itself to an inspections effort with clear tactical intelligence utility to any invader? There seems to be no doubt that US military intelligence made what it could of the opportunities offered by the inspections process in the 1990s.)

The alternative would have been a statement that said, “We’re delighted Iraq says it’s finally prepared to do the right thing, but since its leadership is a bunch of damned liars we only half believe it. In order to satisfy our need to ensure that the SH regime is not accumulating WMD’s, any inspections program must include [X, Y, Z]. Secretary Powell has been on the phone with the Secretary-General and asked him to determine, by next week, whether the UN is prepared to execute, and Iraq is prepared to accept, an inspections program on those terms, which would preclude the need for military action.”

Preclude the need for military action: ay, there’s the rub. Just as many in the peace camp seem willing to accept a nuclear-armed Iraq if the only alternative is war, the true warhawks seem determined to have their war even if Iraq’s WMD capacity could be eliminated without it. Maybe that’s not right, but if it’s not I’d like to hear the President say so in so many words, and say why. Even if explicitness on this point is too much to ask for from official sources, I’d be very interested in a clear statement from one or more of the warbloggers on this point. [Glenn? Eugene?]

No great fan of “international law” in the absence of international cops, I’m fairly comfortable morally (my operational concerns are a different problem) with a pre-emptive strike to keep a regime with a track record of aggression from getting The Bomb or its biological equivalent. [I’ve never been persuaded that poison gas belongs in the same category.]

I’m even comfortable with “regime change” as an objective, when the regime is awful enough, when there’s popular support in the target country, and when the change promises to come relatively cheap in blood and treasure; at least, that’s what I thought about Haiti and Cambodia and South Africa, and still think about Burma (why say “Myanmar”?). But I can’t really see that Iraq’s aggression of a decade ago, plus its truly awful domestic record, would justify a bloody war to force a change of rulers IF (a huge if) we could be assured that he wasn’t getting ready to nuke Tel Aviv or hit New York with Ebola virus.

I’m not going to guess about the domestic politics of this: at the moment, the Bushies seem to be winning, and that has to be the safest forecast about the outcome. But this CBS News Poll suggests that the population at large may be a little less war-happy, and a little more inclined to respect the UN, than the Administration, or the Washington Post. (Oddly, it doesn’t ask the question, “If a Senator from your state were to vote for a less sweeping grant of war powers than President Bush has requested, would that make you more likely to vote for that Senator, less likely to vote for that Senator, or have no effect either way?”) The fact that Gore, a Gulf War hawk, just came down fairly clearly on the dove side may or may not mean anything, either as an indicator or in terms of whatever influence he might still have. But what’s Jack Kemp doing in the anti-war camp? (Maybe somebody finally told him that the military was part of the government.) And three retired four-star generals, including Clark and Shalikashvili? Still, the polls are starting to look somewhat better for Bush and lousy for the Congressional Democrats, and the Iowa market agrees.

Whatever the domestic shake-out, it does seem to me that the perception that the Administration is insisting on war, rather than being dragged into it by Iraqi intransigence, has to be a problem as it tries to gather support internationally.

[Footnote: Bush & Co. just openly — and, as it turned out, unsuccessfully — tried to intervene in Germany’s elections. After that didn’t work, they publicly insulted the newly re-elected German government. Don’t I recall a Presidential candidate talking about the value of humility in international relations?]

Michael Walzer has some penetrating things to say in the New Republic. (Not nearly as much fun as listening to him talk about Hobbes, but way above the general run of commentary.) He’s prepared for war, but only as a last resort, and harshly critical both of the Bushies’ enthusiasm for invading and of the European (especially Franco-Russian) unwillingness to push real inspections, both in the 1990s and now. He concludes:

So we may yet face the hardest political question: What ought to be done when what ought to be done is not going to be done? But we shouldn’t be too quick to answer that question. If the dithering and delay go on and on — if the inspectors don’t return or if they return but can’t work effectively; if the threat of enforcement is not made credible; and if our allies are unwilling to act — then many of us will probably end up, very reluctantly, supporting the war the Bush administration seems so eager to fight. Right now, however, there are other things to do, and there is still time to do them. The administration’s war is neither just nor necessary.

Paying for pharmaceutical development

Brad DeLong wonders, reasonably, why people hate drug companies so much. He cites a Wall Street Journal/NBC News poll showing the industry with a favorable/unfavorable ratio of 21%/54%, which probably puts it in a tie with Gary Condit. After all, despite the companies’ marketing shenanigans and apparently excessive rates of return, their products do in fact prevent many early deaths (mine, for example, when I had Hodgkin’s lymphoma a couple of years ago) and improve the quality of many lives.

The root of the problem is, I submit, pricing. Pharmaceuticals are priced in ways that bear no relationship to their marginal cost: the actual expense of making one more dose, once the drug has been developed and approved. The public is aware of that, aware of the way in which the pharmaceutical industry tries to twist the political process in support of above-marginal-cost pricing, and not conversant with, or not convinced by, the economic analysis of nonrival-consumption-goods pricing.

How to pay for creating things that are, once created, very cheap to make in large numbers is a widespread problem. Given its growing importance, it probably deserves more attention than the standard intro micro text gives it. If the goods are priced way above marginal cost to allow a recapture of the initial investment, the result is that some consumers will choose to, or be forced to, forgo them. That forgone benefit represents sheer deadweight loss, or what non-economists call “waste.” That’s as true of a song or a software program or a microprocessor as it is of an AIDS drug, but the problem gets more exciting when the forgone benefit is staying alive.
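The deadweight-loss point can be made concrete with a toy demand curve. A minimal sketch, with every number invented for illustration and no claim to resemble any real drug market:

```python
# Toy illustration of the deadweight loss described above, using a
# hypothetical linear demand curve. All numbers are invented.

MARGINAL_COST = 1.0   # cost of making one more dose
PATENT_PRICE = 10.0   # price charged under patent protection

def quantity_demanded(price, max_price=20.0, max_quantity=1000.0):
    """Linear demand: quantity falls from max_quantity at a price of
    zero down to nothing at max_price."""
    return max(0.0, max_quantity * (1 - price / max_price))

q_patent = quantity_demanded(PATENT_PRICE)
q_marginal = quantity_demanded(MARGINAL_COST)

# Consumers priced out when the drug sells above marginal cost:
foregone = q_marginal - q_patent

# Deadweight loss: the triangle between willingness to pay and
# production cost over the foregone units.
deadweight = 0.5 * foregone * (PATENT_PRICE - MARGINAL_COST)

print(f"doses sold at patent price:  {q_patent:.0f}")
print(f"doses sold at marginal cost: {q_marginal:.0f}")
print(f"doses foregone:              {foregone:.0f}")
print(f"deadweight loss:             {deadweight:.1f}")
```

Every foregone dose was worth more to some buyer than it cost to make; that gap, summed over the buyers priced out, is the “waste” the paragraph above describes.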

Owners of intellectual property rights like to say that unauthorized use of their property is theft, just like shoplifting a steak from a grocery store. But of course that isn’t exactly right; the stolen steak can’t be sold to another customer, and costs the store something to replace. That’s “rival consumption” in micro-speak. But listening to a song is “non-rival”: I can listen as often as I like and leave no less song for you to hear. Thus the “theft” of a song by file transfer costs the record company only the opportunity to extract money from that consumer at that moment, and if the consumer is one who wouldn’t have paid the asking price then the company doesn’t actually lose anything: it’s no worse off than if Napster had never existed, while the pirate consumer is better off. [Glenn Reynolds offers Brian Briggs’s funny analogy-by-parody.] You don’t have to be able to define the term “Pareto principle” to have a hard time understanding what’s wrong with an act that makes someone better off and no one worse off.

Some of the practices Big Pharma engages in to protect its revenue stream make it look especially bad: trying to buy special legislation to extend individual drug patents, as in the notorious instance of Claritin; fighting to slow down FDA approval of generics; opposing the cross-border trade that would defeat the “market segmentation” strategy that loads most of the drug-development cost on American consumers. (Jagdish Bhagwati makes a convincing argument that, in this instance, what hurts US consumers helps developing-country consumers. But imports from Canada are another matter.)

[Footnote: All right-thinking people are horrified by the very large proportion of US Gross Domestic Product that goes for health care. But much of that expenditure goes to pay for the costs of the very rapid rate of biomedical progress, not just in pharmaceuticals, but in medical devices, imaging technology, and health-care-delivery techniques as well. That progress benefits both future generations and the rest of the world. So a careful accounting would chalk only part of that expenditure to current consumption; some of it is investment, and some of it a kind of foreign aid. Arguably, there’s nothing wrong with the richest country in the world providing, and paying for, much if not most of the world’s biomedical progress, but there’s no obvious reason the costs involved ought to be borne by sick people and employers who provide health insurance, rather than by taxpayers generally. Oh, sorry, I forgot: Taxes BAD. Profits GOOD. Right.]

There’s no very good solution to the problem of how to pay for the production of non-rival-consumption goods. The one we’ve chosen in the pharmaceutical instance — private development under patent protection — creates great production incentives but generates enormous deadweight loss, and real inequity for those unlucky enough to be both expensively sick and poorly insured. It also contributes to the health insurance death spiral, as rising premiums drive more and more people to be uninsured, which means, in effect, subsidized by the shrinking pool of people who are insured, thus driving premiums yet higher. Price controls on drugs, or reductions in patent protection, or facilitation of “arbitrage” from the countries where the drugs are sold cheap to the U.S., where they are sold dear, would reduce the deadweight loss at any given moment, but at some cost in reduced incentive for drug development.
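The death-spiral dynamic in that last point can be sketched numerically. A stylized simulation, with every parameter invented for illustration:

```python
# Stylized sketch of the insurance "death spiral" described above:
# premiums rise, the healthiest insured drop coverage, the remaining
# pool is costlier per head, so premiums rise again. All parameters
# are invented.

pool = 1_000_000.0     # people currently insured
avg_claim = 3_000.0    # average annual claims per insured person
LOAD = 1.15            # premium = LOAD * average claims (overhead)
DROP_RATE = 0.05       # fraction of the pool exiting each year
HEALTHY_FACTOR = 0.25  # leavers' claims relative to the pool average

for year in range(5):
    premium = LOAD * avg_claim
    # The healthiest members find the premium a bad deal and leave;
    # their claims run well below the pool average, so the average
    # claim of those who remain goes up.
    leavers = DROP_RATE * pool
    remaining_claims = avg_claim * pool - (HEALTHY_FACTOR * avg_claim) * leavers
    pool -= leavers
    avg_claim = remaining_claims / pool
    print(f"year {year}: premium ${premium:,.0f}, insured {pool:,.0f}")
```

Each round the pool shrinks and the premium rises, with no force in the model to stop it; that self-reinforcing loop is what makes the spiral a policy problem rather than a one-time price adjustment.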

[ANOTHER FOOTNOTE: If you’re curious about why cannabis has never been developed into a pharmaceutical, the prejudice against it, both as an abusable drug and as a crude plant material rather than a single chemical, constitutes only part of the explanation; the fact that no drug company could get patent protection for it, and therefore no drug company has any incentive to absorb the expense of sending it through the FDA wringer, is the other, and perhaps larger, part.]

One alternative to patent protection is public provision. The National Institutes of Health already pays for a considerable amount of R&D, which the drug companies get to appropriate in the drug-development process. There’s no reason in principle why NIH shouldn’t do its own drug development, hire its own consultants and lawyers to lobby its drugs through the FDA approval process, and then either produce them in-house or (more plausibly) grant non-exclusive licenses to manufacturers to produce and distribute the drugs, trusting that competition will drive prices down close to production costs.

Another approach, discussed in this Cato publication by Glennerster and Kremer, is to offer cash prizes for the development of (or even for particular milestones in the development of) drugs for specific uses: set out a set of criteria for, say, an HIV vaccine, and a process for deciding when the criteria have been met, and let institutions of all kinds compete to meet them. [Michael Gluck reviews a range of alternatives, not including prizes, here.]

The most famous use of a prize was by the British Admiralty, which offered in 1713 (and finally paid in 1773) £20,000 for the development of a method for determining a ship’s longitude. (That doesn’t sound like a huge sum, but in purchasing-power terms it would be a few million dollars in today’s money, and that was back when real incomes, as well as nominal prices, were a lot lower; £20,000 would probably have been close to the highest income earned by any individual in Britain in the year when the prize was offered.) It’s not hard to think of problems with such a scheme, starting with the fact that the reward would be for being first, putting too much emphasis on the time factor and not enough on other characteristics of the drug invented. The underlying difficulty is that neither the criterion selection process nor the prize-giving process would be a perfect substitute for evidence of actual medical utility. (Maybe the prize should be stated in terms of a subsidy payment per dose prescribed, though it’s easy to imagine problems with that, too.)

But the fact that there’s no perfect system doesn’t mean that the one we now have in place is the least bad that could be developed. And of course there’s no reason to have only one system running; private development under patent protection, public development with nonexclusive licensure, and prize competitions could exist side-by-side. But as long as drug companies charge thousands, or even tens of thousands, of dollars for drugs that cost tens or hundreds of dollars to actually manufacture, people are going to stay mad at them.


This note generated two very generous compliments, one from Matthew Yglesias, who liked the substance, and one from Demosthenes, who especially liked the concept of finding the least bad alternative.

There were also two substantive comments from experts (as I perhaps should have made clear, I claim no more than advanced amateur status on health care policy):

John Donahue of Stanford writes:

First, my understanding is that Canada uses its monopsony power as the single payer for drugs to drive down the prices of Canadian drugs to only a fraction of the American cost. Of course, this means that if Canada is paying less than its fair share of the fixed costs of drug development, then someone else is paying more than their fair share. If there is deadweight loss here perhaps the US and Canada and other countries can get together and eliminate it (or the US can start exercising monopsony power of its own). Second, if a really important drug comes along the US could always exercise the right of eminent domain to take the patent while paying fair compensation and then distribute it at marginal cost.

[For those who don’t speak Economese, “monopsony power” is the ability of a single buyer to stick it to the sellers: the opposite of monopoly. Canada is a monopsonist because it has a national health system, so there’s only one buyer for drugs in the entire country. The US has less power, because there are many payers, but the Federal government, and federally-subsidized state programs, account for a large enough share of pharmaceutical demand to have considerable market power as well. (Technically, that’s “oligopsony.”)]

UPDATE: A reader corrects this as regards Canada, which does not have a single payer for drugs.

Those are both important points, but I think John neglects what might politely be called the “political economy” of the problem. Yes, the US government could do either of those things; whether those would be good things to do is a matter for debate, and probably depends on lots of other issues and policies.

[The government could also try to acquire patent rights by negotiation, rather than by eminent domain; in situations where there are several nearly equivalent drugs, that could turn into some real high-stakes poker, with each company worried that one of its competitors will take the offer. But the fact that this would be a repeated Prisoner’s Dilemma, with a relatively narrow cast of players, suggests that the companies would probably find ways of colluding to reject the offers without actually exposing their executives to criminal anti-trust charges.]
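The repeated-game intuition in that footnote can be checked with the standard grim-trigger calculation. A minimal sketch, with all payoffs invented for illustration:

```python
# Sketch of the repeated-game logic behind the collusion point above,
# with invented payoffs. "Collude" = every company rejects the
# government's buyout offer and keeps patent profits; "defect" = one
# company takes the offer, after which (grim trigger) patent profits
# collapse for everyone.

COLLUDE = 10.0   # per-period profit while everyone rejects the offer
DEFECT = 25.0    # one-time payoff to the first company that accepts
PUNISH = 2.0     # per-period profit once collusion has broken down

def collusion_sustainable(discount: float) -> bool:
    """Grim-trigger check: is rejecting the offer forever worth more
    than defecting once and living on the punishment path?"""
    stay = COLLUDE / (1 - discount)
    deviate = DEFECT + discount * PUNISH / (1 - discount)
    return stay >= deviate

for d in (0.3, 0.6, 0.9):
    print(f"discount factor {d}: collusion holds = {collusion_sustainable(d)}")
```

With these numbers, collusion breaks down for impatient firms but holds once the discount factor is high, which is exactly why a small, long-lived cast of repeat players would find it easy to keep rejecting the offers without any explicit (prosecutable) agreement.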

But the government is hugely unlikely to do either of the things John suggests, or any of the things I suggested, for reasons of politics rather than policy. Big Pharma is a huge campaign contributor, on both sides of the aisle. It’s an election year, and everyone hates the pharmaceutical companies, and yet the House Republicans have bottled up the McCain-Schumer generics bill, which the CBO estimates would save consumers about $6 billion per year, and which passed the Senate 78-21.

(Note that no one who has to deal with real policy issues takes at all seriously the hot debate among campaign finance “reform” advocates and opponents over whether corporate and corporate-influenced contributions actually buy results: of course they do, unless there’s equally big money on the other side or intense popular interest over some specific issue.)

David Boyum, who also knows a lot more about health care policy than I do, points out that things are more complicated than my post suggested:

With pharmaceuticals, you can’t identify market failure by merely observing a gap between marginal cost and price. Why? 1) health insurance reduces the marginal cost of drugs to many consumers; and 2) from a medical and/or welfare perspective, many drugs are overconsumed, despite prices that exceed marginal cost. (This is especially obvious with antibiotics, where the externality of drug resistance knocks your nonrival argument flat on its face. Do you really think it would be better if Zithromax were sold at marginal cost?)

Bottom line: you cannot conduct your analysis on purely theoretical grounds; you need some serious cost-benefit data.

I’m generally more skeptical than you are regarding government efforts to correct market failures, so the ideas of government prizes for drug development or making NIH a pharmaceutical company strike me as foolhardy at best.

My first-order analysis wouldn’t focus on the alleged deadweight loss indicated by the gap between price and marginal cost. Instead, I’d think about getting the right level of investment and consumption across different classes of drugs. Do we currently overinvest in various classes of drugs, or underinvest? Do we overconsume or underconsume? Once those questions are answered, then you can think about how to play around with government-funded research, patents, FDA regulation (including import policy), concentrated purchasing power by government and private insurance, and so on.

All of which seems right, though of course things are even MORE complex than that: some consumers face reduced prices due to insurance, some don’t, and some insured consumers find that particular drugs aren’t covered, or are covered only after several layers of medical review, because the insurance companies are trying to avoid overprescribing or are in tugs-of-war with the drug companies over pricing. And whether the prices of antibiotics are really a barrier to their overprescription has to be an open question; the place to address that problem may be in the medical schools rather than the co-payment schedules.

That the current system has overconsumption as well as underconsumption complicates the problem, but it’s not as if the two somehow balance out: the fact that primary-care docs routinely prescribe antibiotics for viral infections doesn’t do a thing for the cancer patient whose insurer won’t pay (thousands of dollars) for the blood growth factors that speed recovery from the side-effects of chemotherapy.

All that said, David (as usual) poses the right question, which is always half the battle: what would the socially optimal pattern of drug investment and drug consumption look like, and what set of policies would, dynamically, bring us closest to that optimum?

Wouldn’t it be nice if we had a political system capable of working through the answers to such questions, instead of one where money talks and analysis walks?



Tom McGuire (the MinuteMan) has more on the Central Park story, arguing that the five men were almost certainly guilty of aggravated assault and riot. My general points about cases of actual innocence (in my earlier post on this topic) stand, but if Tom is right — and he’s certainly thorough and convincing — they probably don’t apply to this case. The good news is that the five have completed their sentences, so, whatever the underlying facts, at least we don’t have innocent people behind bars while the system spins its wheels.

McGuire also points out that the case illustrates the willingness of some people to leap to bad conclusions about cops and prosecutors, which he reasonably compares to the willingness of some cops and prosecutors to leap to bad conclusions about young black and Latino men. He then brings in the Noelle Bush case, citing a blogger who, he says, is willing to convict her of cocaine possession on no more than newspaper accounts. “So much,” quoth the MinuteMan, “for innocent until proven guilty.”

He’s certainly right that leaping to bad conclusions about other people is a human tendency we would all do well to beware of. But I think he’s wrong, though not unconventional, in moving the presumption of innocence from its proper application to criminal trials to the broader context of public debate. The presumption of innocence, like the rules of evidence, is a procedural safeguard against the overuse of the awesome power of the state to punish. It means that, when a person is accused of a crime, the state must carry a very heavy burden of proof before that person can be adjudged guilty, and therefore punishable. In particular, no one can simply be accused and then forced to prove that he is innocent. That same principle ought reasonably to apply, though with diminished force, to other circumstances where an institution wields great power over an individual: in the employment context, for example.

After the Sam Sheppard case, where relentlessly slanted newspaper coverage helped convict a probably innocent man of murder, the news media adopted practices amounting to a presumption of innocence in criminal cases: in general, respectable outlets will not say that someone is guilty unless and until there is a guilty plea or a conviction; until then, the defendant is “alleged” to have done whatever he is charged with doing. Sometimes that can be taken to fairly funny extremes, as in today’s AP headline about a woman caught on videotape beating her 4-year-old daughter: “Mother in alleged videotape beating turns herself in.” To call the incident an “alleged battery” would be correct; legally, she’s presumed innocent of the crime of battery until she pleads guilty to it or a jury finds her guilty of it. But “alleged videotape beating”? That the videotape exists, and shows a beating, are matters of fact, not allegation. Only the legal conclusion that the beating constituted an unlawful battery remains in suspense.

Saying “Noelle Bush was caught using cocaine last week” in a blog, or even a newspaper, does not inflict on her, or expose her to, any punishment whatever, though it does make people think worse of her, which is no doubt an injury. It would be charitable to hope that she is innocent of the charge, but it’s certainly not a logical presumption, and there’s no particular reason it should be a procedural presumption outside its criminal-law context. After all, we form bad opinions about other people all the time on matters that aren’t criminal at all; is it only those who break the law who should be presumed, in ordinary discourse, not to have done something bad? Bill Clinton was never tried for his activities with Monica; does that mean I have to presume, or pretend to presume, that the blue dress was stained with milk?

One of the silliest of the anti-war arguments is that we don’t have “proof” that SH is making weapons of mass destruction. Well, we surely don’t, to a criminal-law standard of proof, but so what? It seems to me the relevant legal concept isn’t the presumption of innocence or its correlative proof beyond reasonable doubt, but “probable cause,” defined as information that would lead a person of ordinary prudence to take action.

The other complaint that has been heard in the Noelle Bush case (and cases involving her cousins) is that talking about it is an invasion of privacy. Granted, she isn’t a public figure, and hasn’t (unlike a movie star) virtually asked to be gossiped about. That makes gossiping about her wrong, and the more public, the more wrong. If she were suffering from MS or depression, or had just broken up with her boyfriend, everyone else ought to shut up about it unless she wanted to talk.

But Jeb Bush is a public figure, with a set of public positions. It’s not unfair to ask whether he’s prepared to have the policies he advocates in general applied close to home, any more than it was unfair to criticize Bill Clinton, a strong advocate of public education and opponent of vouchers, for sending his own daughter to a private school. Jeb is strong on law and order. Jeb signed a bill tightening penalties for drug possession, a bill under which people who act as his daughter has apparently acted go to prison. Jeb has cut funding for publicly paid drug treatment. Does he think his daughter should go to prison? Does he want his daughter to go to prison? Does he think her substance abuse disorder should go untreated? And if he does think that his daughter should not go to prison and should receive treatment, why is someone else’s daughter different? Would he now like to reconsider that law he signed and those budget decisions he made? All those are fair questions, though of course painful for him to answer.

There’s also evidence that the treatment program where Noelle Bush was living was cutting her special slack — the call to the police came from another client, who said that this was Noelle’s fifth incident of cocaine possession — and that the administration there engaged in something that looks very much like obstruction of justice in an attempt to keep the police from being able to make a case against her. (The program is now claiming an obligation under Federal privacy law not to cooperate with the police.) That sort of stuff happens a lot when your father is Governor. Even if your father isn’t Governor, some of it happens if you’re rich (and white). And those facts, surely, are legitimate matters for public discourse.


Apparently Michael McConnell’s nomination as a federal appellate judge will be confirmed. All my law professor friends seem to think that’s the right outcome, and McConnell’s early-stated opposition to the result in Bush v. Gore suggests to me that he can’t be all bad. I think it’s both good morals and good politics for the Senate Democrats to be selective in trashing — oops! I mean considering — Bush judicial nominees. [Look here for some cross-talk on this from the liberal perspective.]

The partisan slanging match over judicial confirmations shows no sign of abating. The Republicans, having routinely denied Clinton nominees even the courtesy of hearings, are outraged — outraged! — that the Democrats are holding hearings, voting nominations down in committee, and then not sending them to the floor where Zell Miller can turn his coat and move them through. As I say, I think the McConnell confirmation will be the right result. But I also think that it ought to put to rest some of the more hysterical complaints about the Senate Democrats’ fairness. After all, McConnell had opened himself up to hostile fire, had anyone been in the mood for it.

Consider, for example, an article McConnell wrote for First Things. Depending on your view, it will appear either as a carefully nuanced, or as a disgracefully wishy-washy, account of an act of judicial lawlessness. I can see it in both lights:

Breaking the Law, Bending the Law


Eugene Volokh has three fascinating posts, starting here, on the “nucular” question, which he expands into a discussion of prescriptive vs. descriptive theories of usage. It’s all worth reading.

Several further reflections:

1. As Andy Sabl and others have pointed out, pronunciation is less standardized and more regional than diction or grammar, and the need for standardization is less. Indeed, arguably the language would be poorer without its regional accents (except for the Baltimore accent I grew up hearing, which has to die). The pronunciation “nucular” could be considered as reflecting a regional (Southern/Midwestern) accent, like “crick” for “creek.” That makes my comparison with Ebonics less precise. Moreover, as a practical matter, even an African-American accent, without any other “Ebonic” variants in usage, is more disadvantageous than a Southern accent. Just ax anybody.

Still, if “nucular” is appropriately thought of as a regionalism, then my criticism is to a considerable extent misplaced.

2. The fact that Merriam-Webster sold out to the descriptivists does not, pace Eugene, make prescriptivism obsolete. The dictionary-makers were never the decision-makers, merely the vote-counters.

The decision about what counts as standard usage in a language is made by the people who (are considered by others to) write and speak that language well. Even if all the good writers and speakers believed descriptivist dogma, their actual practice would set the standard for the language. If descriptivism made their practice more demotic, then the standards would loosen, and this could extend to the point of virtual non-existence, as was the case for orthography in much of the seventeenth and eighteenth centuries. But it’s hard to imagine that case arising for grammar or diction.

Turning around Eugene’s claim that descriptivist lexicography has made prescriptivism self-contradictory, I would argue that it’s descriptivism that isn’t really a practical option at all. Since choice is always a matter of prescription rather than description, it’s not possible to write, or edit, on a purely descriptivist basis. The rate at which a frank error becomes a new usage varies (think about “I could care less”), and not everyone will agree about when that transition has been made, but as long as something counts as an error (using “area” when you actually mean “volume,” for example, or “speed” when you mean “velocity,” or “curtain” when you mean “window blind”), then prescription lives.

Perhaps the real dispute involves the right basis for prescription. The “descriptivists” prefer a “democratic” basis, where simple frequency conveys normality; the “prescriptivists” prefer an “aristocratic” or “elitist” basis, where what is normal is defined by the “best” writers and speakers. Color me elitist.

3. Some of the edginess around this question comes from the link between “nucular” and the use of “nuke” as a verb. I bet there’s a correlation, even controlling for region, between people with “Nuke the Whales” bumper stickers and people who say “nucular.”


Kevin Drum, the CalPundit, thinks the war on terror is likely to be just as futile as the war on drugs. Peter Reuter, John Caulkins, and I, all of whom think about drug policy professionally, took a look at that analogy a few months ago and weren’t convinced. In particular, drug dealers have customers, and are therefore likely to be replaced when enforcement puts them out of action; the funding mechanisms for terrorism are different, and replacement not nearly so automatic. The good news about 9-11 was that people in the know were saying “Al-Qaeda” that morning, before any specific facts were available; apparently there isn’t a second group that the experts thought of as willing and able to carry out such an operation, and it’s not at all obvious that, if we were able to really dismantle Al-Qaeda, a replacement would spring up quickly.

None of that means that we know how to fight terrorism, only that one analogy suggesting that we probably can’t has less force on close inspection than it appeared to have at first glance.


Kevin has more to say on the subject.