Re: the Hopkins study of Iraqi deaths published in the Lancet:
1. No, I don’t know how much the overall Iraqi mortality rate has gone up since the invasion. Neither does anyone else. That suggests the need for less shouting and more measuring and calculating. (For a pretty good example of serious discussion, see the comment thread on this Crooked Timber post, and then compare it with the stuff coming out of Red Blogistan.)
2. Jane Galt is entirely right to say that when a measurement or calculation generates an unbelievable result, it’s often wise not to believe it. That’s just Bayes’s Rule in action. What time is it when the clock strikes thirteen? Time to get the clock fixed.
3. But one also shouldn’t cling too strongly to prior beliefs when those beliefs aren’t strongly founded. That’s also Bayes’s Rule in action. And making the most careful estimate possible under the circumstances isn’t at all the same thing as “a wild-assed guess.” If lower estimates made with other methods cast doubt on the Lancet figure, then by the same token the Lancet figure casts doubt on them. “This result has a large error band around it, and may be biased upward” is not the same as “This figure is worthless and should be ignored.”
4. Refusing to believe something because you’d feel terrible if it were true is not a good statistical method. The rage of the hawks against the authors of the study certainly stems in part from an unwillingness to contemplate the possibility that their pet adventure has cost something more than half a million lives. And their universal lack of expressed interest in finding out the true number tells very heavily against their sincerity.
5. The argument that 600,000 people couldn’t have died without there being more news stories depends on a claim about the accuracy of the newsgathering process in Iraq that doesn’t seem to be supported by evidence. Most of the killing seems to be Iraqi-on-Iraqi. More than half are by gunfire. How do we know that the incidence of individual homicide and small-scale massacre isn’t that high? If ten people were killed in each of a hundred villages one day, what reason is there to think that the newspapers would report a thousand-casualty day?
6. Fewer than a third of the excess deaths were from Coalition action; the rest were Iraqi-on-Iraqi. So comparisons with German civilian casualties in WWII are pointless.
7. Yes, the survey projected 600,000 excess deaths based on 547 actually reported deaths. That’s what “sampling” means, doofus. Every four years, pollsters in the U.S. project the results of voting by 100,000,000 people based on samples of 1000 or so, and get within a few percentage points.
8. The claim that 2.5% of Iraqis couldn’t have died without leaving visible depopulation is very weak. Visible to whom? Certainly three or four times that number of people have left the country.
9. Incident counts under-report fatalities. So the fact that a population-based estimate comes up with a higher figure than adding up the incident counts is no surprise. Whether the discrepancy in this case is so large as to cast doubt on the population-based estimate is a question for someone expert in both sets of methods and on what’s actually happening in Iraq. The intersection of that set with the set of bloggers may be empty.
10. The paper claims that one team of four surveyors could survey a cluster of forty households in a day. That seems odd, and calls for some explanation.
11. The interviewers asked for death certificates, and mostly saw them. But the estimated number of fatalities is much larger than the total mortality figures compiled by Moqtada al-Sadr’s Ministry of Health. Either the sampling is off, or the interviewers were lying, or the families were showing phony death certificates, or the local officials who produce death certificates aren’t reporting them to the Ministry of Health, or the Ministry is failing to add them up right, either deliberately or not. Perhaps someone could go to the local authorities and ask them for their totals. But it wasn’t incumbent on the Hopkins folks to do so.
12. If the incident-based counts have been rising, that tells us something about the trend, even if the level of the incident-based counts is below the level of the survey-based estimate. So to say that when John Murtha cites those numbers he’s casting doubt on the Lancet report is intellectually dishonest even beyond the warblogger norm.
13. Estimating a confidence interval (“error band”) around a point estimate is a way of being honest about how much you know and don’t know. The argument “this study has a big error band, therefore it’s not reliable” (more or less what Medpundit says) betrays a quite astonishing level of either deception or ignorance.
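The polling analogy in point 7 is easy to check numerically. Here is a minimal simulation, with an assumed true vote share of 52% (a purely illustrative figure, not from the study): repeated samples of 1,000 voters land within about three percentage points of the truth roughly 95% of the time, which is exactly what a confidence interval promises.

```python
import random

random.seed(0)
TRUE_SHARE = 0.52      # hypothetical true share among 100,000,000 voters
N = 1000               # poll sample size

# Draw many independent polls and record how far each lands from the truth.
errors = []
for _ in range(2000):
    hits = sum(random.random() < TRUE_SHARE for _ in range(N))
    errors.append(abs(hits / N - TRUE_SHARE))

# Theoretical 95% margin of error: 1.96 * sqrt(p*(1-p)/n) ~= 0.031
within = sum(e <= 0.031 for e in errors) / len(errors)
print(f"fraction of polls within +/-3.1 points: {within:.2f}")
```

The same logic is what lets 547 observed deaths in a properly drawn sample support a national estimate: the error band is wider than a pre-election poll's, but that band is the honest statement of uncertainty, not a defect.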
As so often Nietzsche was there first:
“I have done that,” says memory. “I could not have done that,” says pride. Eventually — memory yields.
Daniel Davies puts the controversy in what seems to me the right context:
First, don’t concentrate on the number 600,000 (or 655,000, depending on where you read). This is a point estimate of the number of excess Iraqi deaths – it’s basically equal to the change in the death rate since the invasion, multiplied by the population of Iraq, multiplied by three-and-a-quarter years. Point estimates are almost never the important results of statistical studies and I wish the statistics profession would stop printing them as headlines.
The question that this study was set up to answer was: as a result of the invasion, have things got better or worse in Iraq? And if they have got worse, have they got a little bit worse or a lot worse? Point estimates are only interesting in so far as they demonstrate or dramatise the answer to this question…
And the results were shocking. In the 18 months before the invasion, the sample reported 82 deaths, two of them from violence. In the 39 months since the invasion, the sample households had seen 547 deaths, 300 of them from violence. The death rate expressed as deaths per 1,000 per year had gone up from 5.5 to 13.3.
Davies claims that the ratio between estimates of the number of dead from incident reports and the actual number is usually five or more. If that’s right, the discrepancy between the Iraq Body Count estimate and the Lancet estimate doesn’t look improbably large.
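Davies's back-of-envelope formula is simple enough to run yourself: excess deaths ≈ (post-invasion rate − pre-invasion rate) × population × years. The death rates below are from the study as quoted above; the population figure of roughly 26 million is my assumption, not the study's.

```python
# Davies's formula: excess deaths ~= (post-rate - pre-rate) x population x years.
pre_rate, post_rate = 5.5, 13.3      # deaths per 1,000 per year (from the study)
population = 26_000_000              # assumed Iraqi population (illustrative)
years = 3.25                         # roughly the 39-month post-invasion window

excess = (post_rate - pre_rate) / 1000 * population * years
print(f"estimated excess deaths: {excess:,.0f}")
```

The result lands in the neighborhood of 650,000, close to the headline figure, which shows the point estimate is just arithmetic on the measured rate change, not a number conjured from nowhere.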
Scott Lemieux at Tapped has more. Lindsay Beyerstein provides an annotated index to warblogger nonsense.
15 thoughts on “Iraqi deaths, confidence intervals, and the state of denial”
Very nice over all, though I think you give 'Jane Galt' less of a spanking than her poor display deserves.
Good post, but I don't agree with this:
The paper claims that one team of four surveyors could survey a cluster of forty households in a day. That seems odd, and calls for some explanation.
Bear in mind that in the 1849 households visited there were 629 deaths. So for most households it's simply a matter of determining the number of people in the household in 2002, noting any births or other arrivals and any departures. So for those households maybe ten minutes work each? I'm not sure, but I'd guess that Iraqi doctors are used to working fast these days.
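The commenter's timing estimate can be made explicit. Assuming an eight-hour working day (my assumption, not the paper's), the time available per household depends on whether the four interviewers work each household together or split the cluster among themselves:

```python
# Two readings of "one team of four surveys forty households in a day",
# assuming an eight-hour working day (an assumption for illustration).
day_min = 8 * 60
households = 40

# (a) the team visits each household together:
per_household_team = day_min / households           # minutes per household
# (b) the interviewers split up, ten households each:
per_household_split = day_min / (households / 4)    # minutes per household

print(per_household_team, per_household_split)
```

Even on the tighter reading that gives 12 minutes per household, which is consistent with the commenter's "maybe ten minutes" for households reporting no deaths; splitting up allows nearly 50 minutes each.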
"Jane Galt is entirely right to say that when a measurement or calculation generates an unbelievable result, it's often wise not to believe it. That's just Bayes's Rule in action. What time is it when the clock strikes thirteen? Time to get the clock fixed."
When one has no freakin' clue as to what 'credible' or 'incredible' means, then rejecting a study due to the results being 'incredible' is not the proper thing. The proper thing is to review the methodology, and to get a clue (not expertise, but at least a clue) as to what should be 'credible'.
*As pointed out in the Lancet article*, civil wars have frequently resulted in death tolls in the few-several hundred thousands. So that's a strike against 600K being 'incredible'.
*As pointed out and referenced in the Lancet article*, media reports of deaths in civil wars generally start at 50% coverage, and head down from there. So that's strike 2.
I read Megan's article, and she doesn't come up with a good reason to disbelieve the survey results. She appeared on Ezra Klein's blog, produced no factual arguments which were stronger than wet Kleenex, got spanked, and left.
That's strike 3 – continuing to disbelieve, with no factual arguments, after numerous factual arguments have been made against that disbelief.
Three strikes – Megan's out!
Frankly, anybody who's read her blog knows her schtick. She claims to be a quantitatively-skilled, libertarian, Chicago MBA person, but really she's just a lightly-skilled right-winger who's finally found her niche, writing propaganda. In the future, I expect to see her articles in the WSJ (editorial page, not the news part), and other propaganda outlets.
To me, the striking thing about the quality of debate is that the right starts with 'incredible' and 'politically biased', followed by numerous highly innumerate arguments which make one hope, really hard, that their presenters aren't in jobs where quantitative cluelessness would endanger anybody's safety.
Is it just my bias, or is it true that a total anti-science attitude, previously the specialty of the religious right, is creeping across the right in general? Is it the fact that this administration has been surprisingly successful in just flat-out lying?
Anyone who names herself after an Ayn Rand character has to bear the presumption of intellectual incompetence, to say nothing of her aesthetics.
P.S.–Great Nietzsche quote, one of my favs.
Anybody who, after these three-plus atrocious years, is supportive of the Iraq War and the civilian leadership that has pursued it has already proven that their preconceived notions are as resilient as they could possibly be. I would certainly not expect a mortality study in a medical journal to bring them to the light all of a sudden.
Is it just my bias, or is it true that a total anti-science attitude, previously the specialty of the religious right, is creeping across the right in general?
If it's a bias, it's one widely shared. I hope we're all biased and wrong about this, because the alternative is really scary. It seems as if, in the past, squabbles over methods of data reduction were confined to the specialists, while even well-informed layfolk would wait for the dust to settle before drawing conclusions. Now, with the hyper-participatory style that prevails on the Internets, no paper is safe from the attack of the doofuses.
"Frankly, anybody who's read her blog knows her schtick. She claims to be a quantitatively-skilled, libertarian, Chicago MBA person, but really she's just a lightly-skilled right-winger who's finally found her niche, writing propaganda. In the future, I expect to see her articles in the WSJ (editorial page, not the news part), and other propaganda outlets."
I find her blog to be much more balanced and well-informed than the vast majority of the blogosphere or print media. She is now a deputy editor for The Economist, by the way.
I must admit I can't bring myself to read it, but based on her work in comments (e.g., the Klein thread noted above), I must say I mistrust your judgment. Are you sure you're not falling for her well-practiced "reasonable analyst" act? I find that, once you brush away her rhetoric, her analysis ranges from shallow to innumerate.
As for the Economist, I remember when it was great (1980s), good (1990s) and that something happened to it such that it stopped being worth the not-inconsequential scratch it cost ~2000. If MM/Galt had anything to do with that, she deserves your ire, not your approbation.
Happily, I can report that the FT neatly takes the Economist's place, with even funnier writing than in the E's 1980s heyday and on salmon-colored paper to boot.
To give The Economist its due, they published a reasonably fair summary of the study, as they did with the earlier study in 2004 which they discussed in more depth. It probably helps that they have better statisticians than the average newspaper.
Ah, links don't work? Here it is:
A very good post.
FWIW, I don't really disagree with anything you wrote here, including my intellectual dishonesty. However, in a frail attempt to erect a defense, I would say this: My point was not actually to suggest that Murtha had impeached The Lancet study so much as to suggest a snarky question that might be put to Murtha in the interest of eliciting a response that might have some political and/or entertainment value.
Well, you know, TH, it IS perfectly possible that the Lancet study is more accurate than Murtha while Murtha is still more accurate about the general Iraq situation than Bush. So let's get back to the accuracy of the Lancet report itself, hm? And in that connection, while you're quite correct that the Lancet's current editor has some screwball political views, if you can't find a serious procedural error in the study, your only possible remaining ground on which to question it is simply to say that its authors are a bunch of deliberate liars. As yet, I don't even see any grounds to regard its conclusions as implausible. (On this last subject, see Davies' entries on the accuracy of the Iraqi Ministry Health and of Iraq Body Count, and also Juan Cole: http://www.juancole.com/2006/10/655000-dead-in-ir… .)