Andrew Cohen at The Atlantic demonstrates that you can be a leading “legal analyst” without knowing what the word “random” means or the difference between probability estimation and prophecy.
Author: Mark Kleiman
Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches the methods of policy analysis as applied to drug abuse control and crime control policy, working out the implications of two principles: that swift and certain sanctions don't have to be severe to be effective, and that well-designed threats usually don't have to be carried out.
Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken)
When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the "books of the year" by The Economist)
Against Excess: Drug Policy for Results (Basic, 1993)
Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989)
19 thoughts on “Scientific illiteracy”
Wow. I guess Cohen has never heard of the Law of Large Numbers or the Central Limit Theorem.
Juries are no easier to predict than the temperature or pressure of an assembly of a dozen helium atoms. The problem is similar: the ensemble is too small for emergent properties like temperature or pressure to be meaningful.
It is one of the puzzles of the universe that order can emerge from chaos, but it does.
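The small-ensemble point above can be illustrated with a minimal simulation (a sketch, with invented parameters: each "member" is modeled as an independent fair coin flip, which real jurors certainly are not). The spread of the ensemble average shrinks like 1/sqrt(N), so a 12-member group stays noisy while a large one becomes predictable:

```python
import random

random.seed(42)

def mean_spread(n_members, trials=2000):
    """Std. dev. of the average of n_members fair coin-flip 'votes',
    estimated by simulation."""
    means = [sum(random.random() < 0.5 for _ in range(n_members)) / n_members
             for _ in range(trials)]
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

# A 12-member ensemble is noisy; a 2000-member one is far more predictable.
print(mean_spread(12))    # roughly 0.14 (theory: 0.5 / sqrt(12))
print(mean_spread(2000))  # roughly 0.011 (theory: 0.5 / sqrt(2000))
```

This is the Law of Large Numbers at work: order emerging from chaos only once the ensemble is big enough.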
On the other hand, the contemptuous political view commonly held here can lead to some howling mistakes. Recent example: On Monday, Chris Newfield tweeted:
@cnewf: holed up in Brooklyn and not yet impressed with the policy of the closing of everything. hard to imagine this lockdown pre-911
A _great_ example of how reality-based this community has become.
Ah yes, I’m sure we all remember the glut of polls of Supreme Court Justices in the run-up to the ACA decision…
Still, it could be worse. At least he didn’t say something like:
“So should Mitt Romney win on Nov. 6, it’s difficult to see how people can continue to put faith in the predictions of someone who has never given that candidate anything higher than a 41 percent chance of winning (way back on June 2) and — one week from the election — gives him a one-in-four chance, even as the polls have him almost neck-and-neck with the incumbent.”
One thing he gets right: There is no predicting how much voter suppression, computer hacking and the whole pantheon of GOP dirty tricks will play into this election.
…and how much they’ll be offset by same by Democrats.
There is a ton of evidence of Republican practices of all of the above. Where is the evidence of Democrats doing the same or similar?
Inner-city political machines have their own methods.
such as? actual evidence please.
Jeez, all this time I thought Silver was doing statistical probabilities, not predictions. Did I miss something in Cohen’s piece about predictions?
But just think, for example, about how many of us — on all sides of the ideological spectrum — were dead wrong this June about the justices and the Affordable Care Act?
Hardly anyone was dead wrong. Most opinions were that the four liberal justices would vote to uphold the law, and that Scalia and Thomas, at least, would vote against. So that’s six out of nine right immediately. And almost as many thought Alito would be a negative as well. As for Kennedy and Roberts, the common prediction was that they would vote the same. So that’s half right. Overall, I’d say that those making predictions probably averaged over 80% correct on the Justices’ votes.
Guess why. Even though there was no polling, there was plenty of data about the Justices’ likely leanings on the matter, and that data turned out to be an accurate basis for prediction.
Really what was being done was nine predictions based on knowledge of their opinions, as you imply.
However, the assertion “As for Kennedy and Roberts, the common prediction was that they would vote the same” is a prediction of interaction between individuals. The conventional wisdom was that Kennedy was most likely to be the swing vote and that Roberts would want to write the majority opinion. This is a multi-part argument rather than a single verifiable hypothesis. That Kennedy was the deciding vote turned out to be false [although reports suggest that initially it was true], while the claim that Roberts wanted to write the majority opinion was true. The prediction that Kennedy and Roberts would vote the same was entirely false. The theme of the article, after all, was the understanding of basic concepts of reasoning, or the lack thereof.
I don’t disagree with anything you say, but it’s not clear to me what point you are trying to make.
Question for RBC:
I have seen Nate’s website many times, but I do not know much about his models. Is there a site where one can see the variables in his model, which main effects he has, how many interaction terms, which link functions (probit/logit, etc.) he uses? This may be proprietary information, but it would be of great interest to see more about his methods. I sort of assume he is using some kind of maximum likelihood estimation, but really have no clue how he does what he does.
The only thing I get for sure is that he is not engaged in the predicting business.
I am currently reading Silver’s book The Signal and the Noise. In the prefatory material he says he is going to discuss some of his modeling methods.
So far, it appears that Silver is mostly a poll aggregator. What makes him different from other aggregators like Real Clear Politics is that he weights the polls by their accuracy. There are some economic and demographic factors involved, but he hasn’t yet discussed those aspects.
The other thing to note about Silver’s forecasts is that he has more in common with the National Weather Service than with Karl Rove (and other talking heads). That is, he is providing something usable as a prior on the election rather than a binary forecast.
His models are proprietary. From what I’ve heard (and I suspect he’s written a lot about this; I don’t pay close attention), he uses some factors other than polls, especially economic indicators, but mostly what he does is look at polls, especially state polls, weight or adjust them according to various factors (track record of the polling firm, polling method, etcetera), and basically do a meta-analysis that attempts to get more statistical power from the larger sample size that results from combining many polls.
I believe that other people use similar approaches with models whose complete details are available.
For more information about the approach taken at 538 and at similar sites, you might check out the FAQ page at Sam Wang’s site; I suspect something similar exists at 538 but don’t know where.
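Since the actual models are proprietary, here is only a minimal sketch of the general idea being described: a weighted average of polls, where each poll's influence scales with its sample size and a quality rating. All numbers and weights below are invented for illustration; they are not 538's actual ratings or data.

```python
# Hypothetical poll data: (candidate share in %, sample size, quality weight).
# The quality weights stand in for pollster ratings and are invented here.
polls = [(50.0, 800, 1.0), (48.0, 1200, 0.8), (51.5, 600, 1.2)]

def weighted_average(polls):
    """Combine polls, weighting each by sample size times a quality rating."""
    total_w = sum(n * w for _, n, w in polls)
    return sum(share * n * w for share, n, w in polls) / total_w

print(round(weighted_average(polls), 2))  # → 49.66
```

The meta-analysis intuition is that the combined estimate behaves like a single poll with a much larger effective sample, so its standard error is smaller than any individual poll's.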
Perhaps this will shed some light: http://fivethirtyeight.blogs.nytimes.com/methodology/
Echoes what Dennis said but in more detail.
Much obliged, Tim. Steps 6 and 7 shed light on where the probabilities come from; it appears to be a matter of estimating the uncertainty in the model and using the number of standard errors by which a candidate leads after the adjustments have been made. Very elegant. Very smart man.
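The last step described above (turning a lead measured in standard errors into a probability) can be sketched as follows, assuming the adjusted margin estimate is treated as normally distributed; the numbers are hypothetical, not from any actual forecast:

```python
from math import erf, sqrt

def win_probability(lead, stderr):
    """P(true margin > 0) for a normally distributed margin estimate,
    using the standard normal CDF."""
    z = lead / stderr
    return 0.5 * (1 + erf(z / sqrt(2)))

# A hypothetical 2-point lead with a 3-point standard error:
print(round(win_probability(2.0, 3.0), 3))
```

This is why a candidate can trail narrowly in the polling average yet still be given a substantial chance of winning: the probability reflects the uncertainty in the estimate, not just its sign.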
I did see his book the other day; if I did not have such a stack of unread books ahead of it in the queue, it would be the next one on my list.
what is wrong with *you*
his blog piece is mostly dead on; he has a perfectly good understanding of random
you either didn’t actually bother to read his piece, or are deliberately distorting what he said
Ezra darling, look up Central Limit Theorem and random distributions. You might catch a clue