The IRB horror show

Given a science-friendly administration, it might not be impossible to reform the Institutional Review Board process, which poses a substantial risk to freedom of inquiry.

My query about Institutional Review Boards drew so much mail that I am only now getting around to posting the results. It seems clear that concern about the abuse of the IRB process and about the unnecessary burdens it puts on research that poses no risks to its subjects has gone from an idiosyncrasy to conventional wisdom. That doesn’t mean anything will change (the chief IRB-wallah at HHS has announced that he doesn’t think his job is unconstitutional and that removing minimal-risk research from review is “not going to happen”) but it’s a start.

It turns out that there’s a blog devoted to the issue: Institutional Review Blog, kept by Zachary Schrag of George Mason. A quick glance shows it to be balanced and scholarly, poking relentless fun at dim-witted policies such as UCLA’s while offering praise for sensible policies such as those of the University of Missouri-Kansas City.

Another blog, IRB Watch, has a helpful list of articles and reference sites, but is mostly dedicated to collecting horror stories, including a truly hair-raising account of the IRB travails of Elizabeth Loftus and Mel Guyer, the original debunkers of “recovered memory” evidence.

This law review article by Philip Hamburger of Columbia Law School makes the most sweeping general criticism: that the IRB process represents a transparently unconstitutional system of prior restraint on speech. I’m not capable of judging how well the paper reflects the case law, but the logic seems compelling: IRB control is triggered, not by a risk to subjects, but by the intention to generate and disseminate generalizable knowledge. If you’re planning to do that, you need prior permission from an agency acting under federal authority: a straightforward case of censorship.

If the requirement for IRB review extended only to federally-funded projects, it wouldn’t be so bad; like a stem-cell researcher under Bush, you could always decide to work with private money. But the rule applies to anyone doing research (including students doing term papers) at any university that receives any federal funding. And some universities extend that to any research done by any faculty member.

Another obnoxious feature of the system is the absurdly broad definitions of “human research subject” and “personal information.” Public officials interviewed about their work are “research subjects,” and their opinions (e.g., the views of a judge about mandatory sentencing) are “personal information” about them. Thus in order to interview a judge, you need to get “informed consent,” and give the IRB a copy of your “interview protocol” so they can make sure that none of your questions might wound the judge’s feelings or get him to say something that might subject him to criticism. The answer “I’m just going to ask about mandatory sentences and see where the conversation goes from there” is not considered acceptable.

Worse yet, none of this happens in face-to-face meetings; the IRB can just keep asking dumb questions, delaying the start of your project for a month each time, until you give up. And worst of all, the IRB is completely unaccountable for interfering with research, though completely vulnerable to second-guessing by folks in Washington if it allows research and someone gets hurt. There is no requirement that the IRB give a reason for refusing permission, no deadline for it to act, and no appeal from its rulings.

It seems pretty easy to construct a remedy:

1. There should be multiple IRBs per institution, with each investigator free to pick the one whose processes are most reasonable.

2. The IRB should not be allowed to ask any question about a proposal without first identifying the possible risk to subjects that the query is supposed to elucidate. (I’ve had requests to summarize previous research.)

3. Unless the IRB states in writing a good reason for rejection, approval should be automatic after a fairly short review period, as is the case with clinical research regulated by the FDA.

4. Any delay past the deadline, or any rejection, should be subject to appeal in front of a neutral body.

Horror stories at the jump.

***********

1. I’ll start with my own. I did a study of probation enforcement in three California counties. After considerable delay because one of the “community members” of the IRB hated the criminal justice system and decided to express that hatred by blocking research about it, I finally got permission to interview probation officers and judges. But the IRB decided to “protect” the probationers themselves from being able to make their views known: if probationers were approached in the probation office, that would be “inherently coercive,” while if they were approached by phone or letter the communication might go astray and reveal their probation status to someone they didn’t want to know about it.

2. This one I was told about in person. In a study of undergraduate life, a sociologist wanted to offer students $25 for filling out a survey twice a year. The IRB decided that this constituted a “coercive offer,” since the student might be so poor that he wouldn’t feel free to turn down the $25. So to protect the students, the sociologist had to offer them only $10, leading to a poor response rate that made it almost impossible to draw any valid statistical inference from the results.

3. A reader writes: “I heard this from a friend who works in development research at another university. Her dissertation involved surveying farmers in a desperately poor African nation about their practices. She planned to hire locals to conduct the survey but found her work held up for several months as the IRB insisted that anyone involved with the study take its online human subjects research training. This despite the fact that the locals did not speak English and the nearest internet connection was hours away.”

4. This from another reader:

I’m a law student. Last year, one of my classmates observed an incident in a local restaurant in which two African-American young men were turned away at the door on what seemed to be a flimsy pretext. The explanation for turning the two young men away involved something about a dress code. Since this is not a fancy place and nobody had heard of someone from our university being turned away, it seemed suspicious. There is a pretty big town-gown divide in our small, relatively poor city and it looked like the restaurant was trying to keep African-American locals out of their establishment.

In any case, my classmate was pretty upset about what he saw. There ended up being a student meeting to discuss appropriate responses. One of the proposals made at the meeting was to send testers of different races in different styles of clothing to the restaurant over some period of time to test whether they enforced their dress code in a discriminatory manner. This seemed like a good thing to do before escalating the situation. If the testers didn’t find discriminatory behavior, it would protect the restaurant from unfair negative publicity. If the testers did find discrimination, it would provide hard evidence of discrimination to make it harder for the restaurant to deny the problem and possibly form the basis for a racial discrimination complaint.

Law school administrators at the meeting informed us that we would have to seek approval from the university’s IRB if we wanted to use testers in this way. This is outrageous since (a) there was no academic component to the proposal, and (b) this was not a university-sponsored activity. Of course, IRB approval takes a long time, so there was no prospect of doing anything within a reasonable time frame.

I didn’t pursue the matter in much depth, but, given the extremely broad way that the university IRB policy was written, it seemed like the administrators could have been right in their interpretation of university policy.

5. Seth’s Blog has a funny-if-it-didn’t-happen-to-you account of what MIT did to a behavioral-economics study of the effects of sexual arousal on decision-making.

6. This story, also from Seth’s Blog, is really chilling:

At UC Berkeley a few years ago, I submitted to the animal research IRB a proposal to test with rats a key observation behind the Shangri-La Diet: Drinking sugar water caused me to lose weight. The proposal was turned down: It couldn’t possibly be true that sugar water can cause weight loss, said the IRB. Testing this idea was a waste of time.

Can you say “scientific censorship”? If you can protect the existing orthodoxy by ruling “unethical” any research that threatens it, the scientific process is in deep, deep trouble.

7. Keith Humphreys has documented in detail how IRBs managed to chew up 18 months and 17% of the research budget on an entirely observational study in which IRB review led to no substantial change in the study design.

8. On the other hand, a GAO report shows how vulnerable the “rent-an-IRB” system is to deceit by people who want to do unethical research.

Update

Another example, this one from a patient rather than an investigator:

I’ve been suffering from tendonosis in my right elbow and am now considering a somewhat experimental treatment involving platelet-rich plasma (PRP). The patient’s blood is drawn and spun in a centrifuge to obtain the PRP, which is then injected into the non-healing tendon. The idea is that PRP contains a more concentrated level of growth factors than ordinary blood. (It probably won’t be long before stem cells are added to the mix.)

I was just reading an article about a controlled study of the procedure and was surprised to find out that the study, although randomized with the control group receiving an injection of bupivacaine with epinephrine, wasn’t blind. Why? “We were unable to blind the patients because our institutional review board [at Stanford] refused to allow the drawing and discarding of a small amount of blood that would be required to fully blind the patients.”

I can’t think of a reason why it would be unethical for consenting participants to have some of their blood thrown away if they ended up in the control group. Can you?

Madness!

Second update

And here’s an example of pure political censorship.

Several years ago the new chief of general internal medicine at my hospital pointed out to me that every resident in internal medicine was a graduate of a foreign medical school, and that this had been the pattern for years. He was curious about whether our patient population was having any difficulty in communication with resident physicians who were almost all non-native speakers of English. We decided to conduct a small study of patients and physicians, and prepared a brief proposal for our hospital IRB (we are a teaching hospital associated with a major medical school).

I was very surprised when the hospital IRB rejected our proposal, with no real explanation. I contacted the chair of the IRB and was told simply that the IRB would not approve this project, and no further proposal to conduct the research would be considered.

When I reported this to our division chief, he clarified the issue. Every physician on the hospital’s IRB was a non-native speaker of English, and it was obvious that they were offended by the topic, and were possibly concerned that research would reveal communications problems.

At that point I went to the university IRB, which had no concerns at all, and didn’t even raise the point that we were not going through the hospital IRB. We conducted the little project over a month, and happily found no communications problems between foreign-trained physicians and patients, in either direction. Ironically, this good outcome could have had the endorsement of the hospital IRB, had they not been such jerks.

I have regular frustrations with IRBs, but this case was certainly the most bizarre and silly that I have encountered.

Note that the lack of any review of IRB decisions enables this sort of nonsense.

Author: Mark Kleiman
