Friday, June 27, 2008

Lies, damn lies and statistics: death penalty edition

My blog partner recently posted a terrific piece on this week's Supreme Court ruling regarding the death penalty and child rape. I thought this was a good chance to talk about how difficult it can be to parse the data on such complex social issues. Warning: this gets a little technical, so please fight the urge to fall asleep.

In discussing the evidence on the deterrent effect of the death penalty on crime, she cites an Amnesty International fact sheet, which provides two graphs showing that murder rates in states without the death penalty are lower than in states with it:

[Graphs from the Amnesty International fact sheet: murder rates in states with vs. without the death penalty]

So we should conclude that not only does the death penalty not have a deterrent effect, but that it actually increases murders, right? Well, not really. These graphs show a clear correlation between the death penalty and murder rates, but they provide no information about which causes which. This type of analysis suffers from problems of "endogeneity", where you have unclear or two-way causality. So while the death penalty could be causing differences in murder rates, it is also possible that causality runs the other way; that is, states with already higher murder rates felt the need to implement the death penalty in order to reduce crime. As a result, we need more sophisticated statistical techniques to determine exactly what is happening with the data.
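To see how this trips up a naive comparison, here's a toy simulation (entirely made-up numbers, not real crime data) in which the death penalty has zero deterrent effect by construction, but states with higher murder rates are more likely to adopt it. The simple comparison of averages still makes the death penalty look like it increases murders:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 5000  # hypothetical "states" in a toy simulation

# Underlying murder rates vary for reasons unrelated to the death penalty.
baseline_murder_rate = rng.normal(6.0, 2.0, n_states)

# Reverse causality: states with higher murder rates are more likely
# to adopt the death penalty.
adopt_prob = 1 / (1 + np.exp(-(baseline_murder_rate - 6.0)))
has_death_penalty = rng.random(n_states) < adopt_prob

# The true deterrent effect is zero: observed murder rates are just the
# baseline plus noise, regardless of the death penalty.
murder_rate = baseline_murder_rate + rng.normal(0, 0.5, n_states)

print(f"Mean murder rate, death penalty states:    {murder_rate[has_death_penalty].mean():.2f}")
print(f"Mean murder rate, no-death-penalty states: {murder_rate[~has_death_penalty].mean():.2f}")
# Death penalty states show a higher average murder rate even though
# the penalty has no causal effect in this simulation.
```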

To overcome the endogeneity problem, empirical papers have used an "instrumental variables" approach. An instrumental variable is a variable that is correlated with the explanatory variable of interest in the model (in this case the use of the death penalty), but affects the dependent variable (in this case the murder rate) only through that channel. Using it isolates the part of the variation in the explanatory variable that isn't tangled up with the murder rate itself. Here's a technical explanation of how this works.
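To give a flavor of the mechanics, here's a rough sketch with the same kind of made-up data, coding two-stage least squares by hand rather than using an econometrics package: the first stage regresses the endogenous variable on the instrument, and the second stage regresses the outcome on the fitted values from the first stage.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z = rng.normal(size=n)   # the instrument: shifts d but has no direct path to y
u = rng.normal(size=n)   # unobserved confounder / error term

# d: the endogenous explanatory variable (think "use of the death penalty"),
# driven by both the instrument and the same unobservables as the outcome.
d = 0.8 * z + 0.6 * u + rng.normal(size=n)

true_beta = 0.0                               # assume no true deterrent effect
y = true_beta * d + u + rng.normal(size=n)    # the outcome ("murder rate")

def ols(X, y):
    """Ordinary least squares coefficients via least-squares solve."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS of y on d is biased away from zero because d is correlated with u.
X_naive = np.column_stack([np.ones(n), d])
print("Naive OLS estimate of beta:", ols(X_naive, y)[1])

# Two-stage least squares:
# Stage 1: regress d on the instrument, keep the fitted values.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ ols(Z, d)
# Stage 2: regress y on the stage-1 fitted values; estimate lands near zero.
X_iv = np.column_stack([np.ones(n), d_hat])
print("2SLS estimate of beta:     ", ols(X_iv, y)[1])
```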

Unfortunately, as Justin Wolfers and John Donohue point out, finding a good instrumental variable is not so easy. For example, one well-known empirical paper on this topic used "percent of Republican voters in the previous election". There's good reason to think that this would be correlated with the explanatory variable (i.e. the death penalty), but there's also good reason to think it would be correlated with the murder rate through other channels, violating one of the rules for instrumental variables. Republicans tend to take tougher stances on crime, favoring harsher sentencing laws and a stronger police presence. As a result, states with more Republican voters may have lower murder rates for reasons that have nothing to do with their use of the death penalty.
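Continuing the toy example, here's what can happen when the would-be instrument also affects the murder rate directly, say through other tough-on-crime policies, so that second rule fails. Two-stage least squares then confidently delivers a "deterrent effect" that isn't there:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

u = rng.normal(size=n)   # unobserved determinants of crime
z = rng.normal(size=n)   # stand-in for "% Republican voters"

# z does predict death penalty use (the relevance condition holds)...
d = 0.8 * z + 0.6 * u + rng.normal(size=n)

# ...but z also lowers the murder rate directly, e.g. through tougher
# policing and sentencing, so the exclusion restriction is violated.
true_beta = 0.0
y = true_beta * d - 0.5 * z + u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), z])
d_hat = Z @ ols(Z, d)
X_iv = np.column_stack([np.ones(n), d_hat])
print("2SLS estimate with an invalid instrument:", ols(X_iv, y)[1])
# The estimate comes out well below zero: the direct z -> y channel gets
# loaded onto the death penalty coefficient, making the death penalty look
# like it reduces murders even though its true effect here is zero.
```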

Why does this need to be so complicated? Well, the only real way to identify the effect of the death penalty is to run a controlled experiment in which, for example, you randomly assign some states to have the death penalty and others not to. That way, the only systematic difference between the groups of states is the death penalty itself. Of course, we can't do this. So we have to resort to observational studies, despite their difficulty in pinning down the direction of causality.

So what does the research say? The evidence is mixed. There's a very interesting discussion between Justin Wolfers, John Donohue and Paul Rubin in The Economists' Voice, an excellent collection of essays on a wide range of public policy issues. The authors debate the results of several papers on the question of deterrence, papers whose results conflict with one another. Wolfers and Donohue, looking at the longest dataset of all the papers involved, find no net lives saved per execution. While Rubin and others disagree with this finding, I think the preponderance of evidence in this area suggests that the death penalty has no consistent or significant deterrent effect. Even Gary Becker, Nobel Laureate and proponent of the death penalty deterrent theory, notes that "the evidence is decidedly mixed...the weight of the positive evidence should not be overstated."

When faced with morally charged issues like the death penalty, many people look to empirics to settle the debate. While I think that impulse is the right one, there's a danger in overstating or oversimplifying the results of empirical research in the social sciences. Social systems are enormously complex, and sifting through reams of data requires a great deal of time, patience and humility. Statistics has a lot to teach us about politically charged issues like the death penalty; we just shouldn't be in a rush to judge what it has to say.
