A recent study asked focus groups of teenage girls about their attitudes toward alcohol. Nearly all groups pointed out the dangers of impaired judgment and increased vulnerability. However, 60% of the groups also raised a notable advantage: intoxication makes for a handy excuse when you want to do something but don’t want to take responsibility for it. The teens noted that getting drunk before engaging in sexual activity was a way to demonstrate that they weren’t ‘prudes’ while avoiding being labeled ‘sluts’. As one girl said, one can “come to school and all these rumors will be started [and say], ‘Oh, I was drunk. It wasn’t my fault.’ It’s just the way you get out of something.”

The message being slyly communicated is that, had the teens been acting with their full faculties, they would have acted differently. This type of strategy is, of course, not unique to teenagers. Many of us are willing to dodge blame, or withhold it from others, when control seems to be lacking. It wasn’t her; it was the stress talking, or It was a crime of passion, or one that has become particularly popular of late: He couldn’t help himself, it’s how his brain is wired.

Though the above attempt at neuro-defense is simplistic, the concepts it alludes to are complex and fascinating. In a 2004 theoretical paper, Greene and Cohen argued that the spread of neuroscientific knowledge will have a transformative effect on the public’s intuitions about moral responsibility and, as a consequence, on the public’s support for legal decisions. They submitted that growing knowledge of the brain’s mechanistic processes will slowly erode the dualist folk beliefs on which most people’s understanding of free will rests. Diminished acceptance of this idea of free will, Greene and Cohen argued, will change people’s intuitions about moral responsibility and, in turn, weaken support for punishment that depends on moral blame. This hypothesis formed the basis of a recent set of studies that my colleagues (including Greene) and I published in Psychological Science.

The four studies investigated the relationships between knowledge of brain mechanisms, attributions of free will, and judgments about punishment. We wanted to see whether exposure to knowledge of brain processes reduces the specific type of punishment motivated by the desire to give the transgressor her just deserts: retributive punishment. Retributive punishment is contrasted with consequentialist punishment, the form of punishment that aims instead to serve utilitarian ends: doing whatever is best for society, independent of how much or how little the transgressor herself suffers. Consequentialist punishment doesn’t actually require the transgressor to be morally responsible for whatever she is being punished for, only that meting out the punishment has a utilitarian benefit (such as preventing future harm by removing the transgressor’s ability to re-offend or by deterring others from following suit). We don’t attack viruses because they morally deserve punishment, but so that their deleterious effects can be minimized. Retributive punishment, on the other hand, does theoretically depend on people being morally blameworthy for their actions, and thus morally deserving of suffering for having committed them.

So if people believe that free will is necessary for an actor to be morally blameworthy, then only retributivism, and not consequentialism, should depend on free will beliefs. Our first study confirmed that the above logic is generally consistent with people’s intuitions; beliefs in free will positively predicted support for retributive punishment, but were orthogonal to support for consequentialist punishment. Our next three studies experimentally tested whether diminishing free will beliefs—by presenting participants with cues that attribute behavior to mechanistic causes—actually reduces this retributivism.
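To make that first study’s logic concrete, here is a minimal sketch, using simulated data and illustrative variable names (not our actual dataset or measures), of the pattern being tested: free will beliefs should correlate with support for retributive punishment but be roughly unrelated to support for consequentialist punishment.

```python
# Illustrative sketch only: simulated data, hypothetical variable names.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 300
free_will = rng.normal(0, 1, n)                       # belief-in-free-will scale score
retributive = 0.4 * free_will + rng.normal(0, 1, n)   # "they deserve to suffer for it"
consequentialist = rng.normal(0, 1, n)                # "whatever best protects society"

for label, y in [("retributive", retributive), ("consequentialist", consequentialist)]:
    r, p = stats.pearsonr(free_will, y)
    print(f"free will beliefs vs {label} support: r = {r:.2f}, p = {p:.3f}")
```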

It does. The first of these experiments borrowed a manipulation from a 2008 paper by Vohs and Schooler (both of whom are also authors on the present paper), which showed that diminishing free will beliefs increased cheating behavior. As in their paper, we had participants read a passage from Francis Crick’s 1994 book The Astonishing Hypothesis, wherein Crick argues for a strongly reductive view of human experience. “You,” he culminates, “are nothing but a pack of neurons.” Compared to those who had read a neutral passage from the same book, those who had been exposed to the brunt of Crick’s reductive physicalism recommended just half the prison sentence for an assailant convicted of murder. He couldn’t help himself, it’s how his pack of neurons is arranged.

In our next experiment, we tried a subtler manipulation. Instead of exposing participants to explicit arguments about what neuroscience means for philosophical concepts like free will, we had them read articles that described neuroscientific research and let them draw their own conclusions about the implications for free will and responsibility. The articles, made to look like they were from Scientific American and New Scientist, reviewed recent studies revealing dissociations between people’s motor actions and their perception of conscious intention, but made no explicit reference to concepts like free will or determinism. And as with the heavy-handed Crick passage, exposure to these neuroscience articles led people to reduce their recommendations for retributive punishment. Further, this change was mediated by a reduction in attributions of blameworthiness. As Greene and Cohen had predicted, knowledge of neuroscience changed people’s attitudes about what is blameworthy, and thus disinclined people to punish simply for the sake of punishment.
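For readers curious about what that mediation claim means statistically, here is a minimal sketch, with simulated data and assumed variable names rather than our actual measures, of how an indirect effect of article condition on recommended punishment, running through blameworthiness ratings, could be estimated.

```python
# Illustrative mediation sketch: simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
condition = rng.integers(0, 2, n)                     # 0 = control article, 1 = neuroscience article
blame = 5 - 1.0 * condition + rng.normal(0, 1, n)     # blameworthiness rating
sentence = 2 * blame + rng.normal(0, 1, n)            # recommended sentence (years)
df = pd.DataFrame({"condition": condition, "blame": blame, "sentence": sentence})

# Path a: effect of article condition on the proposed mediator (blameworthiness).
a = smf.ols("blame ~ condition", df).fit().params["condition"]
# Path b: effect of blameworthiness on punishment, controlling for condition.
b = smf.ols("sentence ~ blame + condition", df).fit().params["blame"]

# Bootstrap the indirect effect (a * b) to get a rough confidence interval.
boot = []
for _ in range(1000):
    s = df.sample(len(df), replace=True)
    a_i = smf.ols("blame ~ condition", s).fit().params["condition"]
    b_i = smf.ols("sentence ~ blame + condition", s).fit().params["blame"]
    boot.append(a_i * b_i)
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {a * b:.2f}, 95% CI [{lo_ci:.2f}, {hi_ci:.2f}]")
```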

The last, and my favorite, of the studies took our prediction to a venue where many (including many of us) first developed a familiarity with the natural basis of our behavior: the Introductory Psychology classroom. At the University of Oregon, we divide our introductory sequence into a social/personality/developmental/clinical class (“Mind and Society”) and a cognitive neuroscience class (“Mind and Brain”). We predicted that over the course of taking Mind and Brain, students would (hopefully) make a significant leap in their neuro-literacy and, as a consequence, find their attitudes toward moral responsibility changing as well. This is indeed what happened. Students completed a questionnaire on the first day of class and an identical one on the last day. The average prison sentence recommendation made by students decreased significantly over this period. On the questionnaires, students were also asked to indicate how much they felt they knew about the brain, relative to other UO students. The more this number had increased from a student’s Time 1 to Time 2 questionnaire (that is, the more they felt they had increased their knowledge of the brain), the more their recommended punishment decreased. We had also run the same procedure in a geography class, where students did not learn about the brain, and we saw none of these effects. So not only does it appear that students are learning the material in our intro classes, but they are also drawing connections between this material and their fundamental attitudes about morality and responsibility. Regardless of one’s own commitments to free will or retributivism, this is at least heartening from the perspective of an instructor.
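As a rough illustration of that classroom analysis (a sketch with simulated numbers and assumed variable names, not the actual class data), the two key tests are a within-student comparison of first-day and last-day sentence recommendations, and a correlation between each student’s change in self-rated brain knowledge and the change in their recommendations.

```python
# Illustrative pre/post sketch: simulated data, hypothetical variable names.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120

# Hypothetical first-day (T1) and last-day (T2) measures for the "Mind and Brain" class.
knowledge_t1 = rng.normal(50, 10, n)                  # self-rated brain knowledge
knowledge_t2 = knowledge_t1 + rng.normal(8, 5, n)     # knowledge tends to rise over the term
sentence_t1 = rng.normal(10, 3, n)                    # recommended sentence (years)
sentence_t2 = sentence_t1 - 0.1 * (knowledge_t2 - knowledge_t1) + rng.normal(0, 2, n)

# Did recommended sentences drop from the first day of class to the last?
t, p = stats.ttest_rel(sentence_t1, sentence_t2)
print(f"paired t = {t:.2f}, p = {p:.3f}")

# Did students whose self-rated brain knowledge rose the most reduce their recommendations the most?
r, p = stats.pearsonr(knowledge_t2 - knowledge_t1, sentence_t1 - sentence_t2)
print(f"change-score correlation: r = {r:.2f}, p = {p:.3f}")
```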

Hundreds of thousands of people take introductory psychology classes every year, but knowledge of neuroscience is spreading beyond just those with formal training. Kanye West now name-drops brain structures in his rap songs. Will free will beliefs entirely disappear, and with them any motivation for retributivism? Will we abolish what Skinner called the “autonomous man — the inner man, the homunculus, the possessing demon, the man defended by the literatures of freedom and dignity”?

Unlikely. Between the illusory but intimate experience of conscious will and the emotional appeal of retribution, these concepts are likely to stick around. But that doesn’t mean that, as our understanding of the brain and of free will is sharpened, we won’t get closer to a system of justice that is more scientifically defensible. To some degree, it’s already happening. We no longer ascertain guilt by seeing how well the accused stand up to torture or whether they drown when thrown into the sea. We now distinguish the punishment meted out to those whose criminal rationality was compromised by mental illness or immaturity from the punishment given to those who were fully culpable. And neuroscience evidence is increasingly being cited, and having an impact, in the legal sphere. Duke bioethicist Nita Farahany found a 300% increase between 2005 and 2011 in the number of judicial opinions that mentioned neuroscience evidence. Even if we never reach Skinner’s utopia, I expect Farahany’s trend to continue. The story of legal punishment has been a long one, but along the way, in fits and starts, improvements in scientific understanding have helped bend it toward justice.