In the past two decades, social science has painted a pretty dour picture of the power of moral reasoning. To explain why people disagree so profoundly about ethical and political questions, pundits and scientists have claimed that humans systematically disregard evidence from experts, and that we rely on gut feelings instead of reason. If true, these claims have serious and depressing implications. Why should politicians rely on logic or scientific evidence, if humans rarely reason about moral and political issues? Against this backdrop, it was hardly surprising when a leading psychologist told a Washington Post columnist in 2011 that it ‘is important for the president not to be rational and fully honest’.
According to this pessimistic view, most of our moral judgments spring from automatic, unconscious and affective reactions. When we feel disgust toward someone, our disgust is what leads us to condemn their actions. Moral reasoning, on this theory, rarely shapes our moral judgments; instead, it serves to justify our emotion-based judgments after the fact.
In one well-known study from 2005, researchers hypnotised participants to be disgusted by a seemingly innocuous act: a student trying to select popular topics for school debates. Participants were then asked how morally wrong the student’s actions were. Those hypnotised to be disgusted rated the action as morally worse than their peers did. The researchers reported that disgusted participants were unable to provide compelling reasons for why the student’s action was wrong. Reflecting on these results, the researchers concluded that they illustrated how ‘reason is, and ought only to be, the slave of the passions’, as the philosopher David Hume wrote in A Treatise of Human Nature (1739-40). Indeed, if the mere feeling of disgust can lead to moral condemnation, reasoning would be relegated to a supporting role, at best.
But is this pessimistic perspective the right one? To sort out the place of reason in human morality, we need some clarity about just what we mean by moral reasoning. The philosopher Jonathan Adler, in the introductory chapter to his edited 1,000-page tome Reasoning (2012), defined the titular process as ‘a transition in thought, where some beliefs (or thoughts) provide the ground or reason for coming to another’. Moral reasoning is a specific type of reasoning, by which moral principles provide the grounds for moral judgments.
Consider the following illustration: I think that intentionally hurting others is generally wrong (a moral principle), and I believe that my friend has intentionally hurt someone (a belief). If this combination of principle and belief leads me to judge that my friend has done something wrong, I will have engaged in moral reasoning. Note that this definition doesn’t require that reasoning be conscious or slow. In fact, most researchers who study reasoning believe that it can be both slow and fast, conscious and unconscious.
Three-year-olds protested vigorously when a puppet tried to destroy a picture that another puppet had drawn
To ask whether people reason about moral issues, we need to answer two kinds of questions. First, what kinds of moral principles and beliefs do people hold at the outset? And second, do people form moral judgments that align with those prior principles and beliefs? It turns out that they do, from a surprisingly young age.
For decades, research on children – unlike research on adults – has overwhelmingly concluded that participants do reason about moral issues. (Strangely, psychological research often portrays children more favourably than it does adults.) In one classic study from the 1980s, researchers interviewed six- to 10-year-old children in the United States. They asked about several fictional moral violations: for instance, a child who pushed another child off the top of a slide. When asked why pushing was wrong, children typically explained that it could hurt the victim. Accordingly, most children said that pushing would still be wrong even if adults had given permission. That is, children embraced the principle that pushing was wrong because it caused harm and, consistent with this principle, judged that pushing was wrong, whether adults gave permission or not.
A different pattern of reasoning emerged when the researchers interviewed children about violations of social conventions, such as a child eating dinner with her fingers. In the interviews, the children typically explained that this was wrong because it went against adult prohibitions or traditions. Accordingly, most children said that eating dinner with your fingers would be OK if adults gave permission. Here, children expressed the principle that eating with your fingers is wrong insofar as it violates social norms – and, consistent with this principle, judged that eating with your fingers was OK when there were no social norms against it. For both moral and conventional violations, children reasoned by making judgments based on their general principles.
Since then, hundreds of studies have shown that children form judgments based on processes of moral reasoning. Children as young as three to four years reason that hitting or stealing is wrong because it violates fundamental moral principles. Children engage in moral reasoning not only about fictional stories but also about video recordings of real-life conflicts in preschools. And children’s judgments spur actions and emotional reactions when they perceive a moral violation. In one study, three-year-olds protested vigorously when a puppet tried to destroy a picture that another puppet had drawn. Most children protested against the moral violation, for instance by telling the destructive puppet to stop. In short, we know that, from preschool-age onwards, children form moral judgments based on moral principles that they can articulate. They engage in moral reasoning.
The capacity for moral reasoning, far from disappearing, continues to develop as children grow up. By adolescence, they can reason about highly complex moral issues, such as societal inequalities or life-and-death dilemmas. A few years ago, my colleagues and I interviewed adolescents and adults about a series of dilemmas known as ‘trolley problems’. In one trolley problem, you’re standing on a footbridge spanning a train track, next to a large stranger. A runaway train is hurtling down the track toward five railway workers, who will be killed unless you intervene. The only way to save the five workers is to push the stranger off the footbridge and onto the track, which would stop the train, save the workers, but kill the stranger. Is it OK to push the stranger onto the track?
Most participants judged that it would not be OK to push the stranger. The judgment that it was wrong to sacrifice one life to save five others seemed irrational to many scientists: psychological and neuroscientific research on trolley problems was read as revealing that moral judgments were based on automatic, emotional reactions – consistent with the pessimistic view of moral reasoning. Yet few studies directly investigated whether people reasoned about the trolley problems.
No amount of disgust can make you judge that saving a drowning child is wrong
Our analyses of adolescents’ and adults’ responses offered extensive evidence of moral reasoning about trolley problems. A striking feature of participants’ reasoning was their ability to balance competing moral principles. Participants who judged that it would be wrong to sacrifice the stranger to save the five workers didn’t fail to count the number of lives involved. Rather, they decided to prioritise other moral principles at play. Many participants thought it was wrong to bring the innocent stranger onto the bridge and into a situation of mortal danger. One participant explained that it would be wrong to ‘kill that person who isn’t a railroad worker, who has nothing to do with railroads or trains’. A follow-up study showed that participants’ judgments were indeed guided by moral reasoning about innocence. When the stranger on the footbridge was not an innocent victim but a person who’d set off the runaway train in the first place, most people thought it would be OK to push him off to save the five workers. Our participants formed judgments based on their expressed moral principles.
Our capacity to reason about competing moral principles develops from childhood to adulthood. But the ability to form moral judgments based on the principles we claim to endorse is evident from preschool to adulthood. Sometimes we violate our expressed principles – recently, our lab has studied academic cheating, which most students do despite thinking that cheating is generally wrong. And sometimes we violate one moral principle in order to honour another. Still, principles about how we ought to treat others remain a powerful guide to our moral judgments, emotions and actions across the lifespan.
What about all the empirical evidence that people rely on gut feelings rather than reasoning? Recently, the case against moral reasoning has begun to unravel. It turns out that the effects of gut feelings on moral judgments range from small to nonexistent. Even if being disgusted makes you judge moral violations slightly more harshly, no amount of disgust can make you judge that saving a drowning child is wrong. Critics have also argued that studies purporting to show that adults are unable to explain their moral judgments – so-called ‘moral dumbfounding’ – suffered from methodological limitations. When those limitations were removed, researchers found little or no evidence for moral dumbfounding. Lastly, although emotions are integral to our moral sense, emotions and thoughts are more intertwined than researchers once assumed.
The pessimistic view of moral reasoning sought to explain why people disagree about moral and political issues, such as immigration or abortion. It suggested that people disagree because they have automatic emotional reactions that are insensitive to reasons. However, moral disagreements and emotions can occur alongside reasoning. Even scientists, who dedicate their lives to reasoning about evidence, can passionately disagree.
Research on moral reasoning offers alternative explanations for such disputes. One major contributor is the divergent factual beliefs that underlie moral reasoning. Viewers of Fox News are exposed to dramatically different facts than are viewers of CNN. (Sadly, promoting false information can be lucrative, both financially and politically.) These factual beliefs inform our moral and political reasoning. If you genuinely believe that a wall on the border with Mexico will create jobs and prevent crime in the US, you will likely judge the wall more favourably than if you believe that a wall would be not only inhumane but ineffective. A second contributor to disagreement is the way in which people weigh competing principles. In debates about abortion, both pro-choice and pro-life advocates recognise the worth of the unborn and the welfare of the mother. The disagreement isn’t about whether those two competing considerations are morally relevant, but about how to balance them against each other. Research on the development of moral reasoning can shed light on how people prioritise moral principles as they go from childhood to adulthood.
Recognising the place of moral reasons, not just emotions, offers a starting point for mutual understanding. Despite our inevitable disagreements, humans show a shared capacity for moral reasoning from a remarkably early age. Without this capacity, notions of human rights and social justice would be unimaginable. But with this capacity, what Martin Luther King called ‘the arc of the moral universe’ can, slowly, continue to zigzag toward justice.