One of the more interesting things I read during Canadian Atheist’s recent stint in the infirmary was The Righteous Mind: Why Good People Are Divided by Politics and Religion, by the American psychologist Jonathan Haidt. It’s an intellectually rich book that covers a lot of ground, but Haidt builds it around three main claims: that people tend to make moral judgements intuitively and subsequently try to back them up with rational arguments, that moral intuitions involve several basic values which individuals may hold to different degrees, and that shared “moral narratives” can both “bind” people into cooperative groups and “blind” them to alternative narratives that may be preferred by other groups.
What Haidt is doing here is moral psychology, not moral philosophy. He’s neither arguing for some theory of objective goodness nor trying to convince his readers that no such theory could be valid, but rather investigating how real people make moral decisions and theorizing about why the human moral sense works as it does. The fact that Haidt is an avowed atheist, albeit a mild-mannered one who thinks religion has considerable social value, is critical at this juncture. He sees human morality as a product of biological and cultural evolution, rather than a divine gift or a set of truths stamped on the fabric of the universe. From this clear-eyed perspective he succeeds in drawing conclusions about the nature and origin of morality that are interesting and provocative, and in most major respects I find them highly convincing.

There’s no hope of squeezing a reasonable summary of Haidt’s views and my responses to them into a single post, so I’ll devote a separate one to each of his major claims. The first, that morality is functionally based more on emotion and intuition than conscious reasoning, is a conclusion that I find easy to accept because I’ve long held a similar view myself. However, Haidt makes the case beautifully and comes equipped with actual data to back up his points, which is definitely an advantage.
The data arise from experiments that Haidt and his students and collaborators conducted over the years, most of which involved telling people stories and having them decide whether the character(s) in the story had committed a moral violation. The most striking finding Haidt mentions is that experimental subjects are often “morally dumbfounded” by stories about behaviours that violate social taboos but are not clearly harmful to anyone, such as sex with a dead chicken or consensual, non-procreative incest between an adult brother and sister – that is, they quickly judge the characters’ actions to be immoral, but then struggle to articulate reasons beyond empty statements like “I just think that’s wrong”. As Haidt puts it:
[I]t’s obvious that people were making a moral judgment immediately and emotionally. Reasoning was merely the servant of the passions, and when the servant failed to find any good arguments, the master did not change his mind.
Haidt also notes that distracting people with concurrent tasks or requiring them to respond to questions instantly doesn’t affect their moral judgement, but that manipulating their environment in seemingly irrelevant ways sometimes does. There appears to be a link between thinking about morality and thinking about cleanliness, such that people who are asked to wash their hands before answering a questionnaire tend to be “more moralistic about issues related to moral purity (such as pornography and drug use)”. All this adds to the case that moral decisions are not usually made by deliberate conscious reasoning.
Haidt is big on metaphors, even if he doesn’t deploy them with quite the virtuosity of a Dennett or a Dawkins. He compares the rational mind, in the context of moral decision-making, to a rider perched on top of an elephant that represents emotion and intuition. The rider has a bit of influence, but the elephant is a powerful animal that will go more or less where it wants to, using the poor rider as “a full-time public relations firm” to justify its actions after the fact:
Automatic processes run the human mind, just as they have been running animal minds for 500 million years, so they’re very good at what they do, like software that has been improved through thousands of product cycles. When human beings evolved the capacity for language and reasoning at some point in the last million years, the brain did not rewire itself to hand over the reins to a new and inexperienced charioteer. Rather, the rider (language-based reasoning) evolved because it did something useful for the elephant.
As Haidt acknowledges, this is not a particularly new perspective. He barely mentions Freud in this connection, but his mighty elephant sounds a lot like Freud’s mighty unconscious. Haidt does quote David Hume’s remark that “reason is, and ought only to be the slave of the passions”, but insists on changing “slave” to the “less offensive and more accurate term servant” when subsequently referring back to the idea, as in the first long quote from the book given above. May Russell’s Holy Teapot preserve us from such squeamishness.
Of course, prudent masters listen to their slaves from time to time. Surely it’s not true that moral judgements are purely intuitive and emotional – Haidt himself suggests that “an educated and politically liberal Westerner” would probably be inclined to evince some distaste for sex with a dead chicken or whatever, but nevertheless hesitate to condemn such outré but literally harmless actions as immoral. That’s a case of the rider’s reasoned judgement overcoming the elephant’s instinctive aversion. Similarly, Tauriq Moosa wrote a few months ago about being “disgusted” by hunting (yes, squeamishness is everywhere these days) but “uncertain that it’s actually always wrong”, thus demonstrating his ability to rein in his rampaging elephant. Haidt acknowledges this sort of thing in a section of the book called “Elephants are sometimes open to reason”, but says explicitly that he thinks it’s “rare” for people to “reason their way to a moral conclusion that contradicts their initial intuitive judgment” without being persuaded by others or being morally conflicted at an intuitive level. I’m skeptical that it’s really all that rare, but I also believe that whatever principles one might use to overturn an intuitive judgement must themselves be adopted ultimately on the basis of emotion rather than reason.
At the end of the day, then, I’m basically with Hume and Haidt on this point. With no magic fruit on hand to provide Knowledge of Good and Evil, we unfortunate mortals need to fall back on our own judgement, and the foundations of that judgement will necessarily be emotional, intuitive and subjective. It’s a good thing that all humans have broadly similar cognitive equipment, except in rare pathological cases, and are therefore likely to arrive at moral judgements that are at least intelligible to other members of the species. Jonathan Haidt’s book has quite a bit to say about how the elephant-dominated system of human moral decision-making works in practice, but that will have to wait for the next post.