In my first post about the American psychologist Jonathan Haidt’s book “The Righteous Mind: Why Good People Are Divided by Politics and Religion”, I explained that I was more or less in agreement with Haidt’s conclusion that human moral decision-making is driven more by emotion and intuition than by conscious reasoning. This idea is one of three major claims presented in Haidt’s book. In this post I’ll delve into the second, that intuitive moral decisions are driven by a suite of several basic values that individuals may hold to different degrees.
This is, of course, where Haidt goes beyond trying to establish that morality is intuitive and gets into the nitty-gritty of how moral intuition actually operates. His approach to this question is rooted in evolutionary psychology. Haidt describes how he and a colleague assumed that the human brain would contain evolved “cognitive modules” that mediated moral behaviour by triggering appropriate responses to certain social situations:
Craig [Joseph] and I tried to identify the best candidates for being the universal cognitive modules upon which cultures construct moral matrices. We therefore called our approach Moral Foundations Theory. We created it by identifying the adaptive challenges of social life that evolutionary psychologists frequently wrote about and then connecting those challenges to virtues that are found in some form in many cultures.
If this all sounds a bit obscure, an example should prove illuminating. One of Haidt and Joseph’s moral foundations, corresponding to a candidate cognitive module, is called “Care/Harm” (each foundation is named after a virtue and an opposing vice or evil). Haidt and Joseph thought a module like this might be needed to cope with the adaptive challenge of protecting children. They guessed that such a module would originally have been triggered by “[s]uffering, distress or neediness expressed by one’s child” but in modern life could be triggered by “baby seals” (rather unfortunately for Canada, of course) and “cute cartoon characters”. The characteristic emotion associated with activation of the module would be compassion, and the virtues linked to it would be caring and kindness. It seems worth noting at this point that, even if one doesn’t buy the evolutionary psychology framework of modules, triggers and adaptive challenges, the Care/Harm foundation clearly describes something real and recognizable in human emotional life. Haidt and Joseph’s other four original foundations, “Fairness/Cheating”, “Loyalty/Betrayal”, “Authority/Subversion” and “Sanctity/Degradation”, ring similar bells.
Working with various collaborators, Haidt went on to collect data intended to illuminate how people really felt about the five foundations, ongoing work that relies on questionnaires. For example, a subject might be asked whether he or she agreed that “[o]ne of the worst things a person can do is to hurt a defenseless animal”, with a positive response taken to indicate concern for the Care/Harm foundation. By asking many questions like this, it’s possible to build up a profile describing which foundations are most important to a particular individual.
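The profile-building step is simple enough to sketch in code. This is purely my own illustration, not Haidt’s actual scoring procedure: I’m assuming, hypothetically, that each questionnaire item is tagged with the foundation it is meant to probe and answered on a 0–5 agreement scale, in which case a subject’s profile is just the average rating within each foundation:

```python
# Hypothetical sketch of aggregating questionnaire responses into a
# per-foundation moral profile (not the actual Moral Foundations scoring).
from collections import defaultdict

# Invented item names, each tagged with the foundation it probes.
ITEM_FOUNDATION = {
    "hurt_defenseless_animal": "Care",
    "everyone_treated_equally": "Fairness",
    "loyalty_to_family": "Loyalty",
    "respect_for_authority": "Authority",
    "some_acts_are_degrading": "Sanctity",
}

def foundation_profile(responses):
    """Average the 0-5 agreement ratings within each foundation."""
    ratings = defaultdict(list)
    for item, rating in responses.items():
        ratings[ITEM_FOUNDATION[item]].append(rating)
    return {f: sum(r) / len(r) for f, r in ratings.items()}

profile = foundation_profile({
    "hurt_defenseless_animal": 5,
    "everyone_treated_equally": 4,
    "loyalty_to_family": 2,
    "respect_for_authority": 1,
    "some_acts_are_degrading": 0,
})
```

A real instrument asks several items per foundation, so each average smooths over multiple questions; but the point of the sketch is just that the profile can only contain foundations the items were written to probe, which matters for the limitations discussed below.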
At this point, you might want to take a look at a website that Haidt and his associates use to collect data and, in the process, give people some insight into their own moral intuitions. After registering, one can complete a variety of surveys, including a “Moral Foundations Questionnaire” that gets at the five foundations mentioned above. It might be fun if you were to complete the survey and consider your own scores on each foundation (0 to 5, with a higher number indicating greater attachment to the foundation in question) before reading further.
What Haidt and his research partners have been finding, naturally, is that the five foundations resonate with different people to different degrees. There’s evidence for an interesting gap between liberals and conservatives, using those terms (as Haidt does) in an American sense. In the context of this five-foundation model of human morality, people who describe themselves as very liberal tend to be very concerned with Care and Fairness (taking only the virtuous half, of course, of the name of each foundation) and much less concerned with Loyalty, Authority and Sanctity. People who describe themselves as very conservative value all five foundations “more or less equally”. People in the middle are less dismissive of Loyalty, Authority and Sanctity than very liberal types, but still attach greater importance to Care and Fairness than to the other three foundations.
Of course, studies like this have their limitations and complications, many of which stem from the fact that the range of possible findings from any given questionnaire is inescapably circumscribed by the questionnaire’s design. To obtain data on the postulated Care/Harm foundation, it was necessary to include questions designed to trigger the cognitive module associated with that foundation, like the one about hurting defenceless animals (I suppose well-defended animals, like tortoises and armadillos, are fair game). If questions like that hadn’t been included, the questionnaire would have been “blind” to Care/Harm, and it’s hard not to wonder whether some perfectly good cognitive modules are in fact being left out of the questionnaires. It strikes me as suspicious, for example, that Haidt’s model doesn’t include moral foundations for courage, honesty or reverence for nature. Conversely, the questionnaires would yield perfectly good data on the Coke/Pepsi moral foundation if they included questions that had been competently designed to get at the burning issue of which soft drink a particular subject preferred. Another problem is that the labels attached to the foundations might not precisely match the cognitive buttons that the questions on the questionnaire are really pushing, or might at least be overly broad. Some people, presumably, are very caring towards their fellow humans but not inclined to wring their hands over what happens to animals, defenceless or otherwise.
Haidt describes, in the book, how he and his collaborators did in fact revise the model to account for a problem with the Fairness foundation. Largely as a result of feedback from economic conservatives, they came around to the view that Fairness was conflating two different concerns, namely proportionality and equality. Liberals cared about equality, whereas conservatives cared about “the fairness of the Protestant work ethic and the Hindu law of karma: People should reap what they sow”. In response, Haidt and his colleagues narrowed the Fairness/Cheating foundation down to the conservative notion of fairness as proportionality, and introduced a new, sixth foundation to cover the liberal idea of fairness as equality. Curiously, though, they called this new foundation not Equality/Inequality but rather Liberty/Oppression. Haidt suggests that the original trigger of the cognitive module linked to this foundation is “signs of attempted domination” by a powerful individual, but that current triggers “include almost anything that is perceived as imposing illegitimate restraints on one’s liberty, including government (from the perspective of the American right)” and yet “can expand to encompass the accumulation of wealth, which helps to explain the pervasive dislike of capitalism on the far left.” Haidt explicitly acknowledges that the Liberty/Oppression foundation appeals to liberals and conservatives in different ways, but one nevertheless wonders if the model might be better off with at least seven foundations (the original five plus Equality/Inequality and Freedom/Constraint) as opposed to only six.
Despite these criticisms, I find Haidt’s efforts to disentangle the basis of moral intuitions basically admirable. I strongly suspect that the new six-foundation model of human morality is far from the best possible one, but it surely facilitates the documentation of some real patterns, and some real psychological differences among people with different political views. I think the approach ought to be refined, rather than rejected.
As for my own intuitions, the Moral Foundations Questionnaire – still based, at least when I filled it out, on the five-foundation model – informed me that I was more conservative than the average conservative on Loyalty (4.0/5.0), Harm (1.5) and Fairness (2.2), more liberal than the average liberal on Sanctity (1.2; called “Purity” in the version of the questionnaire that I took) and somewhere in between average liberals and conservatives on Authority (2.9). Feel free to report your own scores in the comments, but do take them with a few grains of salt, as I do mine.