Natural Rationality | decision-making in the economy of nature

7/27/07

The moral stance: a brief introduction to the Knobe effect and similar phenomena

An important discovery in the new field of Experimental Philosophy (or "x-phi", i.e., "using experimental methods to figure out what people really think about particular hypothetical cases" -Online dictionary of philosophy of mind) is the importance of moral beliefs in intentional action attribution. Contrast these two cases:

[A] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’ The chairman of the board answered, ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was harmed.
[B] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.’ The chairman of the board answered, ‘I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was helped.
A and B are identical, except that in one case the program harms the environment and in the other it helps it. Subjects were asked whether the chairman of the board intentionally harmed (A) or helped (B) the environment. Since the two cases have the same belief-desire structure, both actions should be seen as equally intentional, whether the outcome is good or bad. It turns out that in the "harm" version, most people (82%) say that the chairman intentionally harmed the environment; in the "help" version, only 23% say that he intentionally helped it. This effect is called the "Knobe effect", after the philosopher Joshua Knobe, who discovered it. In a nutshell, it means that

people’s beliefs about the moral status of a behavior have some influence on their intuitions about whether or not the behavior was performed intentionally (Knobe, 2006, p. 207)
Knobe replicated the experiment with different populations and methodologies, and the result is robust (see his 2006 paper for a review). It seems that humans are more prone to think that someone is responsible for an action if its outcome is morally wrong. Contrary to common wisdom, the folk-psychological concept of intentional action does not aim--or not primarily--at explaining and predicting action, but at attributing praise and blame. There is something morally normative in saying that "A does X".

A related post on the x-phi blog by Sven Nyholm describes a similar experiment. The focus was not intention, but happiness. The two versions were:

[A] Richard is a doctor working in a Red Cross field hospital, overseeing and carrying out medical treatment of victims of an ongoing war. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or wounded in the war don’t deserve to die, and that their well-being is of great importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
[B] Richard is a doctor working in a Nazi death camp, overseeing and carrying out executions and nonconsensual, painful medical experiments on human beings. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing that he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or experimented on don’t deserve to live, and that their well-being is of no importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
Subjects were asked whether they agreed or disagreed with the sentence "Richard is happy" (on a scale from 1 = disagree to 7 = agree). Subjects slightly agreed (4.6/7) in the morally good condition (A), whereas they slightly disagreed (3.5/7) in the morally bad condition (B); the difference is statistically significant. Again, the concept of "being happy" is partly moral-normative.

A related phenomenon has been observed in a recently published experimental study of generosity: generous behavior is also influenced by moral-normative beliefs (Fong, 2007). In this experiment, donors had to decide how much of a $10 "pie" they wanted to transfer to a real-life welfare recipient, keeping the rest (thus it is a Dictator game). They read information about the recipients, who had filled out a questionnaire beforehand asking about their age, race, gender, etc. The three recipients had similar profiles, except for their motivation to work, as captured by the last three questions:

  • If you don't work full-time, are you looking for more work? ______Yes, I am looking for more work. ______No, I am not looking for more work.
  • If it were up to you, would you like to work full-time? ______Yes, I would like to work full-time. ______No, I would not like to work full-time.
  • During the last five years, have you held one job for more than a one-year period? Yes_____ No_____
one replied Yes to all three ("industrious"), one replied No ("lazy"), and the other did not reply ("low-information"). Donors made their decisions and the money was transferred for real (by the way, that's one thing I like about experimental economics: there is no deception, no as-if: real people receive real money). Results:

The lazy, low-information, and industrious recipients received an average of $1.84, $3.21, and $2.79, respectively. The ant and the grasshopper! ("You sang! I'm at ease / For it's plain at a glance / Now, ma'am, you must dance."). As the author says:

Attitudinal measures of beliefs that the recipient in the experiment was poor because of bad luck rather than laziness – instrumented by both prior beliefs that bad luck rather than laziness causes poverty and randomly provided direct information about the recipient's attachment to the labour force – have large and very robust positive effects on offers
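The structure of the game is simple enough to sketch in a few lines. Here is a minimal illustration of the Dictator game split, using the mean offers reported above (the function and variable names are my own, not from the paper):

```python
# Minimal sketch of the Dictator game in Fong (2007):
# the donor splits a $10 pie; the recipient keeps whatever is transferred.

PIE = 10.00

def dictator_split(offer: float, pie: float = PIE) -> tuple[float, float]:
    """Return (donor_keeps, recipient_gets) for a given offer."""
    if not 0 <= offer <= pie:
        raise ValueError("offer must be between 0 and the pie")
    return pie - offer, offer

# Mean offers by recipient profile, as reported in the study:
mean_offers = {"lazy": 1.84, "low-information": 3.21, "industrious": 2.79}

for profile, offer in mean_offers.items():
    kept, given = dictator_split(offer)
    print(f"{profile}: donor keeps ${kept:.2f}, recipient gets ${given:.2f}")
```

Note that, unlike in the Ultimatum game, the recipient has no move at all here, so any positive transfer already departs from the narrowly self-interested prediction of keeping the whole pie.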

[In another research paper, Prof. Fong also found different biases in giving to Katrina victims.]

An interesting--and surprising--finding of this study is that this "ant effect" ("you should deserve help") was stronger in people who scored higher on humanitarianism beliefs. They did not give more than others when recipients were deemed to be poor because of laziness (another reason not to trust what people say about themselves, and to look at their behavior instead). Again, a strong moral-normative effect on beliefs and behavior. Since oxytocin increases generosity (see this post), and this effect is due to the greater empathy induced by oxytocin, I am curious to see whether people in the lazy vs. industrious experiment would, after oxytocin inhalation, become more sensitive to the origin of poverty (bad luck or laziness). If bad luck inspires more empathy, then I guess yes.

Man the moral animal?

Morality seems to be deeply entrenched in our social-cognitive mechanisms. One way to understand all these results is to posit that we routinely and usually interpret each other from the "moral stance", not the intentional one. The "intentional stance", as every Philosophy of Mind 101 course teaches us, is the perspective we adopt when we deal with intentional agents (agents who entertain beliefs and desires): we explain and predict their actions based on their rationality and the mental representations they should have, given the circumstances. In other words, it is the basic toolkit for being a game-theoretic agent. Philosophers (Dennett in particular) contrast this stance with the physical and the design stances (adopted when we talk about a falling apple or the function of the "Ctrl" key on a computer, for instance). I think we should introduce a related stance, the moral stance. Maybe--but research will tell--this stance is more basic. We switch to the purely intentional stance when, for instance, we interact with computers in experimental games. Remember how subjects don't care about being cheated by a computer in the Ultimatum Game: they have no aversive feeling (i.e., insular activation) when the computer makes an unfair offer (see the discussion in this paper by Sardjevéladzé and Machery). Hence they don't use the "moral stance", but they still use the intentional stance. Another possibility is that the moral stance explains why people deviate from standard game-theoretic predictions: all these predictions are based on intentional-stance functionalism, a stance that applies better to animals, psychopaths, or machines than to normal human beings.
And also to groups: in many games, such as the Ultimatum or the Centipede, groups behave more "rationally" than individuals (see Bornstein et al., 2004; Bornstein & Yaniv, 1998; Cox & Hayne, 2006); that is, they are closer to game-theoretic behavior (a point radically developed in the movie The Corporation: firms lack moral qualities). Hence the moral stance may have particular requirements (individuality, emotions, empathy, etc.).
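The game-theoretic prediction that the "moral stance" pulls us away from can be made concrete. Here is a hedged sketch (function names are mine) of the standard subgame-perfect reasoning for the one-shot Ultimatum game: a purely payoff-maximizing responder accepts any positive offer, so backward induction tells the proposer to offer the smallest possible amount.

```python
# Subgame-perfect prediction for the one-shot Ultimatum game:
# something beats nothing, so a payoff-maximizing responder
# accepts any positive offer, and the proposer exploits this.

def responder_accepts(offer: int) -> bool:
    """A purely self-interested responder: accept anything positive."""
    return offer > 0

def proposer_offer(pie: int, smallest_unit: int = 1) -> int:
    """Backward induction: offer the least the responder will accept."""
    for offer in range(smallest_unit, pie + 1, smallest_unit):
        if responder_accepts(offer):
            return offer
    return 0

print(proposer_offer(10))  # prediction: the minimal offer, 1
```

Human responders, of course, typically reject such lopsided offers from human proposers (while accepting them from computers), which is exactly the gap between the intentional-stance prediction and morally-mediated behavior discussed above.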

