Natural Rationality | decision-making in the economy of nature

9/22/07

Strong reciprocity, altruism and egoism

Proponents of the Strong Reciprocity Hypothesis (i.e., Bowles, Gintis, Boyd, Fehr, Henrich, etc.; I will call them “The Collective”) claim that human beings are strong reciprocators: they are willing to sacrifice resources in order to reward fair behavior and punish unfair behavior, even if there is no direct or future reward. Thus we are, according to the Collective, innately endowed with pro-social preferences and an aversion to inequity. Those who advocate strong reciprocity take it to be a ‘genuine’ altruistic force, not explained by other motives. Strong reciprocity is here contrasted with weaker forms of reciprocity, such as cooperating with someone because of genetic relatedness (kinship), because one follows a tit-for-tat pattern (direct reciprocity), wants to establish a good reputation (indirect reciprocity), or displays signs of power or wealth (costly signaling). Thus our species is made, ceteris paribus, of altruistic individuals who tend to cooperate with cooperators and punish defectors, even at a cost. Behavioral economics showed how willing people are to cooperate in games such as the prisoner’s dilemma, the ultimatum game or the trust game: they do not cheat in the first, offer fair splits in the second and transfer money in the third.

Could it be possible, however, that this so-called altruism is instrumental? I don’t think it always is, but some cases require closer scrutiny. For instance, in the Ultimatum Game there is a perfectly rational and egoistic reason to make a fair offer such as a 50-50 split: it is, from one’s own point of view, the best solution to the trade-off between making a profit and proposing a split that the other player will accept. If you propose more, you lose money; if you propose less, you risk a rejection. In cultures that are not market-integrated, where a 20-80 split is not seen as unfair, proposers routinely offer such splits because they know they will be accepted.
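
To make the trade-off concrete, here is a minimal sketch of a proposer weighing profit against the risk of rejection; the acceptance-probability curve below is entirely made up for illustration, not taken from any dataset:

    # Minimal sketch: a proposer's expected payoff in a 10-unit ultimatum game.
    # The responder's acceptance probability is a made-up curve that rises as
    # the offer approaches half the pie.

    PIE = 10.0  # total amount to split

    def p_accept(offer):
        """Hypothetical probability that the responder accepts a given offer."""
        return min(1.0, max(0.0, offer / (PIE / 2)))

    def expected_payoff(offer):
        """Proposer's expected gain: what they keep, weighted by acceptance odds."""
        return (PIE - offer) * p_accept(offer)

    # Scan offers in 50-cent steps and pick the one with the best expected payoff.
    offers = [i * 0.5 for i in range(int(PIE / 0.5) + 1)]
    best = max(offers, key=expected_payoff)
    print(f"Best offer under this (made-up) acceptance curve: {best:.2f}")

Under this toy curve the self-interested optimum is the 50-50 split, which is exactly the point: a fair offer can be the egoistically best reply to the responder's expected behavior.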

Altruism can also be instrumental in a more basic sense, for instance in contributing to the propagation of our genes. Madsen et al. (2007) showed that individuals behave more altruistically toward their own kin when there is a significant genuine cost (such as pain), an attitude also mirrored in questionnaire studies (Stewart-Williams, 2007): as the cost of helping increases, subjects are more ready to help siblings than friends. Other studies showed that facial resemblance enhances trust (DeBruine, 2002). In each case, we see a mechanism whose function is to negotiate our investment in relationships so as to promote the copies of our genes housed in people who are our kin, look like our kin, or could help us expand our kin. Along the same lines, simply viewing lingerie or pictures of sexy women leads men to behave more fairly in the ultimatum game (Van den Bergh & Dewitte, 2006).

Much of this so-called altruistic behavior can be explained simply by the operation of hyperactive agency detectors and a bias toward fearing other people’s judgment. When they are not being watched, or do not feel watched, people behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game in which the responder has to accept the offer, subjects consistently make lower offers than in the ultimatum game (Bolton et al., 1998). Offers are even lower in the dictator game when the donation is fully anonymous (Hoffman et al., 1994). In the present framework, this is because there is no advantage in being fair.

When subjects feel watched, or merely think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley & Fessler, 2005), and people contribute three times more to an honesty box for coffee when a picture of a pair of eyes is posted rather than a picture of flowers (Bateson et al., 2006). Merely talking about ghosts enhances honest behavior in a competitive task (Bering et al., 2005), and priming subjects with the God concept increases giving in the anonymous dictator game (Shariff & Norenzayan, in press).

These reflections also apply to altruistic punishment. First, it is enhanced by an audience: Kurzban et al. (2007) showed that when a dozen participants were watching, punishment expenditure tripled. In the trust game, players apply learned social rules and trust-building routines, but they hate it when cheaters enjoy what they themselves refrain from enjoying; it thus feels good to reset the equilibrium. Again, apparent altruism is instrumental in personal satisfaction, at least on some occasions.

Hardy and Van Vugt, in their theory of competitive altruism, suggest that

individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable (Hardy & Van Vugt, 2006)

Maybe agents are attempting to maximize a complex hedonic utility function, in which rewards and losses can be monetary, emotional or social. A possible alternative approach is what I call ‘methodological hedonism’: let’s assume, at least for the purpose of identifying cognitive mechanisms, that the brain, when functioning normally, tries to maximize hedonic feelings, even in moral behavior. We use feelings to anticipate feelings in order to steer our behavior toward a maximization of positive feelings and a minimization of negative ones. The ‘hot logic’ of emotions is more realistic than the cold logic of traditional game theory but still preserves the idea of utility maximization (although “value” would be more appropriate). In this framework, altruistic behavior is possible, but it need not rely on altruistic cognition. Cognitive mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones. This initial hedonism is gradually modulated by social norms, through which agents learn how to maximize their utility given the norms. Luckily, however, biological and cultural evolution favored patterns of self-interest that promote social order to a certain extent: institutions, social norms, routines and cultures tend to structure our behavior morally. Thus understanding morality may amount to understanding how individuals’ egoism is modulated by social processes. There might be no need to posit an innate Strong Reciprocity. Or at least it is worth exploring other avenues!
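
As a toy illustration of such a hedonic utility function, here is a sketch in which monetary, emotional and social payoffs are weighted and summed; all weights, options and numbers are invented for illustration, not estimates from any study:

    # Toy illustration of 'methodological hedonism': an agent picks the action
    # that maximizes a hedonic utility mixing monetary, emotional and social payoffs.
    # All weights and payoff values below are invented.

    def hedonic_utility(monetary, emotional, social,
                        w_money=1.0, w_emotion=1.5, w_social=1.2):
        return w_money * monetary + w_emotion * emotional + w_social * social

    # Hypothetical dictator-game options: (money kept, guilt/warm glow, reputation)
    options = {
        "keep everything":   hedonic_utility(10.0, -3.0, -2.0),
        "give half":         hedonic_utility(5.0,  2.0,  2.0),
        "give a small part": hedonic_utility(8.0, -1.0,  0.0),
    }
    print(max(options, key=options.get))  # -> "give half" under these made-up numbers

The point of the sketch is only that 'altruistic' choices can fall out of a purely egoistic maximization once emotional and social payoffs enter the utility.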



Update:

I forgot to mention a thorough presentation and excellent criticism of Strong Reciprocity:

Important papers from the Collective are:
  • Bowles, S., & Gintis, H. (2004). The Evolution of Strong Reciprocity: Cooperation in Heterogeneous Populations. Theoretical Population Biology, 65(1), 17-28.
  • Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong Reciprocity, Human Cooperation, and the Enforcement of Social Norms. Human Nature, 13(1), 1-25.
  • Fehr, E., & Rockenbach, B. (2004). Human Altruism: Economic, Neural, and Evolutionary Perspectives. Curr Opin Neurobiol, 14(6), 784-790.
  • Gintis, H. (2000). Strong Reciprocity and Human Sociality. Journal of Theoretical Biology, 206(2), 169-179.


References


  • Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of Being Watched Enhance Cooperation in a Real-World Setting. Biology Letters, 12, 412-414.
  • Bering, J. M., McLeod, K., & Shackelford, T. K. (2005). Reasoning About Dead Agents Reveals Possible Adaptive Trends. Human Nature, 16(4), 360-381.
  • Bolton, G. E., Katok, E., & Zwick, R. (1998). Dictator Game Giving: Rules of Fairness Versus Acts of Kindness. International Journal of Game Theory, 27, 269-299.
  • DeBruine, L. M. (2002). Facial Resemblance Enhances Trust. Proc Biol Sci, 269(1498), 1307-1312.
  • Haley, K., & Fessler, D. (2005). Nobody’s Watching? Subtle Cues Affect Generosity in an Anonymous Economic Game. Evolution and Human Behavior, 26(3), 245-256.
  • Hardy, C. L., & Van Vugt, M. (2006). Nice Guys Finish First: The Competitive Altruism Hypothesis. Pers Soc Psychol Bull, 32(10), 1402-1413.
  • Hoffman, E., McCabe, K., Shachat, K., & Smith, V. (1994). Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior, 7, 346-380.
  • Kurzban, R., DeScioli, P., & O'Brien, E. (2007). Audience Effects on Moralistic Punishment. Evolution and Human Behavior, 28(2), 75-84.
  • Madsen, E. A., Tunney, R. J., Fieldman, G., Plotkin, H. C., Dunbar, R. I. M., Richardson, J.-M., & McFarland, D. (2007). Kinship and Altruism: A Cross-Cultural Experimental Study. British Journal of Psychology, 98, 339-359.
  • Shariff, A. F., & Norenzayan, A. (in press). God Is Watching You: Supernatural Agent Concepts Increase Prosocial Behavior in an Anonymous Economic Game. Psychological Science.
  • Stewart-Williams, S. (2007). Altruism among Kin Vs. Nonkin: Effects of Cost of Help and Reciprocal Exchange. Evolution and Human Behavior, 28(3), 193-198.
  • Van den Bergh, B., & Dewitte, S. (2006). Digit Ratio (2d:4d) Moderates the Impact of Sexual Cues on Men's Decisions in Ultimatum Games. Proc Biol Sci, 273(1597), 2091-2095.




7/31/07

Understanding two models of fairness: outcome-based inequity aversion vs. intention-based reciprocity

Why are people fair? Theoretical economics provides two generic models that fit the data. According to the first, inequity aversion, people are inequity-averse: they do not like situations in which one agent is disadvantaged relative to another. This model is based on consequences. The other model is based on intentions: although the consequences of an action are important, what matters here is the intention that motivates the action. I won't discuss which approach is better (it is an ongoing debate in economics); I just want to share an extremely clear presentation of the two positions, found on pages 38-39 of van Winden, F. (2007). Affect and Fairness in Economics. Social Justice Research, 20(1), 35-52:

In inequity aversion models (Bolton and Ockenfels, 2000; Fehr and Schmidt, 1999), which focus on the outcomes or payoffs of social interactions, any deviation between an individual's payoff and the equitable payoff (e.g., the mean payoff or the opponent's payoff) is supposed to be negatively valued by that individual. More formally, the crucial difference between an outcome-based inequity aversion model and the homo economicus model is that, in addition to the argument representing the individual's own payoff, a new argument is inserted in the utility function showing the individual's inequity aversion (social preferences), as in the social utility model (see, e.g., Handgraaf et al., 2003; Loewenstein et al., 1989; Messick and Sentis, 1985). The individual is then assumed to maximize this adapted utility function.

In intention-based reciprocity models it is not the outcomes of the interaction as such that matter, but the intentions of the players (Rabin, 1993; see also Falk and Fischbacher, 2006). The idea is that people want to reciprocate perceived (un)kindness with (un)kindness, because this increases their utility. Obviously, beliefs play a crucial role here. More formally, in this case, in addition to an individual's own payoff a new argument is inserted in the utility function incorporating the assumed reciprocity motive. As a consequence, if someone is perceived as being kind it increases the individual's utility to reciprocate with being kind to this other person. Similarly, if the other is believed to be unkind, the individual is better off by being unkind as well, because this adds to her or his utility. Again, this adapted utility function is assumed to be maximized by the individual.
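
For concreteness, here is a sketch of the best-known outcome-based model, the two-player Fehr-Schmidt (1999) inequity-aversion utility: own payoff, minus a penalty for disadvantageous inequity (weight alpha), minus a penalty for advantageous inequity (weight beta). The parameter values in the example are illustrative only:

    # Two-player Fehr-Schmidt (1999) inequity-aversion utility.
    # alpha weighs "envy" (the other gets more), beta weighs "guilt" (I get more).
    # The alpha/beta values used here are illustrative, not estimated parameters.

    def fehr_schmidt_utility(own, other, alpha=0.8, beta=0.3):
        envy  = max(other - own, 0.0)   # disadvantageous inequity
        guilt = max(own - other, 0.0)   # advantageous inequity
        return own - alpha * envy - beta * guilt

    # A responder facing an 8-2 split in a 10-unit ultimatum game:
    print(fehr_schmidt_utility(own=2.0, other=8.0))  # 2 - 0.8*6 = -2.8
    print(fehr_schmidt_utility(own=5.0, other=5.0))  # 5.0: an even split carries no penalty

Since rejecting yields (0, 0) and thus utility 0, a sufficiently inequity-averse responder prefers rejection to accepting the unequal split, which is how the outcome-based model accounts for costly rejections.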





7/27/07

The moral stance: a brief introduction to the Knobe effect and similar phenomena

An important discovery in the new field of Experimental Philosophy (or "x-phi", i.e., "using experimental methods to figure out what people really think about particular hypothetical cases" -Online dictionary of philosophy of mind) is the importance of moral beliefs in intentional action attribution. Contrast these two cases:

[A] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’ The chairman of the board answered, ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was harmed.
[B] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.’ The chairman of the board answered, ‘I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was helped.
A and B are identical, except that in one case the program harms the environment and in the other it helps it. Subjects were asked whether the chairman of the board intentionally harmed (A) or helped (B) the environment. Since the two cases have the same belief-desire structure, both actions should be seen as equally intentional, whether the outcome is right or wrong. It turns out that in the "harm" version, most people (82%) say that the chairman intentionally harmed the environment, while in the "help" version only 23% say that the chairman intentionally helped the environment. This effect is called the "Knobe effect", because it was discovered by philosopher Joshua Knobe. In a nutshell, it means that

people’s beliefs about the moral status of a behavior have some influence on their intuitions about whether or not the behavior was performed intentionally (Knobe, 2006, p. 207)
Knobe replicated the experiment with different populations and methodologies, and the result is robust (see his 2006 paper for a review). It seems that humans are more prone to think that someone is responsible for an action if the outcome is morally wrong. Contrary to common wisdom, the folk-psychological concept of intentional action does not--or not primarily--aim at explaining and predicting action, but at attributing praise and blame. There is something morally normative in saying that "A does X".

A related post on the x-phi blog by Sven Nyholm describes a similar experiment. The focus was not intention, but happiness. The two versions were

[A] Richard is a doctor working in a Red Cross field hospital, overseeing and carrying out medical treatment of victims of an ongoing war. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or wounded in the war don’t deserve to die, and that their well-being is of great importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
[B] Richard is a doctor working in a Nazi death camp, overseeing and carrying out executions and nonconsensual, painful medical experiments on human beings. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing that he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or experimented on don’t deserve to live, and that their well-being is of no importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
Subjects were asked whether they agreed or disagreed with the sentence "Richard is happy" (on a scale from 1 = disagree to 7 = agree). Subjects slightly agreed (4.6/7) in the morally good condition (A), whereas they slightly disagreed (3.5/7) in the morally bad condition (B), and the difference is statistically significant. Again, the concept of "being happy" is partly moral-normative.

A related phenomenon has been observed in a recently published experimental study of generosity: generous behavior is also influenced by moral-normative beliefs (Fong, 2007). In this experiment, donors had to decide how much of a $10 "pie" they wanted to transfer to a real-life welfare recipient, keeping the rest (thus it is a Dictator game). They read information about the recipients, who had filled out a questionnaire beforehand asking about their age, race, gender, etc. The three recipients had similar profiles, except for their motivation to work. To the last three questions:

  • If you don't work full-time, are you looking for more work? ______Yes, I am looking for more work. ______No, I am not looking for more work.
  • If it were up to you, would you like to work full-time? ______Yes, I would like to work full-time. ______No, I would not like to work full-time.
  • During the last five years, have you held one job for more than a one-year period? Yes_____ No_____
one replied Yes to all ("industrious"), one replied No ("lazy"), and the other did not reply ("low-information"). Donors made their decision and money was transferred for real (by the way, that's one thing I like about experimental economics: there is no deception, no as-if: real people receive real money). Results:

The lazy, low-information, and industrious recipients received an average of $1.84, $3.21, and $2.79, respectively. The ant and the grasshopper! ("You sang! I'm at ease / For it's plain at a glance / Now, ma'am, you must dance.") As the author says:

Attitudinal measures of beliefs that the recipient in the experiment was poor because of bad luck rather than laziness – instrumented by both prior beliefs that bad luck rather than laziness causes poverty and randomly provided direct information about the recipient's attachment to the labour force – have large and very robust positive effects on offers

[In another research paper Pr. Fong also found different biases in giving to Katrina victims.]

An interesting--and surprising--finding of this study is that this "ant effect" ("you should deserve help") was stronger in people who scored higher on humanitarianism beliefs: they did not give more than others when recipients were deemed to be poor because of laziness (another reason not to trust what people say about themselves, and to look at their behavior instead). Again, a strong moral-normative effect on beliefs and behavior. Since oxytocin increases generosity (see this post), and this effect is due to the greater empathy induced by oxytocin, I am curious to see whether people in the lazy vs. industrious experiment would, after oxytocin inhalation, become more sensitive to the origin of poverty (bad luck or laziness). If bad luck inspires more empathy, then I guess yes.

Man the moral animal?

Morality seems to be deeply entrenched in our social-cognitive mechanisms. One way to understand all these results is to posit that we routinely interpret each other from the "moral stance", not the intentional one. The "intentional stance", as every Philosophy of Mind 101 course teaches us, is the perspective we adopt when we deal with intentional agents (agents who entertain beliefs and desires): we explain and predict action based on their rationality and the mental representations they should have, given the circumstances. In other words, it's the basic toolkit for being a game-theoretic agent. Philosophers (Dennett in particular) contrast this stance with the physical and the design stances (which we take when we talk about an apple that falls or the function of the "Ctrl" key on a keyboard, for instance). I think we should introduce a related stance, the moral stance. Maybe--but research will tell us--this stance is more basic. We switch to the purely intentional stance when, for instance, we interact with computers in experimental games. Remember how subjects don't care about being cheated by a computer in the Ultimatum Game: they have no aversive feeling (i.e., insular activation) when the computer makes an unfair offer (see a discussion in this paper by Sardjevéladzé and Machery). Hence they don't use the "moral stance", but they still use the intentional stance. Another possibility is that the moral stance explains why people deviate from standard game-theoretic predictions: all these predictions are based on intentional-stance functionalism, a stance that applies more to animals, psychopaths or machines than to normal human beings. It also applies to groups: in many games, such as the Ultimatum or the Centipede, groups behave more "rationally" than individuals (see Bornstein et al., 2004; Bornstein & Yaniv, 1998; Cox & Hayne, 2006), that is, they are closer to game-theoretic behavior (a point radically developed in the movie The Corporation: firms lack moral qualities). Hence the moral stance may have particular requirements (individuality, emotions, empathy, etc.).





7/25/07

More than Trust: Oxytocin Increases Generosity

It has been known for a couple of years that oxytocin (OT) increases trust (Kosfeld et al., 2005): in the Trust game, players transferred more money after inhaling OT. Recent research suggests that it also increases generosity. In a paper presented at the meeting of the ESA (Economic Science Association, an empirically oriented economics society), Stanton, Ahmadi, and Zak (from the Center for Neuroeconomics Studies) showed that Ultimatum players in the OT group offered more money (21% more) than those in the placebo group--$4.86 (OT) vs. $4.03 (placebo).
They defined generosity as "an offer that exceeds the average of the MinAccept" (p. 9), i.e., the minimum acceptable offer stated by the "responder" in the Ultimatum. In this case, offers over $2.97 were categorized as generous. Again, OT subjects displayed more generosity: the OT group offered $1.86 (80% more) over the minimum acceptable offer, while placebo subjects offered $1.03 over it.
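
For concreteness, a tiny sketch of that generosity measure, using the $2.97 average MinAccept reported in the paper; the individual offers below are invented:

    # Sketch of the paper's generosity criterion: an offer is "generous" if it
    # exceeds the average minimum acceptable offer (MinAccept).
    # The $2.97 threshold is the one reported; the offers are hypothetical.

    THRESHOLD = 2.97
    offers = [4.86, 4.03, 2.50, 3.20]  # hypothetical individual proposer offers

    generous = [o for o in offers if o > THRESHOLD]
    surplus = [round(o - THRESHOLD, 2) for o in generous]
    print(f"generous offers: {generous}; amount over MinAccept: {surplus}")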


Interestingly, OT subjects did not turn into pure altruists: they made offers in the Dictator game (mean $3.77) similar to those of placebo subjects (mean $3.58, no significant difference). Thus the motive is neither direct nor indirect reciprocity (the Ultimatum games were blinded and one-shot, so there is no tit-for-tat or reputation involved here). Nor is it pure altruism (or "strong reciprocity"--see this post on the distinction between types of reciprocity), according to Stanton et al., because the threat of the MinAccept compels players to make fair offers. They conclude that generosity is enhanced because OT affects empathy. Subjects simulate the perspective of the other player in the Ultimatum, but not in the Dictator. Hence, generosity "runs" on empathy: in an empathizing context (the Ultimatum) subjects are more generous, but in a non-empathizing context they are not--in the Dictator, it is not necessary to know the other player's strategy in order to compute the optimal move, since her actions have no impact on the proposer's payoff. It would be interesting to see whether there is a different OT effect in basic vs. reenactive empathy (sensorimotor vs. deliberative empathy; see this post).

Interested readers should also read Neural Substrates of Decision-Making in Economic Games, the PhD thesis of one of the study's authors (Stanton), which describes many neuroeconomic experiments.

[Anecdote: I once asked members of the ESA why their society is called that, since all the presented papers were experimental and I thought the name should reflect the empirical nature of the conference. They replied judiciously: "Because we think that it's how economics should be done"...]


7/3/07

The reciprocal rat

Research on reciprocity classically identified three cooperation mechanisms: 
  • kin reciprocity (A helps B because A and B are genetically related)
  • direct reciprocity (A helps B because B has helped A before--"tit for tat")
  • indirect reciprocity (A helps B because B has helped C before)
Recently, many have suggested that we should also add a stronger type of reciprocity called, well, "Strong reciprocity". Strong reciprocators cooperate with cooperators, do not cooperate with cheaters, and are ready to punish cheaters even at a cost to themselves. Strong reciprocity is held to be a uniquely human phenomenon, and until now only kin and direct reciprocity had been observed in animals. In PLoS Biology, Rutte and Taborsky show that another reciprocity mechanism that could be shared by humans and other animals is present in rats: generalized reciprocity. Contrary to the other kinds of reciprocity, generalized reciprocity does not require individual identification: in kin, direct and indirect reciprocity, you first need to identify another agent as sharing genes with you, having helped you in the past, or having helped someone else in the past. Generalized reciprocity is more anonymous: if someone helped you in the past, you are more willing to help someone in the future, regardless of either agent's identity. People who find a coin in a phone booth, for instance, are more likely to help a stranger pick up dropped papers than control subjects who had not previously found money. Since you don't need to track and identify others' behavior, generalized reciprocity is less cognitively demanding, and hence probably more common in nature. In Rutte and Taborsky's experiment, a rat can pull a stick fixed to a baited tray and produce food (an oat flake) for its 'partner' (another rat); the partner is rewarded but not the 'giver'. It turns out that rats who had previously been helped were 20% more likely to help an unknown partner than rats who had not been helped. The rats followed an "anonymous generous tit-for-tat".
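
A minimal sketch of the generalized-reciprocity rule (help the next anonymous partner more often if you were just helped, whoever helped you); both probabilities are invented, and only the qualitative pattern mirrors the finding:

    import random

    # Generalized reciprocity as a simple decision rule: the chance of helping
    # the next (anonymous) partner rises after having been helped, regardless of
    # who the previous helper was. BASE_P and BOOST are invented numbers.

    BASE_P = 0.50   # hypothetical baseline probability of pulling the stick
    BOOST  = 0.10   # hypothetical extra probability after having just been helped

    def will_help(was_helped_last_round):
        p = BASE_P + (BOOST if was_helped_last_round else 0.0)
        return random.random() < p

    random.seed(1)
    trials = 10_000
    helped_rate   = sum(will_help(True)  for _ in range(trials)) / trials
    unhelped_rate = sum(will_help(False) for _ in range(trials)) / trials
    print(f"helping rate after being helped: {helped_rate:.2f} vs. {unhelped_rate:.2f}")

Note that the rule needs no memory of identities at all, which is why it is so much cheaper, cognitively, than direct or indirect reciprocity.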

  • Rutte, C., & Taborsky, M. (2007). Generalized Reciprocity in Rats. PLoS Biology, 5(7), e196. doi:10.1371/journal.pbio.0050196

  • Fehr, E., & Rockenbach, B. (2004). Human Altruism: Economic, Neural, and Evolutionary Perspectives. Curr Opin Neurobiol, 14(6), 784-790.
  • Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong Reciprocity, Human Cooperation, and the Enforcement of Social Norms. Human Nature, 13(1), 1-25.
  • Gintis, H. (2000). Strong Reciprocity and Human Sociality. Journal of Theoretical Biology, 206(2), 169-179.



5/18/07

The psychopath, the prisoner's dilemma and the invisible hand of morality

In the prisoner’s dilemma, the police hold, in separate cells, two individuals accused of robbing a bank. The suspects (let’s call them Bob and Alice) are unable to communicate with each other. The police offer them the following options: confess or remain silent. If one confesses – implicating his or her partner – and the other remains silent, the former goes free while the other gets a 10-year sentence. If they both confess, they each serve a 5-year sentence. If they both remain silent, the sentence is reduced to 2 years. The situation can be represented by the following payoff matrix (years in prison, Bob / Alice):

                            Alice confesses      Alice remains silent
    Bob confesses           5 / 5                0 / 10
    Bob remains silent      10 / 0               2 / 2

Assuming that Bob and Alice have common knowledge – everybody knows that everybody knows that everybody knows, etc., ad infinitum – of each other’s rationality and of the rules of the game, they should both confess. Each will expect the other to make the best move, which is confessing, since confessing leads either to freedom or to a 5-year sentence, while remaining silent leads either to a 2-year or a 10-year sentence. The best reply to confessing is also confessing, so we expect Bob and Alice to confess. Even though they would both be better off remaining silent, that choice is individually risky: each risks a 10-year sentence if the other does not remain silent. In other words, they should not choose the cooperative move.
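
Here is a small sketch of that dominance argument, using the sentences from the story (each player minimizes years in prison):

    # Dominance check for the prisoner's dilemma described above.
    # Each player minimizes prison time; "confess" is the best reply whatever
    # the other does, so mutual confession is the predicted outcome.

    SENTENCES = {  # (my move, other's move) -> my years in prison
        ("confess", "confess"): 5,
        ("confess", "silent"):  0,
        ("silent",  "confess"): 10,
        ("silent",  "silent"):  2,
    }

    def best_reply(other_move):
        return min(("confess", "silent"), key=lambda my: SENTENCES[(my, other_move)])

    for other in ("confess", "silent"):
        print(f"if the other plays {other!r}, best reply is {best_reply(other)!r}")
    # Both lines print 'confess': confessing strictly dominates staying silent.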

Experimental game theory indicates that subjects cooperate massively in the prisoner’s dilemma. Recently, neuroeconomics showed that players enjoy cooperating, what economists refer to as the “warm glow” of giving. In the prisoner’s dilemma, players who initiate and players who experience mutual cooperation display activation in the nucleus accumbens and other reward-related areas (Rilling et al., 2002).

In a new paper, Rilling and his collaborators (2007) investigate how psychopathy influences cooperation in the prisoner's dilemma. Their subjects were not psychopaths per se: they were normal individuals whose attitudes were rated, with a questionnaire, on a "psychopathy scale". While in a scanner, they were asked to play a prisoner's dilemma with nonscanned subjects. They were in fact playing against a computer following the "forgiving tit-for-tat" strategy, analogous to tit-for-tat except that it reciprocates a previous defection only 67% of the time.
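
A minimal sketch of that "forgiving tit-for-tat" rule as described (cooperate by default, retaliate after a defection only 67% of the time):

    import random

    # Forgiving tit-for-tat: start by cooperating, reciprocate cooperation,
    # but retaliate after a defection only with probability 0.67; otherwise forgive.

    RETALIATION_PROB = 0.67

    def forgiving_tit_for_tat(opponent_last_move):
        """opponent_last_move: 'C', 'D', or None on the first round."""
        if opponent_last_move in (None, "C"):
            return "C"
        return "D" if random.random() < RETALIATION_PROB else "C"

    random.seed(0)
    responses = [forgiving_tit_for_tat("D") for _ in range(1000)]
    print(f"share of defections after an opponent defection: {responses.count('D') / 1000:.2f}")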

Behavioral results indicate that psychopathy is correlated with defection, even after mutual cooperation. One explanation could be that psychopaths have an impaired amygdala, and hence are less sensitive to aversive conditioning. This is coherent with fMRI data suggesting that the Cooperate-Defect outcome (I cooperate, you defect) elicits a less aversive reaction in individuals who score higher in psychopathy. Moreover, choosing to defect elicited more activity in the ACC and DLPFC (areas classically involved in emotional modulation and cognitive control), suggesting that defecting is effortful. Psychopathy, however, is correlated with less activity in these areas: it thus seems easier for psychopathic personalities to be non-cooperative. "Regular" people need more cognitive effort to override their cooperative biases.

The fMRI data also suggest that low-psychopathy and high-psychopathy subjects differ in how their brains implement cooperative behavior: the former rely on emotional biases (strong activation in the OFC, weak activation in the DLPFC), the latter on cognitive control (weak activation in the OFC, strong activation in the DLPFC). High-psychopathy subjects would be, according to Rilling et al., weakly emotionally biased toward defection: they exhibit stronger OFC activation and weaker DLPFC activation for defection. Thus, it seems that normal subjects tend to experience the immediate gratification of cooperation, independently of the monetary payoff. Psychopaths do not feel the "warm glow" of cooperation, and thus do not cooperate.

Philosophically, there is an interesting lesson here: both low-psychopathy and high-psychopathy subjects follow their own selfish biases: the low-psychopathy ones enjoy cooperating, and the high-psychopathy ones prefer defecting. This is consistent with a thesis I will one day describe more thoroughly, the "Invisible Hand of Morality": like markets, morality emerges out of the interaction of selfish agents. Luckily, thanks to evolution, culture, education, norms, etc., normal people's selfishness tends to be geared toward cooperation. Psychopaths are not more selfish than normal people: their selfishness simply does not value cooperation, or other social virtues. Thus morality is not (only) "in the head": it is partly distributed in sensorimotor/somatovisceral mechanisms, cultural habits, external cues, institutions, etc. The other lesson is that morality is multiply realizable: it can be realized through emotional biases or through cognitive control.

  • Rilling, J. K., Glenn, A. L., Jairam, M. R., Pagnoni, G., Goldsmith, D. R., Elfenbein, H. A., et al. (2007). Neural correlates of social cooperation and non-cooperation as a function of psychopathy. Biological Psychiatry, 61(11), 1260-1271.
  • Rilling, J., Gutman, D., Zeh, T., Pagnoni, G., Berns, G., & Kilts, C. (2002). A neural basis for social cooperation. Neuron, 35(2), 395-405.



4/11/07

Equality preference and inequality aversions

In a letter to Nature, a group of political scientists and anthropologists report an experiment designed to test equality preference and inequality aversion. The design was simple:

Subjects are divided into groups having four anonymous members each. Each player receives a sum of money randomly generated by a computer. Subjects are shown the payoffs of other group members for that round and are then provided an opportunity to give 'negative' or 'positive' tokens to other players. Each negative token reduces the purchaser's payoff by one monetary unit (MU) and decreases the payoff of a targeted individual by three MUs; positive tokens decrease the purchaser's payoff by one monetary unit (MU) and increase the targeted individual's payoff by three MUs. Groups are randomized after each round to prevent reputation from influencing decisions; interactions between players are strictly anonymous and subjects know this. Also, by allowing participants more than one behavioural alternative, the experiment eliminates possible experimenter demand effects—if subjects were only permitted to punish, they might engage in this behaviour because they believe it is what the experimenters want.
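
A minimal sketch of the token mechanics quoted above (each token costs the purchaser 1 MU; a negative token removes 3 MUs from the target, a positive token adds 3 MUs); the group payoffs and the choices are invented:

    # Token mechanics from the quoted design: 1 MU cost to the purchaser,
    # -3 MUs (negative token) or +3 MUs (positive token) to the target.
    # Player labels and amounts below are placeholders.

    def apply_tokens(payoffs, purchaser, target, n_tokens, positive=True):
        """Return updated payoffs after `purchaser` sends tokens to `target`."""
        payoffs = dict(payoffs)
        payoffs[purchaser] -= n_tokens * 1            # 1 MU per token, paid by the sender
        payoffs[target]    += n_tokens * 3 * (1 if positive else -1)
        return payoffs

    group = {"A": 20, "B": 5, "C": 12, "D": 9}
    group = apply_tokens(group, "B", "A", n_tokens=2, positive=False)  # punish the richest
    group = apply_tokens(group, "C", "B", n_tokens=1, positive=True)   # help the poorest
    print(group)  # {'A': 14, 'B': 6, 'C': 11, 'D': 9}

The "Robin Hood" pattern reported below is exactly this kind of choice, aggregated over subjects: negative tokens flow toward those with the highest random endowments, positive tokens toward those with the lowest.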
The results support what is often referred to as the "Robin Hood effect": richer individuals were heavily penalized, while poorer ones received more gifts. This would support the Strong Reciprocity (SR) hypothesis put forth by Fehr, Camerer, Gintis, and many other scholars in behavioral economics. SR implies that individuals will cooperate with cooperators (reciprocal altruism), will not cooperate with cheaters, and are even ready to punish those who cheat others (altruistic punishment):

“people tend to behave prosocially and punish antisocial behavior at cost to themselves, even when the probability of future interactions is low or zero. We call this strong reciprocity." (Gintis, H. (2000). Strong reciprocity and human sociality. Journal of Theoretical Biology, 206(2), p. 177)

And of course, SR implies that individuals will be inequity-averse. SR proponents go further and state that we have an innate propensity for altruistic punishment. That’s all well and good, but why so much moral optimism? Couldn't it be that we are selfish agents and that our mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones? We feel good when we punish bad guys, and we feel bad when someone makes unfair offers to us in the ultimatum game (neuroeconomics studies have shown that). I agree with SRers that we are not cold logical egoists, but I would favor another approach, which I call "Hot Logic" (from the abstract of a forthcoming talk):

human agents are selfish agents adapted to trade, exchange and partner selection in biological markets (Noë et al., 2001). Cognitive mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones. This initial hedonism is gradually modulated by social norms, by which agents learn how to maximise their utility given the norms. The ‘hot logic’ approach provides a simpler explanation of cooperation and fairness: subjects make ‘fair’ offers in the ultimatum game because they know their offer would be rejected otherwise. Responders' affective reaction to ‘unfair offers’ is in fact a reaction to the loss of an expected monetary gain: they anticipated that the proposer would comply with social norms. This claim is supported by other imaging studies showing that loss of money can be aversive, and that actual and counterfactual utility recruit the same neural resources (Delgado et al., 2006; Montague et al., 2006). This approach explains why subjects make lower offers in the dictator game (an ultimatum game in which the proposer makes an offer and the responder's role is entirely passive) than in the ultimatum, why, when a computer displays eyespots, almost twice as many participants transfer money in the dictator (Haley & Fessler, 2005), and why attractive people are offered more in the ultimatum (Solnick & Schweitzer, 1999). In every case, agents seek to maximize a complex hedonic utility function, where the rewards and the losses can be monetary, emotional or social (reputation, acceptance, etc.). SR is thus seen as a set of cooperative habits that are not repaid (Burnham & Johnson, 2005).