Natural Rationality | decision-making in the economy of nature
Showing posts with label ultimatum. Show all posts

6/8/08

Ultimatum, Serotonin and Fairness

A new study links serotonin levels to Ultimatum decisions: less serotonin is correlated with higher rejection of unfair offers, suggesting that serotonin "plays a critical role in regulating emotion during social decision-making".

[From ScienceNOW]

Deal or No Deal?

By Constance Holden
ScienceNOW Daily News
05 June 2008

What if your friend had a large apple pie but gave you only a sliver? Would you throw the piece on the floor in protest? Maybe, depending on your brain chemistry. New research suggests that such emotional decisions can be influenced by a shortage of the neurotransmitter serotonin.

Researchers have linked low levels of serotonin in the brain to various mental states, including depression and impulsive, irrational behavior. A team headed by neuroscience Ph.D. student Molly Crockett of the University of Cambridge in the U.K. wondered whether the neurotransmitter would affect how people play the ultimatum game, an experiment used by economists that shows how people's economic decisions are sometimes irrational.

In the game, a "proposer" is given a sum of money, part of which he or she offers to share with a "responder." If a responder turns down the offer as too low, then neither player gets any money. What the ultimatum game reveals is that even though a responder would always gain by accepting the offered share, he will sometimes cut off his own nose to spite his face, as it were, punishing a proposer by rejecting an unfair offer.

In the current study, the researchers recruited 20 volunteers and asked them to fast the evening before the game. The next morning, some of the volunteers were given a drink chock-full of every amino acid the body needs to make protein, save tryptophan, an amino acid from which serotonin is manufactured. The result, says Crockett, is that the amino acids rush to the brain, "crowding out" any residual tryptophan and creating a temporary shortage of tryptophan and therefore serotonin. Control subjects were given drinks that contained tryptophan.

Both groups then played the ultimatum game as responders. The lack of tryptophan did not affect the subjects' general moods or their perceptions of the fairness of an offer, the team reports online today in Science. It did, however, appear to make people more likely to reject unfair offers. For example, when they knew that they were being offered only 20% of the pot, 82% of the acute tryptophan depletion group rejected the offer over multiple trials, whereas only 67% of the placebo group did.
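To get a feel for whether a gap like 82% vs. 67% could arise by chance, here is a minimal two-proportion z-test sketch in Python. The article reports only the rejection rates, so the number of trials per group (100) is an assumption made purely for illustration:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 82% rejection (tryptophan-depleted) vs. 67% (placebo); n=100 trials
# per group is an assumption, since the article does not report counts.
z = two_proportion_z(0.82, 100, 0.67, 100)  # z is about 2.43, beyond 1.96
```

With these assumed counts the difference would be significant at the conventional 5% level; with fewer trials, it might not be.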

The research bolsters the view that rejection of an unfair offer is "an emotionally driven impulse," says Crockett. To heed more rational monetary considerations in the face of an unfair offer, she says, requires that you "swallow your pride"--or the sliver of pie--which is a form of emotional control.

The new work is "a significant advance" in understanding the neural mechanisms of how emotions impact decision-making, says neuroscientist Michael Koenigs of the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland. Psychologist Ernst Fehr of the University of Zürich, Switzerland, cautions, however, that the paper doesn't really address which behavior is rational or irrational. Rejecting low offers, he says, could be the result of a rational calculation about the value of fairness rather than an angry impulse.



1/22/08

How to Play the Ultimatum Game? An Engineering Approach to Metanormativity

A paper I wrote with Paul Thagard has been accepted for publication in Philosophical Psychology:

Abstract. The ultimatum game is a simple bargaining situation where the behavior of people frequently contradicts classical game theory. Thus, the commonly observed behavior should be considered irrational. We argue that this putative irrationality stems from a wrong conception of metanormativity (the study of norms about the establishment of norms). After discussing different metanormative conceptions, we defend a Quinean, naturalistic approach to the evaluation of norms. After reviewing empirical literature on the ultimatum game, we argue that the common behavior in the ultimatum game is rational and justified. We therefore suggest that the norms of economic rationality should be amended.



11/29/07

I'm so sad I would accept anything

No, I'm good, thanks. I just found a new study on the Ultimatum Game by Harlé & Sanfey showing that people in a sad (but not in a happy) mood reject more 'unfair' offers:


Our findings are consistent with our initial hypothesis, namely that sadness may focus the responder's attention on the negative emotional consequences of unfair offers rather than the positive impact of accepting such offers (i.e., monetary reward), thereby prompting lower acceptance rates of unfair offers. In addition, although information processing was not explicitly tested, our findings are consistent with motivational theories on the processing consequences of affect, whereby sadness is likely to promote a more vigilant processing style, reflecting a motivation to enhance the processing of information related to potentially threatening and harmful situations (Forgas, 2003). Such enhanced processing would again make individuals in a sad mood more likely to focus on the threatening aspect of being treated unfairly (in contrast with individuals in neutral or positive moods), thus potentially leading to more rejections of these unfair offers.

Harlé, K. M., & Sanfey, A. G. (2007). Incidental sadness biases social economic decisions in the Ultimatum Game. Emotion, 7(4), 876-81.



11/12/07

Economic neuroendocrinology: 2 new studies

Two interesting papers in press in NeuroEndocrinology Letters:

1. Smokers are fairer in the Ultimatum Game when they have to split cigarettes than when they split money:

  • Takahashi, T. (2007). Economic decision-making in the ultimatum game by smokers. Neuro Endocrinol Lett, 28(5). [pubmed]
2. Social evaluation significantly increases generosity in the dictator game:

  • Takahashi, T., Ikeda, K., & Hasegawa, T. (2007). Social evaluation-induced amylase elevation and economic decision-making in the dictator game in humans. Neuro Endocrinol Lett, 28(5). [pubmed]



11/7/07

Affect and metacognition in the Ultimatum

Blogging on Peer-Reviewed Research

Andrade and Ho had their subjects play the following ultimatum game: Proposers keep 50% or 75% of the "pie" to be divided; responders can change the size of the pie (from 0 to $1). When proposers were told that responders had just watched a funny sitcom, they tended to make 'unfair' offers (keep 75%). According to the authors, they were expecting responders to be in a good mood and thus more tolerant of unfairness. There was no such effect when they were told that the responder had watched an anger-inducing movie clip. The effect also disappears when proposers know that responders know that proposers had this information. In other words, proposers in this setting are emotional strategists who rely on affective metacognition to devise the best move.





10/10/07

Fairness and Schizophrenia in the Ultimatum

For the first time, a study looks at the behavior of schizophrenic patients in the Ultimatum Game. Other studies of choice behavior in schizophrenia revealed that patients have difficulty with decisions under ambiguity and uncertainty (Lee et al., 2007), show a slight preference for immediate over long-term rewards (Heerey et al., 2007), exhibit "strategic stiffness" (sticking to a strategy in sequential decision-making without integrating the outcomes of past choices; Kim et al., 2007), and perform worse in the Iowa Gambling Task (Sevy et al., 2007).

A research team from Israel ran an Ultimatum experiment with schizophrenic subjects (plus two control groups, one depressive, one non-clinical). Subjects had to split 20 New Israeli Shekels (NIS) (about 5 US$). Although schizophrenic patients' Responder behavior was not different from that of the control groups, their Proposer behavior was: they tended to be less strategic.

With respect to offer level, results fall into three categories: fair (10 NIS), unfair (less than 10 NIS), and hyper-fair (more than 10 NIS). Schizophrenic patients tended to make fewer 'unfair' offers and more 'hyper-fair' offers. Men were more generous than women.

According to the authors,

for schizophrenic Proposers, the possibility of dividing the money evenly was as reasonable as for healthy Proposers, whereas the option of being hyper-fair appears to be as reasonable as being unfair, in contrast to the pattern for healthy Proposers.
Agay et al. also studied the distribution of Proposer types according to their pattern of sequential decisions (how their second offer compared to their first). They identified three types:
  1. 'Strong-strategic' Proposers are those who adjusted their 2nd offer according to the response to their 1st offer, that is, raised their 2nd offer after their 1st one was rejected, or lowered their 2nd offer after their 1st offer was accepted.
  2. 'Weak-strategic' Proposers are those who perseverated, that is, their 2nd offer was the same as their 1st offer.
  3. Finally, 'non-strategic' Proposers are those who unreasonably reduced their offer after a rejection, or raised their offer after an acceptance.
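The three types above can be expressed as a small classifier over a Proposer's first two offers. This is a sketch of Agay et al.'s definitions, not code from the paper:

```python
def classify_proposer(first_offer, second_offer, first_accepted):
    """Classify a Proposer by how the 2nd offer reacts to the 1st outcome."""
    if second_offer == first_offer:
        return "weak-strategic"    # perseveration: same offer again
    raised = second_offer > first_offer
    # Strategic adjustment: raise after a rejection, lower after an acceptance.
    if raised != first_accepted:
        return "strong-strategic"
    # Lowered after a rejection, or raised after an acceptance.
    return "non-strategic"

classify_proposer(5, 8, first_accepted=False)  # -> 'strong-strategic'
```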
20% of the schizophrenic group were non-strategic, while none of the healthy subjects were.


Fig. (from Agay et al.): the highest proportion of non-strategic Proposers is in the schizophrenic group.
The authors do not offer much explanation for these results:

In the present framework, schizophrenic patients seemed to deal with the cognition-emotion conflict described in the fMRI study of Sanfey et al. (2003) [NOTE: the authors of the first neuroeconomics Ultimatum study] in a manner similar to that of healthy controls. However, it is important to note that the low proportion of rejections throughout the whole experiment makes this conclusion questionable.
Another study, however, shows that "siblings of patients with schizophrenia rejected unfair offers more often compared to control participants" (van 't Wout et al., 2006, chap. 12), suggesting that Responder behavior might be, after all, different in people with a genetic liability to schizophrenia. Yet another unresolved issue!


Reference
  • Agay, N., Kron, S., Carmel, Z., Mendlovic, S., & Levkovitz, Y. Ultimatum bargaining behavior of people affected by schizophrenia. Psychiatry Research, In Press, Corrected Proof.
  • Hamann, J., Cohen, R., Leucht, S., Busch, R., & Kissling, W. (2007). Shared decision making and long-term outcome in schizophrenia treatment. The Journal of clinical psychiatry, 68(7), 992-7.
  • Heerey, E. A., Robinson, B. M., McMahon, R. P., & Gold, J. M. (2007). Delay discounting in schizophrenia. Cognitive neuropsychiatry, 12(3), 213-21.
  • Hyojin Kim, Daeyeol Lee, Shin, Y., & Jeanyung Chey. (2007). Impaired strategic decision making in schizophrenia. Brain Res.
  • Lee, Y., Kim, Y., Seo, E., Park, O., Jeong, S., Kim, S. H., et al. (2007). Dissociation of emotional decision-making from cognitive decision-making in chronic schizophrenia. Psychiatry research, 152(2-3), 113-20.
  • Mascha van ’t Wout, Ahmet Akdeniz, Rene S. Kahn, Andre Aleman. Vulnerability for schizophrenia and goal-directed behavior: the Ultimatum Game in relatives of patients with schizophrenia. (manuscript), from The nature of emotional abnormalities in schizophrenia: Evidence from patients and high-risk individuals / Mascha van 't Wout, 2006, Proefschrift Universiteit Utrecht.
  • McKay, R., Langdon, R., & Coltheart, M. (2007). Jumping to delusions? Paranoia, probabilistic reasoning, and need for closure. Cognitive neuropsychiatry, 12(4), 362-76.
  • Sevy, S., Burdick, K. E., Visweswaraiah, H., Abdelmessih, S., Lukin, M., Yechiam, E., et al. (2007). Iowa Gambling Task in schizophrenia: A review and new data in patients with schizophrenia and co-occurring cannabis use disorders. Schizophrenia Research, 92(1-3), 74-84.



10/5/07

Review Paper on the Ultimatum, Chimps and related stuff

Thanks to Gene Expression, I found a good review paper in The Economist on the Ultimatum, Chimps and related stuff:

Evolution: Patience, fairness and the human condition. The Economist. Retrieved October 5, 2007.



Ape-onomics: Chimps in the Ultimatum Game and Rationality in the Wild

I recently discussed the experimental study of the Ultimatum Game, and showed that it has been studied in economics, psychology, anthropology, psychophysics and genetics. Now primatologists/evolutionary anthropologists Keith Jensen, Josep Call and Michael Tomasello (the same team that showed that chimpanzees are vengeful but not spiteful; see 2007a) had chimpanzees play the Ultimatum, or more precisely a mini-ultimatum, where proposers can make only two offers, for instance a fair vs. an unfair one, or a fair vs. a hyper-fair one, etc. Chimps had to split grapes. The possibilities were (in x/y pairs, where x is the proposer's share and y the responder's):
  • 8/2 versus 5/5
  • 8/2 versus 2/8
  • 8/2 versus 8/2 (no choice)
  • 8/2 versus 10/0
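The four conditions above can be laid out as payoff pairs, together with the purely self-interested acceptance rule that, as described below, the chimp responders appear to follow. This is an illustrative sketch, not the authors' code:

```python
# The four mini-ultimatum conditions as (proposer, responder) grape splits.
conditions = {
    "8/2 vs 5/5":  [(8, 2), (5, 5)],
    "8/2 vs 2/8":  [(8, 2), (2, 8)],
    "8/2 vs 8/2":  [(8, 2), (8, 2)],   # no real choice for the proposer
    "8/2 vs 10/0": [(8, 2), (10, 0)],
}

def self_interested_accepts(responder_share):
    """A purely self-interested responder accepts any nonzero share."""
    return responder_share > 0
```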

The experimenters used the following device:



Fig. 1. (from Jensen et al, 2007b) Illustration of the testing environment. The proposer, who makes the first choice, sits to the responder's left. The apparatus, which has two sliding trays connected by a single rope, is outside of the cages. (A) By first sliding a Plexiglas panel (not shown) to access one rope end and by then pulling it, the proposer draws one of the baited trays halfway toward the two subjects. (B) The responder can then pull the attached rod, now within reach, to bring the proposed food tray to the cage mesh so that (C) both subjects can eat from their respective food dishes (clearly separated by a translucent divider)

Results indicate that the chimps behave like Homo Economicus:
responders did not reject unfair offers when the proposer had the option of making a fair offer; they accepted almost all nonzero offers; and they reliably rejected only offers of zero (Jensen et al.)


As the authors conclude, "one of humans' closest living relatives behaves according to traditional economic models of self-interest, unlike humans, and (...) does not share the human sensitivity to fairness."

So Homo Economicus would be a better picture of nature, red in tooth and claw? Yes and no. In another recent paper, Brosnan et al. studied the endowment effect in chimpanzees. The endowment effect is a bias that makes us place a higher value on objects we own than on objects we do not. Well, chimps do that too. While they are usually indifferent between peanut butter and juice, once they "were given or 'endowed' with the peanut butter, almost 80 percent of them chose to keep the peanut butter, rather than exchange it for a juice bar" (from Vanderbilt news). They do not, however, show loss aversion for non-food goods (a rubber-bone dog chew toy and a knotted-rope dog toy). Another related study (Chen et al., 2006) also indicates that capuchin monkeys exhibit loss aversion.

So there seems to be an incoherence here: chimps are both economically and non-economically rational. But this is only, as the positivists used to say, a pseudo-problem: they tend to comply with standard or 'selfish' economics in social contexts, but not in individual contexts. The difference between us and them is that we are, by nature, political animals. Our social rationality requires reciprocity, negotiation, exchange, communication, fairness, cooperation, morality, etc., not plain selfishness. Chimps do cooperate and exhibit a slight taste for fairness (see section 1 of this post), but not in human proportions.





10/4/07

Social Neuroeconomics: A Review by Fehr and Camerer

Ernst Fehr and Colin Camerer, two prominent experimental/behavioral/neuro-economists, published a new paper in Trends in Cognitive Sciences on social neuroeconomics. Discussing many studies (this paper is a state-of-the-art review), they conclude that

social reward activates circuitry that overlaps, to a surprising degree, with circuitry that anticipates and represents other types of rewards. These studies reinforce the idea that social preferences for donating money, rejecting unfair offers, trusting others and punishing those who violate norms, are genuine expressions of preference

The authors illustrate this overlap with the following figure: social and non-social rewards elicit similar neural activation (see references for all cited studies at the end of this post):



Figure 1. (from Fehr and Camerer, forthcoming). Parallelism of rewards for oneself and for others: Brain areas commonly activated in (a) nine studies of social reward (..), and (b) a sample of six studies of learning and anticipated own monetary reward (..).

So basically, we have enough evidence to justify a model of rational agents as having social preferences. As I argue in a forthcoming paper (let me know if you want a copy), these findings have normative import, especially for game-theoretic situations: if a rational agent anticipates other agents' strategies, she had better anticipate that they have social preferences. For instance, one might argue that in the Ultimatum Game it is rational to make a fair offer.



Reference:
  • Fehr, E. and Camerer, C.F., Social neuroeconomics: the neural circuitry of social preferences, Trends Cogn. Sci. (2007), doi:10.1016/j.tics.2007.09.002


Studies of social reward cited in Fig. 1:

  • [26] J. Rilling et al., A neural basis for social cooperation, Neuron 35 (2002), pp. 395–405.
  • [27] J.K. Rilling et al., Opposing BOLD responses to reciprocated and unreciprocated altruism in putative reward pathways, Neuroreport 15 (2004), pp. 2539–2543.
  • [28] D.J. de Quervain et al., The neural basis of altruistic punishment, Science 305 (2004), pp. 1254–1258.
  • [29] T. Singer et al., Empathic neural responses are modulated by the perceived fairness of others, Nature 439 (2006), pp. 466–469
  • [30] J. Moll et al., Human fronto-mesolimbic networks guide decisions about charitable donation, Proc. Natl. Acad. Sci. U. S. A. 103 (2006), pp. 15623–15628.
  • [31] W.T. Harbaugh et al., Neural responses to taxation and voluntary giving reveal motives for charitable donations, Science 316 (2007), pp. 1622–1625.
  • [32] Tabibnia, G. et al. The sunny side of fairness – preference for fairness activates reward circuitry. Psychol. Sci. (in press).
  • [55] T. Singer et al., Brain responses to the acquired moral status of faces, Neuron 41 (2004), pp. 653–662.
  • [56] B. King-Casas et al., Getting to know you: reputation and trust in a two-person economic exchange, Science 308 (2005), pp. 78–83.

Studies of learning and anticipated own monetary reward cited in Fig. 1:

  • [33] S.M. Tom et al., The neural basis of loss aversion in decision-making under risk, Science 315 (2007), pp. 515–518.
  • [61] M. Bhatt and C.F. Camerer, Self-referential thinking and equilibrium as states of mind in games: fMRI evidence, Games Econ. Behav. 52 (2005), pp. 424–459.
  • [73] P.K. Preuschoff et al., Neural differentiation of expected reward and risk in human subcortical structures, Neuron 51 (2006), pp. 381–390.
  • [74] J. O’Doherty et al., Dissociable roles of ventral and dorsal striatum in instrumental conditioning, Science 304 (2004), pp. 452–454.
  • [75] E.M. Tricomi et al., Modulation of caudate activity by action contingency, Neuron 41 (2004), pp. 281–292.



10/2/07

The Ultimatum Game: Economics, Psychology, Anthropology, Psychophysics, Neuroscience and now, Genetics

The Ultimatum Game (Güth et al., 1982) is a one-shot bargaining game. A first player (the Proposer) offers a fraction of a sum of money; the second player (the Responder) may either accept or reject the proposal. If the Responder accepts, she keeps the offered amount while the Proposer keeps the difference. If she rejects it, both players get nothing. According to game theory, the subgame perfect equilibrium (a variety of Nash equilibrium for dynamic games) is the following set of strategies:

  • The Proposer should offer the smallest possible amount (in order to keep as much money as possible)
  • The Responder should accept any amount (because a small amount should be better than nothing)
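This equilibrium prediction can be recovered by brute-force backward induction on a discretized version of the game. The sketch below assumes a self-interested responder who rejects only a zero offer (the knife-edge case):

```python
def subgame_perfect(total, smallest_unit=1):
    """Backward induction on a discrete ultimatum game with purely
    self-interested players. Returns (offer, proposer_payoff)."""
    def responder_accepts(offer):
        # Responder's best reply: a positive offer beats nothing.
        # (Rejection of exactly zero is assumed for simplicity.)
        return offer > 0

    best = max(range(0, total + 1, smallest_unit),
               key=lambda offer: (total - offer) if responder_accepts(offer) else 0)
    return best, total - best

subgame_perfect(10)  # -> (1, 9): offer the smallest unit, keep the rest
```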

The UG has been experimentally tested in a variety of contexts, where different parameters of the game were modified: age, sex, the amount of money, the degree of anonymity, the length of the game, etc. (Camerer & Thaler, 1995; Henrich et al., 2004; Oosterbeek et al., 2004; Samuelson, 2005). The results show a robust tendency: the game-theoretic strategy is almost never followed. People tend to anticipate and make "fair" offers. Proposers offer about 50% of the "pie", and Responders tend to accept these offers while rejecting most "unfair" offers (less than 20% of the pie). Some will even reject "too fair" offers (Bahry & Wilson, 2006).

Henrich et al. (2005) studied Ultimatum behavior in 15 different small-scale societies. They found cultural variation, but these variations exhibit a constant pattern of reciprocity: differences are greater between groups than between individuals in the same group. Subjects are closer to equilibrium strategies in four situations: when playing against a computer (Blount, 1995; Rilling et al., 2004; Sanfey et al., 2003; van 't Wout et al., 2006), when players are groups (Robert & Carnevale, 1997), when players are autists (Hill & Sally, 2002), and when players are trained in decision and game theory, like economists and economics students (Carter & Irons, 1991; Frank et al., 1993; Kahneman et al., 1986).

Neuroeconomics shows that Ultimatum decision-making relies mainly on three areas: the anterior insula (AI), often associated with negative emotional states like disgust or anger; the dorsolateral prefrontal cortex (DLPFC), associated with cognitive control, attention and goal maintenance; and the anterior cingulate cortex (ACC), associated with cognitive conflict, motivation, error detection and emotional modulation (Sanfey et al., 2003). All three areas show stronger activity when the Responder faces an unfair offer. When offers are unfair, the brain faces a dilemma: punish the unfair player, or get a little money (which is better than nothing)? The conflict involves first the AI. This area is more active when unfair offers are proposed, and even more so when the Proposer is a human (compared to a computer). Its activity is also correlated with the degree of unfairness (Sanfey et al., 2003: 1756) and with the decision to reject unfair offers. Skin conductance experiments show that unfair offers and their rejection are associated with greater skin conductance (van 't Wout et al., 2006). DLPFC activity remains relatively constant across unfair offers. When AI activation is greater than DLPFC activation, unfair offers tend to be rejected; when DLPFC activation is greater, they tend to be accepted.
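The AI/DLPFC competition described above can be caricatured as a simple threshold rule. The activation values are arbitrary illustration units, not measurements from Sanfey et al.:

```python
def responder_decision(ai_activation, dlpfc_activation):
    """Caricature of Sanfey et al. (2003): rejection when anterior insula
    (emotion) activity outweighs DLPFC (cognitive control) activity."""
    return "reject" if ai_activation > dlpfc_activation else "accept"

responder_decision(2.0, 1.0)  # -> 'reject'
```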

A new study by Wallace, Cesarini, Lichtenstein & Johannesson investigates the influence of genes on ultimatum behavior. They compared monozygotic ("identical", same set of genes) and dizygotic ("non-identical") twins. Their statistical analysis identified how much of the variation in behavior is explained by each of three factors: genetic effects, common environmental effects, and nonshared environmental effects. They found that genetics accounts for 42% of the variation in responder behavior: identical twins are more likely to behave similarly in their reaction to Ultimatum propositions. Thus, sensitivity to fairness might have a genetic component, an idea that proponents of the Strong Reciprocity Hypothesis put forth but did not back with genetic studies. Yet this does not mean that fairness preferences followed the evolutionary path advocated by SRH proponents:

Although our results are consistent with an evolutionary origin for fairness preferences, it is important to remember that heritability measures the genetically determined variation around some average behavior. Hence, it does not provide us with any direct evidence with regard to the evolutionary dynamics that brought it about (Wallace et al., p. 15633)
Of course the big question remains: how do genes influence the development of the neural structures that control fairness preferences and reciprocal behavior? A question for evo-devo-neuro-psycho-economics!
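Wallace et al. fit a full ACE twin model, but the underlying logic can be sketched with Falconer's classic shortcut, h² = 2(r_MZ − r_DZ). The twin correlations below are hypothetical, chosen only so the estimate lands near the reported 42%:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's shortcut: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Hypothetical monozygotic/dizygotic twin correlations (not the paper's
# actual values), chosen so the estimate is close to 42%.
h2 = falconer_heritability(0.40, 0.19)  # approximately 0.42
```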

Links:
  • Wallace, B., Cesarini, D., Lichtenstein, P., & Johannesson, M. (2007). Heritability of ultimatum game responder behavior. Proceedings of the National Academy of Sciences, 0706642104. [Open Access Article]

  • Genes influence people's choices in economics game (MIT news)
    Excerpt:
    "Compared to common environmental influences such as upbringing, genetic influences appear to be a much more important source of variation in how people play the game," Cesarini said.
    "This raises the intriguing possibility that many of our preferences and personal economic choices are subject to substantial genetic influence," said lead author Bjorn Wallace of the Stockholm School of Economics, who conceived of the study.


    References:

    • Bahry, D. L., & Wilson, R. K. (2006). Confusion or Fairness in the Field? Rejections in the Ultimatum Game under the Strategy Method. Journal of Economic Behavior & Organization, 60(1), 37-54.
    • Blount, S. (1995). When Social Outcomes Aren't Fair: The Effect of Causal Attributions on Preferences. Organizational Behavior and Human Decision Processes, 63(2), 131-144.
    • Camerer, C., & Thaler, R. H. (1995). Anomalies: Ultimatums, Dictators and Manners. The Journal of Economic Perspectives, 9(2), 209-219.
    • Carter, J. R., & Irons, M. D. (1991). Are Economists Different, and If So, Why? The Journal of Economic Perspectives, 5(2), 171-177.
    • Frank, R. H., Gilovich, T., & Regan, D. T. (1993). Does Studying Economics Inhibit Cooperation? The Journal of Economic Perspectives, 7(2), 159-171.
    • Güth, W., Schmittberger, R., & Schwarze, B. (1982). An Experimental Analysis of Ultimatum Bargaining. Journal of Economic Behavior and Organization, 3(4), 367-388.
    • Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., & Gintis, H. (2004). Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies. Oxford University Press.
    • Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., McElreath, R., Alvard, M., Barr, A., Ensminger, J., Henrich, N. S., Hill, K., Gil-White, F., Gurven, M., Marlowe, F. W., Patton, J. Q., & Tracer, D. (2005). "Economic Man" In Cross-Cultural Perspective: Behavioral Experiments in 15 Small-Scale Societies. Behavioral and Brain Sciences, 28(6), 795-815; discussion 815-755.
    • Hill, E., & Sally, D. (2002). Dilemmas and Bargains: Theory of Mind, Cooperation and Fairness. working paper, University College, London.
    • Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1986). Fairness and the Assumptions of Economics. The Journal of Business, 59(4), S285-S300.
    • Oosterbeek, H., Sloof, R., & van de Kuilen, G. (2004). Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics, 7, 171-188.
    • Rilling, J. K., Sanfey, A. G., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2004). The Neural Correlates of Theory of Mind within Interpersonal Interactions. Neuroimage, 22(4), 1694-1703.
    • Robert, C., & Carnevale, P. J. (1997). Group Choice in Ultimatum Bargaining. Organizational Behavior and Human Decision Processes, 72(2), 256-279.
    • Samuelson, L. (2005). Economic Theory and Experimental Economics. Journal of Economic Literature, 43, 65-107.
    • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300(5626), 1755-1758.
    • van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective State and Decision-Making in the Ultimatum Game. Exp Brain Res, 169(4), 564-568.
    • Wallace, B., Cesarini, D., Lichtenstein, P., & Johannesson, M. (2007). Heritability of ultimatum game responder behavior. Proceedings of the National Academy of Sciences, 0706642104.



9/22/07

Strong reciprocity, altruism and egoism

Proponents of the Strong Reciprocity Hypothesis (i.e., Bowles, Gintis, Boyd, Fehr, Henrich, etc.; I will call them "The Collective") claim that human beings are strong reciprocators: they are willing to sacrifice resources in order to reward fair and punish unfair behavior even if there is no direct or future reward. Thus we are, according to the Collective, innately endowed with pro-social preferences and an aversion to inequity. Those who advocate strong reciprocity take it to be a 'genuine' altruistic force, not explained by other motives. Strong reciprocity is here contrasted with weaker forms of reciprocity, such as cooperating with someone because of genetic relatedness (kinship), because one follows a tit-for-tat pattern (direct reciprocity), wants to establish a good reputation (indirect reciprocity), or displays signs of power or wealth (costly signaling). Thus our species is made, ceteris paribus, of altruistic individuals who tend to cooperate with cooperators and punish defectors, even at a cost. Behavioral economics showed how people are willing to cooperate in games such as the prisoner's dilemma, the ultimatum game, and the trust game: they do not cheat in the first, offer fair splits in the second, and transfer money in the third.

Could it be possible, however, that this so-called altruism is instrumental? I don't think it always is; some cases require closer scrutiny. For instance, in the Ultimatum Game there is a perfectly rational and egoistic reason to make a fair offer, such as a 50-50 split: it is the best solution (from one's point of view) to the trade-off between making a profit and proposing a split that the other player will accept. If you propose more, you lose money; if you propose less, you risk a rejection. In non-market-integrated cultures where a 20-80 split is not seen as unfair, proposers routinely offer such splits, because they know they will be accepted.
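The trade-off described above can be made concrete: given a probability that each offer is accepted, pick the offer maximizing expected payoff. The acceptance curve below is a made-up illustration, not data from any study:

```python
def optimal_offer(total, accept_prob):
    """Offer maximizing the proposer's expected payoff, given a
    probability that each offer is accepted."""
    return max(range(total + 1),
               key=lambda offer: accept_prob(offer) * (total - offer))

def accept_prob(offer, total=10):
    """Hypothetical acceptance curve: offers below half the pie are
    increasingly likely to be rejected; fair offers always accepted."""
    return min(1.0, 2 * offer / total)

optimal_offer(10, accept_prob)  # -> 5: the fair split maximizes expected gain
```

Under this (assumed) acceptance curve, the selfish expected-payoff maximizer and the 'fair' proposer make the same offer.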

It can also be instrumental in a more basic sense, for instance by participating in the propagation of our genes. Madsen et al. (2007) showed that individuals behave more altruistically toward their own kin when there is a significant genuine cost (such as pain), an attitude also mirrored in questionnaire studies (Stewart-Williams, 2007): as the cost of helping increases, subjects are more ready to help siblings than friends. Other studies showed that facial resemblance enhances trust (DeBruine, 2002). In each case, we see a mechanism whose function is to negotiate our investments in relationships in order to promote the copies of our genes housed in people who are, look like, or could help us expand, our kin. For instance, simply viewing lingerie or pictures of sexy women makes men behave more fairly in the ultimatum game (Van den Bergh & Dewitte, 2006).

Many of these so-called altruistic behaviors can be explained simply by the operation of hyperactive agency detectors and a bias toward fearing other people's judgment. When they are not being, or feeling, watched, people behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game where the responder has to accept the offer, subjects make lower offers than in the ultimatum game (Bolton et al., 1998). Offers are even lower in the dictator game when donation is fully anonymous (Hoffman et al., 1994). In the present framework, this would be because there is no advantage in being fair.

When subjects feel watched, or think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley & Fessler, 2005), and people contribute three times more to an honesty box for coffee when it displays a pair of eyes rather than a picture of flowers (Bateson et al., 2006). The mere mention of ghosts enhances honest behavior in a competitive task (Bering et al., 2005), and priming subjects with the God concept increases giving in the anonymous dictator game (Shariff & Norenzayan, in press).

These reflections also apply to altruistic punishment. First, it is enhanced by an audience: Kurzban et al. (2007) showed that with a dozen participants, punishment expenditure tripled. In the trust game, players apply learned social rules and trust-building routines, but they hate it when cheaters enjoy what they themselves refrain from enjoying; it thus feels good to restore the equilibrium. Again, apparent altruism is instrumental in personal satisfaction, at least on some occasions.

Hardy and Van Vugt, in their theory of competitive altruism, suggest that

individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable (Hardy & Van Vugt, 2006)

Maybe agents are attempting to maximize a complex hedonic utility function, where rewards and losses can be monetary, emotional or social. A possible alternative approach is what I call 'methodological hedonism': let's assume, at least for identifying cognitive mechanisms, that the brain, when functioning normally, tries to maximize hedonic feelings, even in moral behavior. We use feelings to anticipate feelings in order to steer our behavior toward a maximization of positive feelings and a minimization of negative ones. The 'hot logic' of emotions is more realistic than the cold logic of traditional game theory but still preserves the idea of utility maximization (although "value" would be more appropriate). In this framework, altruistic behavior is possible, but it need not rely on altruistic cognition. Cognitive mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones. This initial hedonism is gradually modulated by social norms, through which agents learn how to maximize their utility given the norms. Luckily, however, biological and cultural evolution favored patterns of self-interest that promote social order to a certain extent: institutions, social norms, routines and cultures tend to structure our behavior morally. Thus understanding morality may amount to understanding how individual egoism is modulated by social processes. There might be no need to posit an innate Strong Reciprocity. Or at least other avenues are worth exploring!
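The hedonic-utility idea above can be put in code. This is only a minimal sketch under my own assumptions: the component values, the equal weights, and the `hedonic_utility` and `best_action` helpers are hypothetical illustrations, not drawn from any of the cited models.

```python
# Sketch of 'methodological hedonism': an agent picks the action that
# maximizes a hedonic utility combining monetary, emotional, and social
# payoffs. All numbers and weights below are illustrative assumptions.

def hedonic_utility(monetary, emotional, social, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of hedonic components; weights are illustrative."""
    wm, we, ws = weights
    return wm * monetary + we * emotional + ws * social

def best_action(actions):
    """actions: dict mapping action name -> (monetary, emotional, social)."""
    return max(actions, key=lambda a: hedonic_utility(*actions[a]))

# Rejecting an unfair offer loses money but yields emotional/social payoff.
actions = {
    "accept_unfair_offer": (2.0, -3.0, -1.0),  # keep $2, but feel cheated
    "reject_unfair_offer": (0.0, 2.0, 1.0),    # no money, but satisfaction
}
print(best_action(actions))  # -> reject_unfair_offer
```

On these (made-up) numbers, rejection maximizes hedonic utility even though it minimizes monetary payoff, which is the pattern the post describes.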


Update:

I forgot to mention a thorough presentation and excellent criticism of Strong Reciprocity:

Important papers from the Collective are:
  • Bowles, S., & Gintis, H. (2004). The Evolution of Strong Reciprocity: Cooperation in Heterogeneous Populations. Theoretical Population Biology, 65(1), 17-28.
  • Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong Reciprocity, Human Cooperation, and the Enforcement of Social Norms. Human Nature, 13(1), 1-25.
  • Fehr, E., & Rockenbach, B. (2004). Human Altruism: Economic, Neural, and Evolutionary Perspectives. Curr Opin Neurobiol, 14(6), 784-790.
  • Gintis, H. (2000). Strong Reciprocity and Human Sociality. Journal of Theoretical Biology, 206(2), 169-179.


References


  • Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of Being Watched Enhance Cooperation in a Real-World Setting. Biology Letters, 2, 412-414.
  • Bering, J. M., McLeod, K., & Shackelford, T. K. (2005). Reasoning About Dead Agents Reveals Possible Adaptive Trends. Human Nature, 16(4), 360-381.
  • Bolton, G. E., Katok, E., & Zwick, R. (1998). Dictator Game Giving: Rules of Fairness Versus Acts of Kindness. International Journal of Game Theory, 27, 269-299.
  • DeBruine, L. M. (2002). Facial Resemblance Enhances Trust. Proc Biol Sci, 269(1498), 1307-1312.
  • Haley, K., & Fessler, D. (2005). Nobody’s Watching? Subtle Cues Affect Generosity in an Anonymous Economic Game. Evolution and Human Behavior, 26(3), 245-256.
  • Hardy, C. L., & Van Vugt, M. (2006). Nice Guys Finish First: The Competitive Altruism Hypothesis. Pers Soc Psychol Bull, 32(10), 1402-1413.
  • Hoffman, E., Mc Cabe, K., Shachat, K., & Smith, V. (1994). Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior, 7, 346–380.
  • Kurzban, R., DeScioli, P., & O'Brien, E. (2007). Audience Effects on Moralistic Punishment. Evolution and Human Behavior, 28(2), 75-84.
  • Madsen, E. A., Tunney, R. J., Fieldman, G., Plotkin, H. C., Dunbar, R. I. M., Richardson, J.-M., & McFarland, D. (2007). Kinship and Altruism: A Cross-Cultural Experimental Study. British Journal of Psychology, 98, 339-359.
  • Shariff, A. F., & Norenzayan, A. (in press). God Is Watching You: Supernatural Agent Concepts Increase Prosocial Behavior in an Anonymous Economic Game. Psychological Science.
  • Stewart-Williams, S. (2007). Altruism among Kin Vs. Nonkin: Effects of Cost of Help and Reciprocal Exchange. Evolution and Human Behavior, 28(3), 193-198.
  • Van den Bergh, B., & Dewitte, S. (2006). Digit Ratio (2d:4d) Moderates the Impact of Sexual Cues on Men's Decisions in Ultimatum Games. Proc Biol Sci, 273(1597), 2091-2095.




9/7/07

A neuroeconomic picture of valuation



Values are everywhere: ethics, politics, economics and law, for instance, all deal with values. They all involve socially situated decision-makers compelled to make evaluative judgments and act upon them. These spheres of human activity constitute different aspects of the same problem, namely assessing how 'good' something is. This means that value 'runs' on valuation mechanisms. I suggest here a framework for understanding valuation.

I define valuation as the process by which a system maps an object, property or event X to a value space, and a valuation mechanism as the device implementing the mapping between X and the value space. I do not suggest that values need to be explicitly represented as a space: by value space, I mean an artifact that accounts for the similarity between values by plotting each of them as a point in a multidimensional coordinate system. Color spaces, for instance, are not conscious representations of colors, but spatial depictions of color similarity along several dimensions such as hue, saturation and brightness.

The simplest value space, and the most common across the phylogenetic spectrum, has two dimensions: valence (positive or negative) and magnitude. Valence distinguishes between things that are liked and things that are not. Thus if X has a negative valence, it does not imply that X will be avoided, but only that it is disliked. Magnitude encodes the degree of liking or disliking. Other dimensions might be added—temporality (whether X is located in the present, past or future), other- vs. self-regarding, excitatory vs. inhibitory, basic vs. complex, for instance—but the core of any value system is valence and magnitude, because these two parameters are required to establish rankings. To prefer Heaven to Hell, Democrats to Republicans, salad to meat, or sweet to bitter involves valence and magnitude.
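A toy version of such a valence-magnitude space can make the ranking claim concrete. The `Value` type, the items, and their numbers are illustrative assumptions of mine, not empirical estimates:

```python
# Illustrative two-dimensional value space: each item gets a valence
# (+1 liked / -1 disliked) and a magnitude; ranking items uses the
# signed product valence * magnitude.

from dataclasses import dataclass

@dataclass
class Value:
    valence: int      # +1 (positive) or -1 (negative)
    magnitude: float  # strength of the liking or disliking

def rank(values):
    """Order items from most liked to most disliked."""
    return sorted(values,
                  key=lambda k: values[k].valence * values[k].magnitude,
                  reverse=True)

values = {
    "sweet": Value(+1, 0.9),
    "salad": Value(+1, 0.4),
    "bitter": Value(-1, 0.7),
}
print(rank(values))  # -> ['sweet', 'salad', 'bitter']
```

The point of the sketch is exactly the one made above: valence plus magnitude suffice to induce a preference ordering.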

Nature endowed many animals (mostly vertebrates) with fast and intuitive valuation mechanisms: emotions.[1] Although it is a truism in psychology and philosophy of mind that there is no crisp definition of what emotions are,[2] I will consider here that an emotion is any kind of neural process whose function is to attribute a valence and a magnitude to something else and whose operative mode is somatic markers. Somatic markers[3] are bodily states that 'mark' options as advantageous or disadvantageous, such as skin conductance, cardiac rhythm, etc. Through learning, bodily states become linked to neural representations of the stimuli that brought those states about. These neural structures may later reactivate the bodily states, or a simulation of them, and thereby indicate the valence and magnitude of stimuli. These states may or may not account for many legitimate uses of the word "emotions", but they constitute meaningful categories that could identify natural kinds.[4] In order to avoid confusion between folk-psychological and scientific categories, I will speak instead of affects and affective states, not emotions.

Far from being irrational passions, affective states are phylogenetically ancient valuation mechanisms. Since Darwin,[5] many biologists, philosophers and psychologists[6] have argued that they have adaptive functions such as focusing attention and facilitating communication. As Antonio Damasio and his colleagues discovered, subjects impaired in affective processing are unable to cope with everyday tasks, such as planning meetings.[7] They lose money, family and social status. However, they were completely functional in reasoning or problem-solving tasks. Moreover, they did not feel sad about their situation, even though they perfectly understood what "sad" means, and they seemed unable to learn from bad experiences. They were unable to use affect to aid decision-making, a finding that entails that in normal subjects, affect does aid decision-making. These findings suggest that decision-making needs affect, not as a set of convenient heuristics, but as a central evaluative mechanism. Without affects, it is possible to think efficiently, but not to decide efficiently (affective areas, however, are recruited in subjects who learn to recognize logical errors[8]).

Affects, and especially the so-called 'basic' or 'core' ones such as anger, disgust, liking and fear,[9] are prominent explanatory concepts in neuroeconomics. The study of valuation mechanisms reveals how the brain values certain objects (e.g., money), situations (e.g., investment, bargaining) or parameters (e.g., risk, ambiguity) of an economic nature. Three kinds of mechanisms are typically involved in neuroeconomic explanations:

  1. Core affect mechanisms, such as fear (amygdala), disgust (anterior insula) and pleasure (nucleus accumbens), encode the magnitude and valence of stimuli.
  2. Monitoring and integration mechanisms (ventromedial/mesial prefrontal, orbitofrontal cortex, anterior cingulate cortex) combine different values and memories of values.
  3. Modulation and control mechanisms (prefrontal areas, especially the dorsolateral prefrontal cortex) modulate or even override other affect mechanisms.

Of course, there is no simple mapping between psychological functions and neural structures, but cognitive and affective neuroscience assume a certain dominance and regularity of function. Disgust does not reduce to insular activation, but the anterior insula is significantly involved in the physiological, cognitive and behavioral expressions of disgust. There is some simplification here—due to the current state of the science—but enough to do justice to our best theories of brain functioning. I will now review two cases of individual and strategic decision-making and show how affective mechanisms are involved in valuation.[10]

In a study by Knutson et al.,[11] subjects had to choose whether or not they would purchase a product (visually presented), and then whether or not they would buy it at a certain price. While desirable products caused activation in the nucleus accumbens, activity was detected in the insula when the price was seen as exaggerated. If the price was perceived as acceptable, lower insular activation was detected, but mesial prefrontal structures were more solicited. The activation in these areas was a reliable predictor of whether or not subjects would buy the product: prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase. Thus a purchasing decision involves a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of purchasing (elicited in the insula). A chocolate box—one of the stimuli presented to the subjects—is located in the high-magnitude, positive-valence region of the value space, while the same chocolate box priced at $80 is located in the high-magnitude, negative-valence region.

In the ultimatum game, a 'proposer' makes an offer to a 'responder', who can either accept or reject it. The offer is a split of a sum of money. If the responder accepts, she keeps the offered amount while the proposer keeps the difference. If she rejects it, however, both players get nothing. Orthodox game theory recommends that proposers offer the smallest possible amount and that responders accept every offer, but studies consistently show that subjects make fair offers (about 40% of the amount) and reject unfair ones (less than 20%).[12] Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder's brain, a 'moral disgust': the anterior insula is more active when unfair offers are proposed,[13] and insular activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers.[14] Moreover, unfair offers are associated with greater skin conductance.[15] Visceral and insular responses occur only when the proposer is human: a computer does not elicit those reactions. Besides the anterior insula, another area is recruited in ultimatum decisions: the dorsolateral prefrontal cortex (DLPFC). When there is more activity in the anterior insula than in the DLPFC, unfair offers tend to be rejected, while they tend to be accepted when DLPFC activation is greater than that of the anterior insula.
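The game's payoff structure, with a responder who rejects offers below the roughly 20% cutoff reported in these studies, can be sketched as follows (the `threshold` parameter and function names are my own illustrative choices, not part of the cited experiments):

```python
# Ultimatum-game payoffs: the proposer splits `total`; if the offer falls
# below the responder's fairness threshold, both players get nothing.

def ultimatum(total, offer, threshold=0.2):
    """Return (proposer_payoff, responder_payoff)."""
    if offer < threshold * total:
        return (0.0, 0.0)   # unfair offer rejected: nobody is paid
    return (total - offer, offer)

print(ultimatum(10.0, 4.0))  # fair offer accepted -> (6.0, 4.0)
print(ultimatum(10.0, 1.0))  # unfair offer rejected -> (0.0, 0.0)
```

Note that rejection is never payoff-maximizing for the responder in isolation, which is exactly why the observed rejections are the puzzle the post discusses.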

These two experiments illustrate how neuroeconomics is beginning to decipher value spaces and how valuation relies on affective mechanisms. Although human valuation is more complex than the simple valence-magnitude space, this 'neuro-utilitarist' framework is useful for interpreting imaging and behavioral data: for instance, we need an explanation for insular activation in purchasing and ultimatum decisions, and the simplest and most informative one, as of today, is that they trigger a simulated disgust. More generally, it also reveals that the human value space is profoundly social: humans value fairness and reciprocity. Cooperation[16] and altruistic punishment[17] (punishing cheaters at a personal cost when the probability of future interaction is nil), for instance, activate the nucleus accumbens and other pleasure-related areas. People like to cooperate and to make fair offers.

Neuroeconomic experiments also indicate how value spaces can be similar across species. It is known, for instance, that in humans losses[18] elicit activity in fear-related areas such as the amygdala. Since capuchin monkeys' behavior also exhibits loss aversion[19] (i.e., a greater sensitivity to losses than to equivalent gains), behavioral and neural data suggest that loss aversion in primates relies on common valuation mechanisms and processing. The primate—and maybe the mammalian or even the vertebrate—value space locates losses in a particular region.
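Loss aversion is standardly expressed with a value function in which losses are scaled by a factor greater than one. A minimal sketch, not from the post itself: the coefficient 2.25 is the classic Kahneman-Tversky estimate, used here purely for illustration.

```python
# Prospect-theory-style loss aversion: losses loom larger than
# equivalent gains by a factor lam > 1 (2.25 is illustrative).

def subjective_value(x, lam=2.25):
    """Gains counted as-is; losses scaled by the loss-aversion factor."""
    return x if x >= 0 else lam * x

# A $10 loss hurts more than a $10 gain pleases:
print(subjective_value(10))   # -> 10
print(subjective_value(-10))  # -> -22.5
```

This asymmetry is the behavioral signature shared by humans and capuchins in the trading experiments cited above.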

Notes

  • [1] (Bechara & Damasio, 2005; Bechara et al., 1997; Damasio, 1994, 2003; LeDoux, 1996; Naqvi et al., 2006; Panksepp, 1998)
  • [2] (Faucher & Tappolet, 2002; Griffiths, 2004; Russell, 2003)
  • [3] (Bechara & Damasio, 2005; Damasio, 1994; Damasio et al., 1996)
  • [4] (Griffiths, 1997)
  • [5] (Darwin, 1896)
  • [6] (Cosmides & Tooby, 2000; Ekman, 1972; Griffiths, 1997)
  • [7] (Damasio, 1994)
  • [8] (Houde & Tzourio-Mazoyer, 2003)
  • [9] (Berridge, 2003; Ekman, 1999; Griffiths, 1997; Russell, 2003; Zajonc, 1980)
  • [10] The material for this part is partly drawn from (Hardy-Vallée, forthcoming)
  • [11] (Knutson et al., 2007).
  • [12] (Oosterbeek et al., 2004)
  • [13] (Sanfey et al., 2003)
  • [14] (Sanfey et al., 2003: 1756)
  • [15] (van 't Wout et al., 2006)
  • [16] (Rilling et al., 2002).
  • [17] (de Quervain et al., 2004).
  • [18] (Naqvi et al., 2006)
  • [19] (Chen et al., 2006)

References

  • Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural theory of economic decision. Games and Economic Behavior, 52(2), 336.
  • Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding Advantageously Before Knowing the Advantageous Strategy. Science, 275(5304), 1293-1295.
  • Berridge, K. C. (2003). Pleasures of the brain. Brain and Cognition, 52(1), 106.
  • Chen, M. K., Lakshminarayanan, V., & Santos, L. (2006). How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior. Journal of Political Economy, 114(3), 517-537.
  • Cosmides, L., & Tooby, J. (2000). Evolutionary psychology and the emotions. Handbook of Emotions, 2, 91-115.
  • Damasio, A. R. (1994). Descartes' error : emotion, reason, and the human brain. New York: Putnam.
  • Damasio, A. R. (2003). Looking for Spinoza : joy, sorrow, and the feeling brain (1st ed.). Orlando, Fla. ; London: Harcourt.
  • Damasio, A. R., Damasio, H., & Christen, Y. (1996). Neurobiology of decision-making. Berlin ; New York: Springer.
  • Darwin, C. (1896). The expression of the emotions in man and animals ([Authorized ed.). New York,: D. Appleton.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., et al. (2004). The neural basis of altruistic punishment. Science, 305(5688), 1254-1258.
  • Ekman, P. (1972). Emotion in the human face: guide-lines for research and an integration of findings. New York,: Pergamon Press.
  • Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion (pp. 45-60). Sussex John Wiley & Sons, Ltd.
  • Faucher, L., & Tappolet, C. (2002). Fear and the focus of attention. Consciousness & emotion, 3(2), 105-144.
  • Griffiths, P. E. (1997). What emotions really are : the problem of psychological categories. Chicago, Ill.: University of Chicago Press.
  • Griffiths, P. E. (2004). Emotions as Natural and Normative Kinds. Philosophy of Science, 71, 901–911.
  • Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural foundations of logical and mathematical cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural predictors of purchases. Neuron, 53(1), 147-156.
  • LeDoux, J. E. (1996). The emotional brain : the mysterious underpinnings of emotional life. New York: Simon & Schuster.
  • Naqvi, N., Shiv, B., & Bechara, A. (2006). The Role of Emotion in Decision Making: A Cognitive Neuroscience Perspective. Current Directions in Psychological Science, 15(5), 260-264.
  • Oosterbeek, H., Sloof, R., & van de Kuilen, G. (2004). Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics, 7, 171-188.
  • Panksepp, J. (1998). Affective neuroscience : the foundations of human and animal emotions. New York: Oxford University Press.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychol Rev, 110(1), 145-172.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755-1758.
  • van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective state and decision-making in the Ultimatum Game. Exp Brain Res, 169(4), 564-568.
  • Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2), 151-175.





7/25/07

More than Trust: Oxytocin Increases Generosity

It has been known for a few years that oxytocin (OT) increases trust (Kosfeld et al., 2005): in the Trust game, players transferred more money after inhaling OT. Recent research suggests that it also increases generosity. In a paper presented at the ESA (Economic Science Association, an empirically oriented economics society) meeting, Stanton, Ahmadi, and Zak (from the Center for Neuroeconomics Studies) showed that Ultimatum players in the OT group offered more money (21% more) than those in the placebo group: $4.86 (OT) vs. $4.03 (placebo).
They defined generosity as "an offer that exceeds the average of the MinAccept" (p. 9), i.e., the minimum acceptable offer set by the "responder" in the Ultimatum. In this case, offers over $2.97 were categorized as generous. Again, OT subjects displayed more generosity: the OT group offered $1.86 (80% more) above the minimum acceptable offer, while placebo subjects offered $1.03.
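The paper's generosity measure, as described above, is easy to sketch. The `min_accepts` values below are hypothetical responder thresholds chosen so their average matches the $2.97 cutoff reported here; `generosity` is an illustrative helper, not the authors' code.

```python
# Generosity as defined in the study: the amount by which an offer
# exceeds the average minimum acceptable offer (MinAccept).

def generosity(offer, min_accepts):
    """Amount offered above the average MinAccept (0.0 if not generous)."""
    cutoff = sum(min_accepts) / len(min_accepts)
    return max(0.0, offer - cutoff)

min_accepts = [2.5, 3.0, 3.41]  # hypothetical thresholds; average = 2.97

print(generosity(4.86, min_accepts))  # OT-group mean offer: ~1.89 above cutoff
print(generosity(2.50, min_accepts))  # below the cutoff -> 0.0
```

(The small gap between the ~1.89 computed here and the $1.86 reported in the paper presumably reflects rounding in the reported group means.)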


Interestingly, OT subjects did not turn into pure altruists: they made offers in the Dictator game (mean $3.77) similar to those of placebo subjects (mean $3.58, no significant difference). Thus the motive is neither direct nor indirect reciprocity (the Ultimatum games were blinded and one-shot, so there is no tit-for-tat or reputation involved here). Nor is it pure altruism, according to Stanton et al. (or "strong reciprocity"—see this post on the distinction between types of reciprocity), because the threat of the MinAccept compels players to make fair offers. They conclude that generosity is enhanced because OT affects empathy. Subjects simulate the perspective of the other player in the Ultimatum, but not in the Dictator. Hence, generosity "runs" on empathy: in an empathizing context (Ultimatum) subjects are more generous, but in a non-empathizing context they are not—in the Dictator, it is not necessary to know the opponent's strategy in order to compute the optimal move, since her actions have no impact on the proposer's behavior. It would be interesting to see whether there is a different OT effect in basic vs. reenactive empathy (sensorimotor vs. deliberative empathy; see this post).

Interested readers should also read Neural Substrates of Decision-Making in Economic Games by one of the authors of the study (Stanton): in her PhD thesis, she describes many neuroeconomic experiments.

[Anecdote: I once asked people at the ESA why they call their society that: all the presented papers were experimental, so I thought the name should reflect the empirical nature of the conference. They replied judiciously: "Because we think that's how economics should be done"...]


7/18/07

Altruism: a research program

Phoebe: I just found a selfless good deed; I went to the park and let a bee sting me.
Joey: How is that a good deed?
Phoebe: Because now the bee gets to look tough in front of his bee friends. The bee is happy and I am not.
Joey: Now you know the bee probably died when he stung you?
Phoebe: Dammit!
- [From Friends, episode 101]
Altruism is a lively research topic. The evolutionary foundations, neural substrates, psychological mechanisms, behavioral manifestations, formal modeling and philosophical analyses of cooperation constitute a coherent—although not unified—field of inquiry. See for instance how neuroscience, game theory, economics, philosophy, psychology and evolutionary theory interact in Penner et al. 2005; Hauser 2006; Fehr and Fischbacher 2002; Fehr and Fischbacher 2003. The study of prosocial behavior, from kin selection to animal cooperation to human morality, can be considered a progressive Lakatosian research program. Altruism has great conceptual "sex appeal" because it is a mystery for two types of theoreticians: biologists and economists. Both wonder why an animal or an economic agent would help another: since these agents maximize fitness/utility, altruistic behavior is suboptimal. Altruism (help, trust, fairness, etc.) seems intuitively incoherent with economic rationality and biological adaptation, with markets and natural selection. Or is it?

In the '60s, biologists challenged the idea that natural selection is incompatible with altruism. Hamilton (1964a, 1964b) and Trivers (1971) showed that biological altruism makes sense. An animal X might behave altruistically toward another Y because they are genetically related: in doing so, X maximizes the copying of its genes, since many of them will be hosted in Y. Thus the more X and Y are genetically related, the more X will be ready to help Y. This is kin altruism. Altruism can also be reciprocal: scratch my back and I'll scratch yours. Tit-for-tat, or reciprocal altruism, also makes sense because by being altruistic one may augment one's payoff. X helps Y, but the next time Y will help X; thus it is better to help than not to help. In both cases, the idea is that altruism is a means, not an end. Others argue that more complex types of altruism exist. For instance, X can help Y because Y already helped Z (indirect reciprocity). In this case, the tit-for-tat logic is extended to agents the helper did not meet in the past. Generalized reciprocity (see this previous post) is another type of altruism: helping someone because someone helped you in the past. This altruism does not require memory or personal identification: X helps someone because someone else helped X. Finally, strong reciprocity is the idea that humans display genuine altruism: strong reciprocators cooperate with cooperators, do not cooperate with cheaters, and are ready to punish cheaters even at a cost to themselves. Its proponents argue that it evolved through group selection.
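Hamilton's kin-selection result can be stated compactly: helping kin pays when r*B > C, where r is the genetic relatedness between helper and recipient, B the benefit to the recipient, and C the cost to the helper. A sketch of this standard rule (the numeric values are illustrative, and the rule itself is the textbook formulation rather than anything specific to this post):

```python
# Hamilton's rule: kin altruism is favored when r * B > C.

def helps_kin(r, benefit, cost):
    """True when the inclusive-fitness benefit outweighs the helper's cost."""
    return r * benefit > cost

print(helps_kin(0.5, 4.0, 1.0))    # full sibling (r=0.5): 2.0 > 1.0 -> True
print(helps_kin(0.125, 4.0, 1.0))  # cousin (r=0.125): 0.5 > 1.0 -> False
```

The rule captures the post's point that altruism here is a means, not an end: the same cost is worth paying for a sibling but not for a distant cousin.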

Experimental economics and neuroeconomics also challenged the idea of the rational, greedy, selfish actor (the Ayn Rand hero). Experimental game theory showed that, contrary to orthodox game theory, subjects cooperate massively in the prisoner's dilemma (Ledyard, 1995; Sally, 1995). Rilling et al. showed that players enjoy cooperating: players who initiate and players who experience mutual cooperation display activation in the nucleus accumbens and other reward-related areas such as the caudate nucleus, ventromedial frontal/orbitofrontal cortex, and rostral anterior cingulate cortex (Rilling et al., 2002). In another experiment, the presentation of faces of intentional cooperators caused increased activity in reward-related areas (Singer et al. 2004). In the ultimatum game, proposers make 'fair' offers of about 50% of the amount; responders tend to accept these offers and reject most 'unfair' offers (less than 20%; Oosterbeek et al., 2004). Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responders' brain, a 'moral disgust': the anterior insula (associated with negative emotional states like disgust or anger) is more active when unfair offers are proposed (Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003). Subjects experience this affective reaction to unfairness only when the proposer is a human being: the activation is significantly lower when the proposer is a computer. Moreover, the anterior insula activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers (Sanfey et al., 2003: 1756). Fehr and Fischbacher (2002) suggested that economic agents are inequity-averse and have prosocial preferences, and they modified utility functions to account for behavioral (and now neural) data. In Moral Markets: The Critical Role of Values in the Economy, Paul Zak proposes a radically different conception of morality in economics:

The research reported in this book revealed that most economic exchange, whether with a stranger or a known individual, relies on character values such as honesty, trust, reliability, and fairness. Such values, we argue, arise in the normal course of human interactions, without overt enforcement—lawyers, judges or the police are present in a paucity of economic transactions (...). Markets are moral in two senses. Moral behavior is necessary for exchange in moderately regulated markets, for example, to reduce cheating without exorbitant transactions costs. In addition, market exchange itself can lead to an understanding of fair-play that can build social capital in nonmarket settings. (Zak, forthcoming)

See how similar this claim is to:

The two fundamental principles of evolution are mutation and natural selection. But evolution is constructive because of cooperation. New levels of organization evolve when the competing units on the lower level begin to cooperate. Cooperation allows specialization and thereby promotes biological diversity. Cooperation is the secret behind the open-endedness of the evolutionary process. Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add "natural cooperation" as a third fundamental principle of evolution beside mutation and natural selection.
(Nowak, 2006)

Hence, biological and economic theorizing followed a similar path: both started with the assumption that agents value only their own payoff; evidence then suggested that agents behave altruistically; and, finally, theoretical models were amended and now incorporate different kinds of reciprocity.

So is it good news? Are we genuinely altruistic? First, a clarification: there is a difference between biological and psychological altruism, and the former does not entail the latter; biological altruism is about fitness consequences (survival and reproduction), while psychological altruism is about motivation and intentions:

Where human behaviour is concerned, the distinction between biological altruism, defined in terms of fitness consequences, and ‘real’ altruism, defined in terms of the agent's conscious intentions to help others, does make sense. (Sometimes the label ‘psychological altruism’ is used instead of ‘real’ altruism.) What is the relationship between these two concepts? They appear to be independent in both directions (...). An action performed with the conscious intention of helping another human being may not affect their biological fitness at all, so would not count as altruistic in the biological sense. Conversely, an action undertaken for purely self-interested reasons, i.e. without the conscious intention of helping another, may boost their biological fitness tremendously (Biological Altruism, Stanford Encyclopedia of Philosophy; see also a forthcoming paper by Stephen Stich and the classic Sober & Wilson 1998).

The interesting question, for many researchers, is then: what is the link between biological and psychological altruism? A common view suggests that non-human animals are biological altruists, while humans are also psychological altruists. I would like to argue against this sharp divide and briefly suggest three things:
  1. Non-humans also display psychological altruism
  2. Human altruism is strongly influenced by biological motives
  3. Prosocial behavior in human and non-human animals should be understood as a single phenomenon: cooperation in the economy of nature

1. Non-humans also display psychological altruism


As discussed in a previous post, a recent research paper showed that rats exhibit generalized reciprocity: rats who had previously been helped were more likely (by 20%) to help an unknown partner than rats who had not been helped. Although the authors of the paper take a more prudent stance, I consider generalized reciprocity a form of psychological altruism (remember, it can be both): rats cooperate because they "feel good", and that feeling is induced by cooperation, not by a particular agent. Hence their brains value cooperation in itself (probably thanks to hormonal mechanisms similar to ours), even if there is no direct tit-for-tat. In the same issue of PLoS Biology, primatologist Frans de Waal (2007) also argues that animals show signs of psychological altruism; it is particularly clear in an experiment (Warneken et al., again, in the same journal) showing that chimpanzees are ready to help unknown humans and conspecifics (hence ruling out kin and tit-for-tat altruism), even at a cost to themselves. Here is the description of the experiments:

In the first experiment, the chimpanzee saw a person unsuccessfully reach through the bars for a stick on the other side, too far away for the person, but within reach of the ape. The chimpanzees spontaneously helped the reaching person regardless of whether this yielded a reward, or not. A similar experiment with 18-month-old children gave exactly the same outcome. Obviously, both apes and young children are willing to help, especially when they see someone struggling to reach a goal. The second experiment increased the cost of helping. The chimpanzees were still willing to help, however, even though now they had to climb up a couple of meters, and the children still helped even after obstacles had been put in their way. Rewards had been eliminated altogether this time, but this hardly seemed to matter. One could, of course, argue that chimpanzees living in a sanctuary help humans because they depend on them for food and shelter. How familiar they are with the person in question may be secondary if they simply have learned to be nice to the bipedal species that takes care of them. The third and final experiment therefore tested the apes' willingness to help each other, which, from an evolutionary perspective, is also the only situation that matters. The set-up was slightly more complex. One chimpanzee, the Observer, would watch another, its Partner, try to enter a closed room with food. The only way for the Partner to enter this room would be if a chain blocking the door were removed. This chain was beyond the Partner's control—only the Observer could untie it. Admittedly, the outcome of this particular experiment surprised even me—and I am probably the biggest believer in primate empathy and altruism. I would not have been sure what to predict given that all of the food would go to the Partner, thus creating potential envy in the Observer. 
Yet, the results were unequivocal: Observers removed the peg holding the chain, thus yielding their Partner access to the room with food (de Waal)
(image from Warneken et al video)
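The generalized-reciprocity mechanism, "help anyone if you have just been helped", can be sketched as a toy simulation. The baseline rate and the 20% boost below are illustrative parameters, not the paper's actual data:

```python
import random

# Toy model of generalized reciprocity: an agent's chance of helping a
# stranger rises after the agent has itself been helped, regardless of
# who the previous helper was. All numbers are illustrative.

BASE_RATE = 0.5   # hypothetical baseline probability of helping
BOOST = 1.2       # previously-helped agents are 20% more likely to help

def helps(was_helped, rng):
    """Return True if the agent helps an anonymous partner on this trial."""
    p = BASE_RATE * BOOST if was_helped else BASE_RATE
    return rng.random() < p

rng = random.Random(0)
trials = 100_000
helped = sum(helps(True, rng) for _ in range(trials)) / trials
unhelped = sum(helps(False, rng) for _ in range(trials)) / trials
print(f"previously helped agents cooperate:   {helped:.2f}")   # ~0.60
print(f"previously unhelped agents cooperate: {unhelped:.2f}") # ~0.50
```

The point of the sketch is that no memory of individual partners is needed: a single internal state ("I was just helped") suffices to raise cooperation toward strangers.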

2. Human altruism is strongly influenced by biological motives

In many cases, human altruism appears to be a complex version of biological altruism (see Burnham & Johnson, 2005, The Biological and Evolutionary Logic of Human Cooperation, for a review). For instance, Madsen et al. (2007) showed that humans behave more altruistically toward their own kin when helping carries a significant genuine cost (such as muscular pain), a pattern also found in questionnaire studies (Stewart-Williams 2007): as the cost of helping increases, subjects become more willing to help siblings than friends. Other studies showed that facial similarity enhances trust (DeBruine 2002). In each case, there is a mechanism whose function is to negotiate personal investments in relationships in order to promote the copying of genes housed in people who are, or who seem to be, our kin.
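The kin findings fit the classic logic of Hamilton's rule (Hamilton 1964): helping kin is selected for when relatedness times benefit exceeds cost, r × b > c. A one-line decision rule makes the cost effect concrete; the benefit and cost numbers below are purely illustrative:

```python
# Hamilton's rule as a decision rule: help when r * b > c,
# where r is genetic relatedness, b the benefit to the recipient,
# and c the cost to the helper. Numbers are illustrative only.

def worth_helping(r, benefit, cost):
    """True when Hamilton's rule favors helping."""
    return r * benefit > cost

# A sibling (r = 0.5) versus a non-relative (r = 0), benefit fixed at 10:
print(worth_helping(0.5, 10, 4))   # sibling at moderate cost: True
print(worth_helping(0.0, 10, 4))   # non-kin at the same cost: False
print(worth_helping(0.5, 10, 6))   # even a sibling fails at high cost: False
```

As cost rises, the rule discriminates more and more sharply in favor of close kin, which is exactly the pattern Madsen et al. and Stewart-Williams report.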

Much of this so-called altruistic behavior can be explained by the operation of hyperactive agency detectors and a bias toward fearing other people's judgment. When people are not being watched, or do not feel watched, they behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game in which the responder has to accept the offer, subjects consistently make lower offers than in the ultimatum game (Bolton, Katok, and Zwick 1998). Offers are even lower in the dictator game when the donation is fully anonymous (Hoffman et al. 1994). When subjects feel watched, or merely think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley and Fessler 2005), and people contribute three times more to an honesty box for coffee when a picture of a pair of eyes is posted rather than a picture of flowers (Bateson, Nettle, and Roberts 2006). The mere mention of ghosts enhances honest behavior in a competitive task (Bering, McLeod, and Shackelford 2005), and priming subjects with the God concept increases giving in the anonymous dictator game (Shariff and Norenzayan in press).
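For readers unfamiliar with the two games, their payoff rules can be sketched in a few lines; the stakes are illustrative, not those used in the cited experiments:

```python
# Payoff rules for the ultimatum and dictator games.
# Each function returns (proposer_payoff, responder_payoff).

def ultimatum(pot, offer, responder_accepts):
    """Responder can reject the split, leaving both players with nothing."""
    if responder_accepts:
        return pot - offer, offer
    return 0, 0

def dictator(pot, offer):
    """Responder has no say: whatever is offered is the final split."""
    return pot - offer, offer

# A stingy offer is risky in the ultimatum game...
print(ultimatum(10, 1, responder_accepts=False))  # (0, 0): rejected out of spite
# ...but costless in the dictator game, which is why offers drop there.
print(dictator(10, 1))                            # (9, 1)
```

The only structural difference between the games is the rejection threat, so any drop in dictator-game offers isolates how much "generosity" in the ultimatum game is really strategic self-protection.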

These considerations also apply to altruistic punishment, which is likewise enhanced by an audience: Kurzban, DeScioli, and O'Brien (2007) showed that, with an audience of a dozen participants, punishment expenditure tripled. Again, apparent altruism is instrumental in personal satisfaction. Other research suggests that altruism is also an advantage in sexual selection: "people preferentially direct cooperative behavior towards more attractive members of the opposite sex. Furthermore, cooperative behavior increases the perceived attractiveness of the cooperator" (Farrelly et al., 2007).

An interesting framework for understanding altruism is Hardy (no relation to me) and Van Vugt's (2006) theory of competitive altruism: "individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable." We need, however, a more general perspective.


3. Prosocial behavior in human and non-human animals should be understood as a single phenomenon: cooperation in the economy of nature

All organic beings are striving to seize on each place in the economy of nature - (Darwin, [1859] 2003, p. 90)

With Darwin, natural economy began to be understood with the conceptual tools of political economy. The division of labor, competition (“struggle” in Darwin’s words), trading, cost, the accumulation of innovations, the emergence of complex order from unintentional individual actions, the scarcity of resources and the geometric growth of populations are ideas borrowed from Adam Smith, Thomas Malthus, David Hume and other founders of modern economics. Thus, the economy of nature ceased to be an abstract representation of the universe and became a depiction of the complex web of interactions between biological individuals, species and their environment—the subject matter of ecology. Consequently, Darwin’s main contributions are his transforming biology into a historical science—like geology—and into an economic science.

I take the economy-of-nature principle to be a refinement of the natural selection principle: while it describes general features of the biosphere, it puts the emphasis on the intersection between individual biographies and natural selection, and especially on decision-making. On the one hand, the decisions biological individuals make increase or decrease their fitness, and thus good decision-makers are more likely to propagate their genes. On the other hand, natural selection is likely to favor good decision-makers and to weed out bad ones. Thus, if our best descriptive theories of animal and human economic behavior indicate that all these agents have prosocial preferences and make altruistic decisions, then these preferences and decisions are not maladaptive and irrational: they must have an evolutionary and an economic payoff. Markets and natural selection require cooperation, even if the deep motivations are partly selfish. Fairness, equity and honesty are social goods in the economy of nature, human and non-human.
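The claim that selection can favor prosocial decision rules can be illustrated with standard replicator dynamics. This is a textbook evolutionary-game model, not an argument from the post itself, and the stag-hunt payoffs below are hypothetical, chosen so that mutual cooperation pays best:

```python
# Replicator-dynamics sketch: strategy frequencies grow in proportion to
# payoff advantage. With stag-hunt payoffs (mutual cooperation beats mutual
# defection), cooperation is an evolutionarily stable outcome, not a fluke.

# Payoffs to the row strategy against a Cooperator or Defector (hypothetical).
PAYOFF = {("C", "C"): 4.0, ("C", "D"): 0.0,
          ("D", "C"): 3.0, ("D", "D"): 2.0}

def replicator_step(x):
    """One generation: cooperator share x grows with its payoff advantage."""
    f_c = x * PAYOFF[("C", "C")] + (1 - x) * PAYOFF[("C", "D")]
    f_d = x * PAYOFF[("D", "C")] + (1 - x) * PAYOFF[("D", "D")]
    avg = x * f_c + (1 - x) * f_d
    return x * f_c / avg

def evolve(x, generations=200):
    for _ in range(generations):
        x = replicator_step(x)
    return x

print(evolve(0.7))  # above the threshold, cooperation fixates (~1.0)
print(evolve(0.5))  # below it, cooperation collapses (~0.0)
```

The model makes the hedged point in the text concrete: whether prosocial decision-makers spread depends on the payoff structure of the economy they inhabit, and in many plausible structures they do.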


  • Bateson, M., D. Nettle, and G. Roberts. 2006. Cues of being watched enhance cooperation in a real-world setting. Biology Letters 12:412-414.
  • Bering, J. M., K. McLeod, and T. K. Shackelford. 2005. Reasoning about dead agents reveals possible adaptive trends. Human Nature 16 (4):360-381.
  • Bolton, G. E., E. Katok, and R. Zwick. 1998. Dictator game giving: Rules of fairness versus acts of kindness. International Journal of Game Theory 27:269-299.
  • Burnham, T. C., and D. D. P. Johnson. 2005. The biological and evolutionary logic of human cooperation. Analyse & Kritik 27:113-135.
  • DeBruine, L. M. 2002. Facial resemblance enhances trust. Proceedings of the Royal Society B 269 (1498):1307-1312.
  • de Waal, F. B. M. 2007. With a little help from a friend. PLoS Biology 5 (7): e190. doi:10.1371/journal.pbio.0050190.
  • Farrelly, D., J. Lazarus, and G. Roberts. 2007. Altruists attract. Evolutionary Psychology 5 (2):313-329.
  • Fehr, E., and U. Fischbacher. 2002. Why social preferences matter: The impact of non-selfish motives on competition, cooperation and incentives. Economic Journal 112:C1-C33.
  • Fehr, E., and U. Fischbacher. 2003. The nature of human altruism. Nature 425 (6960):785-791.
  • Haley, K., and D. Fessler. 2005. Nobody's watching? Subtle cues affect generosity in an anonymous economic game. Evolution and Human Behavior 26 (3):245-256.
  • Hamilton, W. D. 1964a. The genetical evolution of social behaviour. I. Journal of Theoretical Biology 7 (1):1-16.
  • Hamilton, W. D. 1964b. The genetical evolution of social behaviour. II. Journal of Theoretical Biology 7 (1):17-52.
  • Hauser, M. D. 2006. Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. New York: Ecco.
  • Hoffman, E., K. McCabe, K. Shachat, and V. Smith. 1994. Preferences, property rights, and anonymity in bargaining experiments. Games and Economic Behavior 7:346-380.
  • Kurzban, R., P. DeScioli, and E. O'Brien. 2007. Audience effects on moralistic punishment. Evolution and Human Behavior 28 (2):75-84.
  • Ledyard, J. O. 1995. Public goods: A survey of experimental research. In Handbook of Experimental Economics, edited by J. H. Kagel and A. E. Roth. Princeton: Princeton University Press.
  • Madsen, E. A., R. J. Tunney, G. Fieldman, H. C. Plotkin, R. I. M. Dunbar, J.-M. Richardson, and D. McFarland. 2007. Kinship and altruism: A cross-cultural experimental study. British Journal of Psychology 98:339-359.
  • Nowak, M. A. 2006. Five rules for the evolution of cooperation. Science 314 (5805):1560-1563.
  • Okasha, S. 2005. Biological altruism. In The Stanford Encyclopedia of Philosophy (Summer 2005 Edition), edited by E. N. Zalta. URL = http://plato.stanford.edu/archives/sum2005/entries/altruism-biological/.
  • Oosterbeek, H., R. Sloof, and G. van de Kuilen. 2004. Differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7:171-188.
  • Penner, L. A., J. F. Dovidio, J. A. Piliavin, and D. A. Schroeder. 2005. Prosocial behavior: Multilevel perspectives. Annual Review of Psychology 56 (1):365-392.
  • Rilling, J., D. Gutman, T. Zeh, G. Pagnoni, G. Berns, and C. Kilts. 2002. A neural basis for social cooperation. Neuron 35 (2):395-405.
  • Rutte, C., and M. Taborsky. 2007. Generalized reciprocity in rats. PLoS Biology 5 (7): e196. doi:10.1371/journal.pbio.0050196.
  • Sally, D. 1995. Conversations and cooperation in social dilemmas: A meta-analysis of experiments from 1958 to 1992. Rationality and Society 7:58-92.
  • Sanfey, A. G., J. K. Rilling, J. A. Aronson, L. E. Nystrom, and J. D. Cohen. 2003. The neural basis of economic decision-making in the Ultimatum Game. Science 300 (5626):1755-1758.
  • Shariff, A. F., and A. Norenzayan. In press. God is watching you: Supernatural agent concepts increase prosocial behavior in an anonymous economic game. Psychological Science.
  • Singer, T., S. J. Kiebel, J. S. Winston, R. J. Dolan, and C. D. Frith. 2004. Brain responses to the acquired moral status of faces. Neuron 41 (4):653-662.
  • Sober, E., and D. S. Wilson. 1998. Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, Mass.: Harvard University Press.
  • Stewart-Williams, S. 2007. Altruism among kin vs. nonkin: Effects of cost of help and reciprocal exchange. Evolution and Human Behavior 28 (3):193-198.
  • Stich, S. Forthcoming. Evolution, altruism and cognitive architecture: A critique of Sober and Wilson's argument for psychological altruism. Biology and Philosophy.
  • Trivers, R. L. 1971. The evolution of reciprocal altruism. Quarterly Review of Biology 46 (1):35.
  • Warneken, F., B. Hare, A. P. Melis, D. Hanus, and M. Tomasello. 2007. Spontaneous altruism by chimpanzees and young children. PLoS Biology 5 (7): e184. doi:10.1371/journal.pbio.0050184.
  • Zak, P. J., ed. Forthcoming. Moral Markets: The Critical Role of Values in the Economy. Princeton, N.J.: Princeton University Press.