Natural Rationality | decision-making in the economy of nature

2/20/08

The Psychology of Moral Reasoning

In the latest issue of Judgment and Decision Making (Volume 3, Number 2, February 2008), Monica Bucciarelli, Sangeet Khemlani, and P. N. Johnson-Laird (well known for his research on mental models in reasoning) present an account of the psychology of moral reasoning (pdf - html). It is based on many experiments, gives a large place to deontic reasoning (often neglected in current moral psychology), and contrasts sharply with many sentimentalist accounts of morality (according to which moral judgment is mainly emotional). Their main findings are:

  1. Indefinability of moral propositions: No simple criterion exists to tell from a proposition alone whether or not it concerns morals as opposed to some other deontic matter, such as a convention, a game, or good manners.
  2. Independent systems: Emotions and deontic evaluations are based on independent systems operating in parallel.
  3. Deontic reasoning: All deontic evaluations, including those concerning morality, depend on inferences, either unconscious intuitions or conscious reasoning.
  4. Moral inconsistency: The beliefs that are the basis of moral intuitions and conscious moral reasoning are neither complete nor consistent.



2/6/08

A few papers on decision, the brain, and morality

A few papers worth reading:

A study on meta-ethics beliefs (how people see ethical claims):

Important for anyone interested in neuroscience: for a non-expert audience, seeing a picture of a brain makes a piece of information seem more credible:

Social cognition: it begins with goal recognition

Jesse Prinz's new book (in a nutshell: morality is emotional and relative to a culture)

Forthcoming in the new journal Neuroethics:

A study on Chimpanzee barter behavior:

  • Brosnan, S. F., Grady, M. F., Lambeth, S. P., Schapiro, S. J., & Beran, M. J. (2008). Chimpanzee Autarky. PLoS ONE, 3(1), e1518.




1/14/08

Pinker in the NYT

The Moral Instinct


Which of the following people would you say is the most admirable: Mother Teresa, Bill Gates or Norman Borlaug? And which do you think is the least admirable? For most people, it’s an easy question. Mother Teresa, famous for ministering to the poor in Calcutta, has been beatified by the Vatican, awarded the Nobel Peace Prize and ranked in an American poll as the most admired person of the 20th century. Bill Gates, infamous for giving us the Microsoft dancing paper clip and the blue screen of death, has been decapitated in effigy in “I Hate Gates” Web sites and hit with a pie in the face. As for Norman Borlaug . . . who the heck is Norman Borlaug?

Yet a deeper look might lead you to rethink your answers. Borlaug, father of the “Green Revolution” that used agricultural science to reduce world hunger, has been credited with saving a billion lives, more than anyone else in history. Gates, in deciding what to do with his fortune, crunched the numbers and determined that he could alleviate the most misery by fighting everyday scourges in the developing world like malaria, diarrhea and parasites. Mother Teresa, for her part, extolled the virtue of suffering and ran her well-financed missions accordingly: their sick patrons were offered plenty of prayer but harsh conditions, few analgesics and dangerously primitive medical care (...)





11/20/07

A Tentative Definition of Morality

Usually, we recognize moral agents as social beings who tend to engage in behavior whose consequences may benefit others and who comply with rules that promote such consequences. Rescuing a drowning child counts as a morally good act, while firing a female employee because of her pregnancy counts as a morally wrong act. It is right/wrong not just because it has certain consequences, but also because it is coherent/incoherent with certain norms of goodness and wrongness that moral agents are expected to follow. For instance, we consider rape immoral not only because of its harmful consequences, but also because it profoundly violates the autonomy and personal rights of an individual and the “general moral prohibition against using other persons against their wills” (Goldman, 1977, p. 281). This normativity can be produced either by intuitive reactions or by deliberate judgments. We may have visceral feelings about the goodness and wrongness of certain acts, even if we cannot tell why we find them good or bad (a phenomenon known as “moral dumbfounding” (Haidt & Hersh, 2001)). For other, more complex and less consensual questions (e.g., euthanasia), we may have to rely on principles, doctrines, beliefs, codes or virtues. In every case, morality is a set of dispositions to make normatively assessable decisions and judgments (either intuitively or intellectually) about appropriate social behavior and values. This appropriateness, as Haidt showed, usually revolves around five important moral themes: harm and care, fairness and reciprocity, loyalty, authority and respect, purity and sanctity, each of which determines virtues and vices: kindness/cruelty, honesty/dishonesty, self-sacrifice/treason, obedience/disobedience, temperance/intemperance (Haidt, 2007).

Morality thus encompasses most, if not all, appropriateness standards. All human societies tend to approve of certain behaviors and promote moral codes through cultural, religious or legal means. For instance, most Westerners do not see anything wrong with a widow eating fish; in certain places in India, however, this is considered an immoral act (Shweder et al., 1997). A universal feature of human cognition is thus the moral attitude: “people expect others to act in certain ways and not in others, and they care about whether or not others are following these norms” (Haidt, in press). Thus, although the content of norms and the scope of the moral domain are, to a certain extent, culturally variable, the deontic attitude of people around the world is a constant. So instead of a crisp definition of morality, maybe we need another kind of conceptual representation.

To represent the moral domain, I suggest that we imagine a two-dimensional space: a deontic dimension and a social dimension. The first represents the different deontic attitudes we can have about conduct: forbidden, permitted, obligatory; the other represents the number of agents that are the objects of the moral judgment, i.e., 1 (personal), 2 or more (interpersonal) or a great number (collective). Any moral judgment is a statement about the deontic status (ordinate) of an action (abscissa). But, you might reply, isn’t a statement like “you should not cross if the light is red” deontic and social without being really moral? Well, I would say that it is moral. The whole society is, as Durkheim said, “une oeuvre morale” (a moral work). We interact with each other from the “moral stance”; whether it is crossing the street or killing someone, each act has a moral (deontic × social) status.
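To make this representation concrete, here is a minimal sketch in Python; the encoding of the two dimensions and the example judgments are my own illustrations, not a formal model from the literature:

```python
# A minimal sketch of the proposed two-dimensional moral space.
# The deontic dimension (ordinate) and the social dimension (abscissa)
# are encoded as small enumerations; any moral judgment is then a
# point in this space. Purely illustrative.

from dataclasses import dataclass
from enum import Enum

class Deontic(Enum):
    FORBIDDEN = "forbidden"
    PERMITTED = "permitted"
    OBLIGATORY = "obligatory"

class SocialScope(Enum):
    PERSONAL = 1        # one agent
    INTERPERSONAL = 2   # two or more agents
    COLLECTIVE = 3      # a great number of agents

@dataclass
class MoralJudgment:
    action: str
    deontic: Deontic
    scope: SocialScope

# Both everyday conventions and paradigmatic moral cases get a
# (deontic x social) status in this space:
judgments = [
    MoralJudgment("crossing on a red light", Deontic.FORBIDDEN, SocialScope.INTERPERSONAL),
    MoralJudgment("rescuing a drowning child", Deontic.OBLIGATORY, SocialScope.INTERPERSONAL),
]
for j in judgments:
    print(f"{j.action}: {j.deontic.value}, scope={j.scope.name.lower()}")
```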



However, it is clear that no definition of morality will ever be sufficient. As Nado et al. (to appear) observe, no attempt to define morality, or even to say what such a definition would look like (such as the essays in Wallace & Walker, 1970), has ever reached consensus. So I don’t expect a consensus here, but I hope this can be a useful approach.

References.
  • Goldman, A. H. (1977). Plain Sex. Philosophy and Public Affairs, 6(3), 267-287.
  • Haidt, J. (2007). The New Synthesis in Moral Psychology. Science, 316(5827), 998-1002.
  • Haidt, J., & Hersh, M. A. (2001). Sexual Morality: The Cultures and Emotions of Conservatives and Liberals. Journal of Applied Social Psychology, 31(1), 191-221.
  • Haidt, J., & Joseph, C. (in press). The Moral Mind: How 5 Sets of Innate Moral Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules. In P. Carruthers, S. Laurence & S. Stich (Eds.), The Innate Mind, Vol. 3.
  • Nado, J., Kelly, D., & Stich, S. (to appear). Moral Judgment. In J. Symons & P. Calvo (Eds.), Routledge Companion to the Philosophy of Psychology.
  • Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The "Big Three" of Morality (Autonomy, Community, Divinity) and the "Big Three" Explanations of Suffering. In A. Brandt & P. Rozin (Eds.), Morality and Health (pp. 119-169). New York: Routledge.
  • Wallace, G., & Walker, A. D. M. (1970). The Definition of Morality. London: Methuen.



11/2/07

Evolution, cooperation and kinds of altruism

[Another clarification attempt; as usual, comments welcome!]


Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add "natural cooperation" as a third fundamental principle of evolution beside mutation and natural selection (Nowak, 2006, p. 1563)


To a first approximation, a cooperative behavior i) benefits the recipient and ii) is beneficial or costly to the actor. Thus cooperation has two components: altruism (costly) and mutual benefit (beneficial).

Following Sober and Wilson (1998), one may distinguish evolutionary (or biological) altruism from psychological altruism. Psychological altruism is a psychological motivation that takes another agent’s well-being (or utility) as an ultimate end, i.e., the other agent’s well-being “is desired for its own sake, rather than because the agent thinks that satisfying the desire will lead to the satisfaction of some other desire” (Stich, 2007, p. 268). Evolutionary altruism refers to behavior by which an organism engages in a costly behavior that benefits another organism, the costs and benefits being evaluated in terms of fitness consequences. Biologists since Darwin have wondered why an individual would invest time and resources to help another: “He who was ready to sacrifice his life, […] rather than betray his comrades, would often leave no offspring to inherit his noble nature” (Darwin, 1871/2000, p. 130). They came up with two types of explanation: cooperation has either direct or indirect fitness consequences.

As demonstrated by Grafen, population genetics entails that natural selection favors individuals that maximize their fitness (Grafen, 1999, 2002, 2006). This does not mean that biological agents are optimal or perfect, but rather that they optimize their fitness: they tend to behave in such a way that their genes get replicated. This tendency is statistical, not teleological: on average, they do better than chance. In at least three situations (there are others, but I will discuss only the most salient in the literature), gene propagation and fitness optimization may be facilitated by other organisms and require cooperation.


Fig. 1, based on West et al., 2007.

Cooperation can have direct or indirect fitness benefits, i.e., cooperating can contribute to one’s own survival and reproduction (direct) or to genetically related organisms’ survival and reproduction (indirect). Since relatives share genes with the actor, helping them is a way to maximize indirect fitness (Hamilton, 1964a, 1964b). Cooperative individuals can also maximize direct fitness if they reciprocate in repeated encounters (Trivers, 1971). A’s helping B is fitness-enhancing if A can expect B to help him in the future (direct reciprocity, or “tit-for-tat” altruism (Axelrod, 1984)). Indirect reciprocity brings in a third individual: A helps B even if A never encountered B in the past, because helping B contributes to building a good reputation and may result in being helped by another individual C (Nowak & Sigmund, 2005). Cooperative individuals are thus more likely to be helped (see Fig. 2).


Fig. 2, from Nowak (2006).
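Nowak (2006) summarizes when each mechanism can sustain cooperation as a simple inequality on the benefit-to-cost ratio b/c. A minimal sketch of the three rules discussed here (the numerical values are made up for illustration):

```python
# Nowak's (2006) rules: each mechanism supports the evolution of
# cooperation when the benefit-to-cost ratio b/c exceeds a threshold.
# Example values below are arbitrary, for illustration only.

def kin_selection(b, c, r):
    """Cooperation favored if b/c > 1/r (r = genetic relatedness)."""
    return b / c > 1 / r

def direct_reciprocity(b, c, w):
    """Favored if b/c > 1/w (w = probability of another encounter)."""
    return b / c > 1 / w

def indirect_reciprocity(b, c, q):
    """Favored if b/c > 1/q (q = probability reputation is known)."""
    return b / c > 1 / q

b, c = 3.0, 1.0  # benefit to recipient, cost to actor (fitness units)
print(kin_selection(b, c, r=0.5))        # True: full siblings (r = 1/2)
print(direct_reciprocity(b, c, w=0.9))   # True: repeated encounters likely
print(indirect_reciprocity(b, c, q=0.2)) # False: reputation rarely known
```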

Direct reciprocity can be compared to “a barter economy based on the immediate exchange of goods, whereas indirect reciprocity resembles the invention of money. The money that fuels the engines of indirect reciprocity is reputation” (Nowak, 2006, p. 1561). Indirect benefits also explain evolutionary altruism as costly signaling (Zahavi & Zahavi, 1997). Evolutionarily speaking, the peacock’s tail is not the most useful body part: it makes movement difficult and is far from discreet. However, this handicap is a costly signal, since it is, for peacocks, a sign of fitness: offspring of peacocks with elaborate tails “grow and survive better under nearly natural conditions” (Petrie, 1994, p. 598). Hence the tail becomes a hard-to-fake signal directed at female peacocks. A behavior can be a costly signal if it is easily observable, costly to the actor, reliably associated with some desirable characteristic (resources, power, skills, etc.) and leads to some evolutionary advantage such as mates, food, etc. (Smith & Bird, 2000). Costly signaling theory can thus explain altruism as a behavioral costly signal: in helping unknown and unrelated individuals (when it is a perceptible and perceptibly costly behavior), altruistic individuals are in fact building social capital through hard-to-fake signals (Smith & Bird, 2004; Zahavi, 2000).

Evolutionary and psychological altruism contrast sharply. The former is a fitness-maximizing behavioral pattern whereby individuals cooperate because it promotes gene propagation and was selected for its fitness-enhancing consequences. Biological organisms cooperate in order to maximize their inclusive fitness (the conjunction of direct and indirect fitness). Evolutionary altruism can be found in microbes and plants as well as in birds and humans (although indirect reciprocity seems to be uniquely human). Psychological altruism, most probably a capacity reserved for higher mammals or primates, is a motivation that has nothing to do with fitness (Sober & Wilson, 1998). A behavior can be evolutionarily altruistic without being psychologically altruistic and vice versa (vervet monkeys’ predator alarm calls, for instance). It is nonetheless possible that a psychologically altruistic behavior has fitness-enhancing consequences or that an evolutionarily altruistic behavior is motivated by psychological altruism (child care, for instance). One might resist using the term altruism for kin discrimination and direct and indirect reciprocity, since helping other individuals so as to maximize one’s inclusive fitness does not sound altruistic at all. Remember Haldane’s remark that he would give his life to save two brothers or eight cousins, since the shared genetic material is identical (quoted in McElreath & Boyd, 2007, p. 82). It seems that in the end, any kind of altruistic behavior turns out to be un-altruistic:

To extend Haldane’s famous remark, kinship can explain rescuing drowning people if they are relatives (…); reciprocal altruism if they return the favor (…); indirect reciprocity if a third party returns the favor (…) and signaling if the rescuer is judged more attractive (Farrelly et al., 2007, p. 314)

Yet if we reserve altruism solely for psychological (‘real’) altruism, it is impossible to look for an evolutionary account of altruism. “If by ‘real’ altruism we mean altruism done with the conscious intention to help, then the vast majority of living creatures are not capable of ‘real’ altruism nor therefore of ‘real’ selfishness either” (Okasha, 2005, §4).

Although psychological and evolutionary altruism are well-defined concepts, they leave aside important characteristics of altruism. Take the costs and benefits, for instance. Psychological altruism, construed as a motivation, has no explicit costs or benefits. Evolutionary altruism has costs and benefits: copies of genes in the gene pool. But how can we measure whether organism X helping organism Y increases the number of X’s (or Y’s) offspring? Of course, evolutionary theory is “population thinking” (Mayr, 1959) and does not deal with single individuals in isolation. Yet behavioral ecology (the study of animal behavior), psychology, and experimental economics do deal with individuals and need to quantify altruistic behavior. Behavioral ecologists, as White, Dill and Crawford remark, “almost always ignore the number of offspring produced and study, instead, how a particular adaptation contributes to some fitness proxy, for example, net energy intake rate” (White et al., 2007, p. 276). Most of the research on cooperation deals with fitness proxies such as money, food, or status: when someone donates to humanitarian organizations, it is possible to quantify how much money is donated, but not—or only with more difficulty—how this act increases fitness through reputation (indirect reciprocity). Similarly, cooperating in a repeated prisoner’s dilemma is direct-reciprocal, but it is not clear how it promotes gene propagation; it could be fitness-enhancing, but this is not how experimental game theory measures cooperation. Hence, between evolutionary and psychological altruism, I suggest we add another type of altruism: economic altruism. A behavior is economically altruistic if it benefits the recipient and is costly to the actor; the costs and benefits are not fitness consequences, but commodities or resources (food, money, information, etc.). An economically altruistic behavior could be fitness-enhancing, but need not be; it could be motivated by psychological altruism, but need not be. Economic altruism can therefore be ‘pure’ (disinterested, in which case it overlaps with psychological altruism) or ‘impure’ when it is motivated by a warm-glow feeling (Andreoni, 1990).

Economic altruism is therefore another category of altruistic behavior, irreducible to—but not completely independent of—psychological and biological altruism. It is the proximate, immediate, visible face of altruism and cooperation (and, deeper, of morality), while evolutionary altruism is ultimate and psychological altruism is not directly observable (although neuroimaging technologies are now making it observable, albeit imprecisely). Therefore, when experimental economists study how much money a subject is ready to share in a lab experiment, they study economic altruism (e.g., Guth & van Damme, 1998); when social psychologists study readiness to help, they study psychological altruism (e.g., Batson, 1991); and when biologists study kin recognition and nepotism in social animals, they study evolutionary altruism (e.g., Silk, 2002).
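As a toy illustration of this taxonomy, economic altruism can be stated as a simple predicate over resource costs and benefits, with no reference to fitness or motivation (the function and its labels are my own sketch, not a model from the literature):

```python
# A toy classifier for the behavioral taxonomy sketched above:
# economic altruism is defined over commodity costs and benefits,
# independently of fitness consequences or psychological motives.

def classify_behavior(cost_to_actor: float, benefit_to_recipient: float) -> str:
    """Classify an exchange by its resource consequences (money, food, etc.)."""
    if benefit_to_recipient <= 0:
        return "not cooperative (recipient does not benefit)"
    if cost_to_actor > 0:
        return "economically altruistic (costly to actor, benefits recipient)"
    return "mutually beneficial (both gain)"

# A dictator-game transfer of $4 out of $10 is economically altruistic,
# whatever the donor's motives or fitness consequences:
print(classify_behavior(cost_to_actor=4.0, benefit_to_recipient=4.0))
```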


References
  • Andreoni, J. (1990). Impure Altruism and Donations to Public Goods: A Theory of Warm-Glow Giving. The Economic Journal, 100(401), 464-477.
  • Axelrod, R. M. (1984). The Evolution of Cooperation. New York: Basic Books.
  • Batson, C. D. (1991). The Altruism Question: Toward a Social Psychological Answer. Hillsdale, N.J.: L. Erlbaum Associates.
  • Darwin, C. (1871/2000). The Descent of Man, and Selection in Relation to Sex: Adamant Media.
  • Dawkins, R. (1976). The Selfish Gene. New York: Oxford University Press.
  • Farrelly, D., Lazarus, J., & Roberts, G. (2007). Altruists Attract. Evolutionary Psychology, 5(2), 313-329.
  • Grafen, A. (1999). Formal Darwinism, the Individual-as-Maximising-Agent Analogy, and Bet-Hedging. Proc. Roy. Soc. Ser. B, 266, 799–803.
  • Grafen, A. (2002). A First Formal Link between the Price Equation and an Optimization Program. Journal of Theoretical Biology, 217(1), 75.
  • Grafen, A. (2006). Optimization of Inclusive Fitness. J Theor Biol, 238(3), 541-563.
  • Guth, W., & van Damme, E. (1998). Information, Strategic Behavior, and Fairness in Ultimatum Bargaining: An Experimental Study. J Math Psychol, 42(2/3), 227-247.
  • Hamilton, W. D. (1964a). The Genetical Evolution of Social Behaviour. I. Journal of Theoretical Biology, 7(1), 1-16.
  • Hamilton, W. D. (1964b). The Genetical Evolution of Social Behaviour. II. Journal of Theoretical Biology, 7(1), 17-52.
  • Mayr, E. (1959). Darwin and the Evolutionary Theory in Biology. In Evolution and Anthropology: A Centennial Appraisal (pp. 409–412). Washington, D.C.: Anthropological Society of Washington.
  • McElreath, R., & Boyd, R. (2007). Mathematical Models of Social Evolution: A Guide for the Perplexed. Chicago: University of Chicago Press.
  • Nowak, M. A. (2006). Five Rules for the Evolution of Cooperation. Science, 314(5805), 1560-1563.
  • Nowak, M. A., & Sigmund, K. (2005). Evolution of Indirect Reciprocity. Nature, 437(7063), 1291-1298.
  • Okasha, S. (2005). Biological Altruism. The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), http://plato.stanford.edu/archives/sum2005/entries/altruism-biological.
  • Petrie, M. (1994). Improved Growth and Survival of Offspring of Peacocks with More Elaborate Trains. Nature, 371(6498), 598-599.
  • Silk, J. B. (2002). Kin Selection in Primate Groups. International Journal of Primatology, 23(4), 849-875.
  • Smith, E. A., & Bird, R. B. (2004). Costly Signaling and Cooperative Behavior. In H. Gintis, S. Bowles, R. Boyd & E. Ferh (Eds.), Moral Sentiments and Material Interests : The Foundations of Cooperation in Economic Life (pp. 115-148). Cambridge, Mass.: MIT Press.
  • Smith, E. A., & Bird, R. L. B. (2000). Turtle Hunting and Tombstone Opening: Public Generosity as Costly Signaling. Evolution and Human Behavior, 21(4), 245-261.
  • Sober, E., & Wilson, D. S. (1998). Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, Mass.: Harvard University Press.
  • Stich, S. (2007). Evolution, Altruism and Cognitive Architecture: A Critique of Sober and Wilson’s Argument for Psychological Altruism. Biology and Philosophy, 22(2), 267-281.
  • Trivers, R. L. (1971). The Evolution of Reciprocal Altruism. Quarterly Review of Biology, 46(1), 35-57.
  • West, S. A., Griffin, A. S., & Gardner, A. (2007). Social Semantics: Altruism, Cooperation, Mutualism, Strong Reciprocity and Group Selection. Journal of Evolutionary Biology, 20(2), 415-432.
  • White, D. W., Dill, L. M., & Crawford, C. B. (2007). A Common, Conceptual Framework for Behavioral Ecology and Evolutionary Psychology. Evolutionary Psychology, 5(2), 275-288.
  • Zahavi, A. (2000). Altruism: The Unrecognized Selfish Traits. Journal of Consciousness Studies, 7, 253-256.
  • Zahavi, A., & Zahavi, A. (1997). The Handicap Principle: A Missing Piece of Darwin's Puzzle. New York: Oxford University Press.



10/24/07

Stich on Morality and Cognition

Stephen Stich, one of the most experimentally oriented philosophers of our day, recently gave a series of talks in Paris entitled "Moral Theory Meets Cognitive Science: How Cognitive Science Can Transform Traditional Debates". You can watch the videos of the four talks online:



9/22/07

Strong reciprocity, altruism and egoism

Proponents of the Strong Reciprocity Hypothesis (i.e., Bowles, Gintis, Boyd, Fehr, Henrich, etc.; I will call them “The Collective”) claim that human beings are strong reciprocators: they are willing to sacrifice resources in order to reward fair behavior and punish unfair behavior even if there is no direct or future reward. Thus we are, according to the Collective, innately endowed with prosocial preferences and an aversion to inequity. Those who advocate strong reciprocity take it to be a ‘genuine’ altruistic force, not explained by other motives. Strong reciprocity is here contrasted with weaker forms of reciprocity, such as cooperating with someone because of genetic relatedness (kinship), because one follows a tit-for-tat pattern (direct reciprocity), wants to establish a good reputation (indirect reciprocity) or displays signs of power or wealth (costly signaling). Thus our species is made, ceteris paribus, of altruistic individuals who tend to cooperate with cooperators and punish defectors, even at a cost. Behavioral economics showed how willing people are to cooperate in games such as the prisoner’s dilemma, the ultimatum game or the trust game: they do not cheat in the first, offer fair splits in the second and transfer money in the third.

Could it be possible, however, that this so-called altruism is instrumental? I don’t think it always is, but some cases require closer scrutiny. For instance, in the Ultimatum Game there is a perfectly rational and egoistic reason to make a fair offer, such as a 50-50 split: it is the best—from one’s own point of view—solution to the trade-off between making a profit and proposing a split that the other player will accept: if you propose more, you lose more money; if you propose less, you risk a rejection. In cultures that are not market-integrated, where a 20-80 split is not seen as unfair, proposers routinely offer such splits, because they know they will be accepted.
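A minimal sketch of this egoistic rationale: the proposer maximizes expected payoff, the product of what she keeps and the probability of acceptance. The acceptance function below is invented for illustration; empirically it varies across cultures, which is why 20-80 splits can be optimal where they are deemed acceptable:

```python
# A sketch of the egoistic rationale for 'fair' ultimatum offers: the
# proposer maximizes expected payoff (pie - offer) * P(accept | offer).
# The acceptance function is a made-up illustration; real acceptance
# thresholds vary across cultures.

PIE = 10.0

def p_accept(offer: float) -> float:
    """Hypothetical responder: acceptance rises linearly up to a fair split."""
    return min(1.0, max(0.0, offer / (0.5 * PIE)))

def expected_payoff(offer: float) -> float:
    """What the proposer keeps, weighted by the chance of acceptance."""
    return (PIE - offer) * p_accept(offer)

offers = [o / 10 for o in range(0, 101)]  # $0.00 to $10.00 in 10-cent steps
best = max(offers, key=expected_payoff)
print(best, expected_payoff(best))        # 5.0 5.0: the 50-50 split wins
```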

It can also be instrumental in a more basic sense, for instance in contributing to the propagation of our genes. Madsen et al. (2007) showed that individuals behave more altruistically toward their own kin when there is a significant genuine cost (such as pain), an attitude also mirrored in questionnaire studies (Stewart-Williams, 2007): when the cost of helping increases, subjects are more ready to help siblings than friends. Finally, other studies showed that facial resemblance enhances trust (DeBruine, 2002). In each case, we see a mechanism whose function is to negotiate our investments in relationships so as to promote the copies of our genes housed in people who are kin, look like kin, or could help us expand our kin. For instance, simply viewing lingerie or pictures of sexy women makes men behave more fairly in the ultimatum game (Van den Bergh & Dewitte, 2006).

Many of these so-called altruistic behaviors can be explained by the operation of hyperactive agency detectors and a bias toward fearing other people’s judgment. When they are not being watched, or do not feel watched, people behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game in which the responder has to accept the offer, subjects consistently make lower offers than in the ultimatum game (Bolton et al., 1998). Offers are even lower in the dictator game when donation is fully anonymous (Hoffman et al., 1994). In the present framework, this would be because there is no advantage in being fair.

When subjects feel watched, or think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley & Fessler, 2005), and people contribute three times more to an honesty box for coffee when a picture of a pair of eyes is posted than when a picture of flowers is (Bateson et al., 2006). Merely speaking of ghosts enhances honest behavior in a competitive task (Bering et al., 2005), while priming subjects with the God concept increases giving in the anonymous dictator game (Shariff & Norenzayan, in press).

These reflections also apply to altruistic punishment. First, it is enhanced by an audience: Kurzban et al. (2007) showed that with an audience of a dozen participants, punishment expenditure tripled. In the trust game, players apply learned social rules and trust-building routines, but they hate it when cheaters enjoy what they themselves refrain from enjoying. Thus it feels good to reset the equilibrium. Again, apparent altruism is instrumental to personal satisfaction, at least on some occasions.

Hardy and Van Vugt, in their theory of competitive altruism, suggest that

individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable (Hardy & Van Vugt, 2006)

Maybe agents are attempting to maximize a complex hedonic utility function, where rewards and losses can be monetary, emotional or social. A possible alternative approach is what I call ‘methodological hedonism’: let’s assume, at least for the purpose of identifying cognitive mechanisms, that the brain, when functioning normally, tries to maximize hedonic feelings, even in moral behavior. We use feelings to anticipate feelings, in order to steer our behavior toward a maximization of positive feelings and a minimization of negative ones. The ‘hot logic’ of emotions is more realistic than the cold logic of traditional game theory but still preserves the idea of utility maximization (although “value” would be more appropriate). In this framework, altruistic behavior is possible, but need not rely on altruistic cognition. Cognitive mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones. This initial hedonism is gradually modulated by social norms, through which agents learn how to maximize their utility given the norms. Luckily, however, biological and cultural evolution favored patterns of self-interest that promote social order to a certain extent: institutions, social norms, routines and cultures tend to structure our behavior morally. Thus understanding morality may amount to understanding how individuals’ egoism is modulated by social processes. There might be no need to posit an innate Strong Reciprocity. Or at least it is worth exploring other avenues!
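A minimal sketch of what such a hedonic utility function could look like (the decomposition into components and the weights are my own illustrative assumptions, not an estimated model):

```python
# 'Methodological hedonism', sketched: the agent maximizes a hedonic
# utility that aggregates monetary, emotional and social payoffs. The
# decomposition and weights below are illustrative assumptions only.

def hedonic_utility(money: float, emotional: float, social: float,
                    w_money: float = 1.0, w_emotional: float = 1.5,
                    w_social: float = 1.2) -> float:
    """Weighted sum of hedonic components (each can be positive or negative)."""
    return w_money * money + w_emotional * emotional + w_social * social

# Costly punishment of a cheater can maximize hedonic utility even
# though it loses money, if the satisfaction of 'resetting the
# equilibrium' and the audience effect are large enough:
punish = hedonic_utility(money=-1.0, emotional=+1.5, social=+0.5)
abstain = hedonic_utility(money=0.0, emotional=-0.5, social=0.0)
print(punish > abstain)  # True under these made-up values
```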



Update:

I forgot to mention a thorough presentation and excellent criticism of Strong Reciprocity:

Important papers from the Collective are:
  • Bowles, S., & Gintis, H. (2004). The Evolution of Strong Reciprocity: Cooperation in Heterogeneous Populations. Theoretical Population Biology, 65(1), 17-28.
  • Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong Reciprocity, Human Cooperation, and the Enforcement of Social Norms. Human Nature, 13(1), 1-25.
  • Fehr, E., & Rockenbach, B. (2004). Human Altruism: Economic, Neural, and Evolutionary Perspectives. Curr Opin Neurobiol, 14(6), 784-790.
  • Gintis, H. (2000). Strong Reciprocity and Human Sociality. Journal of Theoretical Biology, 206(2), 169-179.


References


  • Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of Being Watched Enhance Cooperation in a Real-World Setting. Biology Letters, 2(3), 412-414.
  • Bering, J. M., McLeod, K., & Shackelford, T. K. (2005). Reasoning About Dead Agents Reveals Possible Adaptive Trends. Human Nature, 16(4), 360-381.
  • Bolton, G. E., Katok, E., & Zwick, R. (1998). Dictator Game Giving: Rules of Fairness Versus Acts of Kindness. International Journal of Game Theory, 27, 269-299.
  • DeBruine, L. M. (2002). Facial Resemblance Enhances Trust. Proc Biol Sci, 269(1498), 1307-1312.
  • Haley, K., & Fessler, D. (2005). Nobody’s Watching? Subtle Cues Affect Generosity in an Anonymous Economic Game. Evolution and Human Behavior, 26(3), 245-256.
  • Hardy, C. L., & Van Vugt, M. (2006). Nice Guys Finish First: The Competitive Altruism Hypothesis. Pers Soc Psychol Bull, 32(10), 1402-1413.
  • Hoffman, E., McCabe, K., Shachat, K., & Smith, V. (1994). Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior, 7, 346-380.
  • Kurzban, R., DeScioli, P., & O'Brien, E. (2007). Audience Effects on Moralistic Punishment. Evolution and Human Behavior, 28(2), 75-84.
  • Madsen, E. A., Tunney, R. J., Fieldman, G., Plotkin, H. C., Dunbar, R. I. M., Richardson, J.-M., & McFarland, D. (2007). Kinship and Altruism: A Cross-Cultural Experimental Study. British Journal of Psychology, 98, 339-359.
  • Shariff, A. F., & Norenzayan, A. (in press). God Is Watching You: Supernatural Agent Concepts Increase Prosocial Behavior in an Anonymous Economic Game. Psychological Science.
  • Stewart-Williams, S. (2007). Altruism among Kin Vs. Nonkin: Effects of Cost of Help and Reciprocal Exchange. Evolution and Human Behavior, 28(3), 193-198.
  • Van den Bergh, B., & Dewitte, S. (2006). Digit Ratio (2d:4d) Moderates the Impact of Sexual Cues on Men's Decisions in Ultimatum Games. Proc Biol Sci, 273(1597), 2091-2095.




8/3/07

Kahneman and Sunstein on moral psychology and institutions

What can a professor of law and political science and a Nobel Prize-winning economic psychologist/behavioral economist write about together? The interplay of cognition and institutions, of course! In a paper posted on the SSRN website, Kahneman and Sunstein discuss how the combination of dual-process theories of cognition (the idea that we have a fast and intuitive "System I" and a deliberative "System II") and research on moral intuitions can help us understand institutional decision-making:

Moral intuitions operate in much the same way as other intuitions do; what makes the moral domain distinctive is its foundations in the emotions, beliefs, and response tendencies that define indignation. The intuitive system of cognition, System I, is typically responsible for indignation; the more reflective system, System II, may or may not provide an override. Moral dumbfounding and moral numbness are often a product of moral intuitions that people are unable to justify. An understanding of indignation helps to explain the operation of the many phenomena of interest to law and politics: the outrage heuristic, the centrality of harm, the role of reference states, moral framing, and the act-omission distinction. Because of the operation of indignation, it is extremely difficult for people to achieve coherence in their moral intuitions. Legal and political institutions usually aspire to be deliberative, and to pay close attention to System II; but even in deliberative institutions, System I can make some compelling demands.
[found thanks to The Brooks blog]

On dual-process theories of reasoning, see Stanovich, Keith E. and West, Richard F. (2000). Individual Differences in Reasoning: Implications for the Rationality Debate? (BBS online archive).


References:
  • Kahneman, Daniel and Sunstein, Cass R., "Indignation: Psychology, Politics, Law" (July 2007). U of Chicago Law & Economics, Olin Working Paper No. 346. Available at SSRN: http://ssrn.com/abstract=1002707



7/31/07

Understanding two models of fairness: outcome-based inequity aversion vs. intention-based reciprocity

Why are people fair? Theoretical economics provides two generic models that fit the data. According to the first, inequity aversion, people are inequity-averse: they don't like a situation where one agent is disadvantaged relative to another. This model is based on consequences. The other model is based on intentions: although the consequences of an action are important, what matters here is the intention that motivates the action. I won't discuss which approach is better (it is an ongoing debate in economics); I just want to share an extremely clear presentation of the two positions, found in van Winden, F. (2007). Affect and Fairness in Economics. Social Justice Research, 20(1), 35-52, on pages 38-39:

In inequity aversion models (Bolton and Ockenfels, 2000; Fehr and Schmidt, 1999), which focus on the outcomes or payoffs of social interactions, any deviation between an individual's payoff and the equitable payoff (e.g., the mean payoff or the opponent's payoff) is supposed to be negatively valued by that individual. More formally, the crucial difference between an outcome-based inequity aversion model and the homo economicus model is that, in addition to the argument representing the individual's own payoff, a new argument is inserted in the utility function showing the individual's inequity aversion (social preferences), as in the social utility model (see, e.g., Handgraaf et al., 2003; Loewenstein et al., 1989; Messick and Sentis, 1985). The individual is then assumed to maximize this adapted utility function.

In intention-based reciprocity models it is not the outcomes of the interaction as such that matter, but the intentions of the players (Rabin, 1993; see also Falk and Fischbacher, 2006). The idea is that people want to reciprocate perceived (un)kindness with (un)kindness, because this increases their utility. Obviously, beliefs play a crucial role here. More formally, in this case, in addition to an individual's own payoff a new argument is inserted in the utility function incorporating the assumed reciprocity motive. As a consequence, if someone is perceived as being kind it increases the individual's utility to reciprocate with being kind to this other person. Similarly, if the other is believed to be unkind, the individual is better off by being unkind as well, because this adds to her or his utility. Again, this adapted utility function is assumed to be maximized by the individual.
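For the outcome-based family, the two-player Fehr and Schmidt (1999) utility function is the standard concrete example. A minimal sketch (the parameter values are illustrative, not the authors' calibration):

```python
# The two-player Fehr-Schmidt (1999) inequity-aversion utility:
#   U_i = x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0)
# alpha penalizes disadvantageous inequity, beta advantageous inequity
# (with beta <= alpha). Parameter values below are illustrative.

def fehr_schmidt_utility(own: float, other: float,
                         alpha: float = 2.0, beta: float = 0.6) -> float:
    envy = max(other - own, 0.0)    # disadvantageous inequity
    guilt = max(own - other, 0.0)   # advantageous inequity
    return own - alpha * envy - beta * guilt

# Why an inequity-averse responder rejects an 8-2 split of $10:
# accepting yields negative utility, rejecting (0, 0) yields zero.
print(fehr_schmidt_utility(own=2.0, other=8.0))  # 2 - 2*6 = -10.0
print(fehr_schmidt_utility(own=0.0, other=0.0))  # 0.0
```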





7/27/07

The moral stance: a brief introduction to the Knobe effect and similar phenomena

An important discovery in the new field of Experimental Philosophy (or "x-phi", i.e., "using experimental methods to figure out what people really think about particular hypothetical cases" -Online dictionary of philosophy of mind) is the importance of moral beliefs in intentional action attribution. Contrast these two cases:

[A] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’ The chairman of the board answered, ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was harmed.
[B] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.’ The chairman of the board answered, ‘I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was helped.
A and B are identical, except that in one case the program harms the environment and in the other case it helps it. Subjects were asked whether the chairman of the board intentionally harmed (A) or helped (B) the environment. Since the two cases have the same belief-desire structure, both actions should be seen as equally intentional, whether right or wrong. It turns out that in the "harm" version, most people (82%) say that the chairman intentionally harmed the environment; in the "help" version, only 23% say that the chairman intentionally helped the environment. This effect is called the "Knobe effect", because it was discovered by philosopher Joshua Knobe. In a nutshell, it means that

people’s beliefs about the moral status of a behavior have some influence on their intuitions about whether or not the behavior was performed intentionally (Knobe, 2006, p. 207)
Knobe replicated the experiment with different populations and methodologies, and the result is robust (see his 2006 paper for a review). It seems that humans are more prone to think that someone is responsible for an action if the outcome is morally wrong. Contrary to common wisdom, the folk-psychological concept of intentional action does not--or not primarily--aim at explaining and predicting action, but at attributing praise and blame. There is something morally normative to saying that "A does X".

A related post on the x-phi blog by Sven Nyholm describes a similar experiment. The focus was not intention, but happiness. The two versions were:

[A] Richard is a doctor working in a Red Cross field hospital, overseeing and carrying out medical treatment of victims of an ongoing war. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or wounded in the war don’t deserve to die, and that their well-being is of great importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
[B] Richard is a doctor working in a Nazi death camp, overseeing and carrying out executions and nonconsensual, painful medical experiments on human beings. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing that he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or experimented on don’t deserve to live, and that their well-being is of no importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
Subjects were asked whether they agreed or disagreed with the sentence "Richard is happy" (on a scale from 1 = disagree to 7 = agree). Subjects slightly agreed (4.6/7) in the morally good condition (A), whereas they slightly disagreed (3.5/7) in the morally bad condition (B), and the difference is statistically significant. Again, the concept of "being happy" is partly moral-normative.

A related phenomenon has been observed in a recently published experimental study of generosity: generous behavior is also influenced by moral-normative beliefs (Fong, 2007). In this experiment, donors had to decide how much of a $10 "pie" they wanted to transfer to a real-life welfare recipient (keeping the rest: thus it is a Dictator game). They read information about the recipients, who had filled out a questionnaire beforehand with questions about their age, race, gender, etc. The three recipients had similar profiles, except for their motivation to work. To the last three questions:

  • If you don't work full-time, are you looking for more work? ______Yes, I am looking for more work. ______No, I am not looking for more work.
  • If it were up to you, would you like to work full-time? ______Yes, I would like to work full-time. ______No, I would not like to work full-time.
  • During the last five years, have you held one job for more than a one-year period? Yes_____ No_____
one recipient replied Yes to all ("industrious"), one replied No ("lazy"), and the other did not reply ("low-information"). Donors made their decisions and money was transferred for real (by the way, that's one thing I like about experimental economics: there is no deception, no as-if: real people receive real money). Results:

The lazy, low-information, and industrious recipients received an average of $1.84, $3.21, and $2.79, respectively. The ant and the grasshopper! ("You sang! I'm at ease / For it's plain at a glance / Now, ma'am, you must dance."). As the author says:

Attitudinal measures of beliefs that the recipient in the experiment was poor because of bad luck rather than laziness – instrumented by both prior beliefs that bad luck rather than laziness causes poverty and randomly provided direct information about the recipient's attachment to the labour force – have large and very robust positive effects on offers

[In another research paper, Prof. Fong also found different biases in giving to Katrina victims.]

An interesting--and surprising--finding of this study is that this "ant effect" ("you should deserve help") was stronger in people who scored higher on humanitarianism beliefs: they don't give more than others when recipients are deemed to be poor because of laziness (another reason not to trust what people say about themselves, and to look at their behavior instead). Again, a strong moral-normative effect on beliefs and behavior. Since oxytocin increases generosity (see this post) and this effect is due to the greater empathy induced by oxytocin, I am curious to see whether people in the lazy vs. industrious experiment would, after oxytocin inhalation, become more sensitive to the origin of poverty (bad luck or laziness). If bad luck inspires more empathy, then I guess yes.

Man the moral animal?

Morality seems to be deeply entrenched in our social-cognitive mechanisms. One way to understand all these results is to posit that we routinely and usually interpret each other from the "moral stance", not the intentional one. The "intentional stance", as every Philosophy of Mind 101 course teaches us, is the perspective we adopt when we deal with intentional agents (agents who entertain beliefs and desires): we explain and predict action based on agents' rationality and the mental representations they should have, given the circumstances. In other words, it's the basic toolkit for being a game-theoretic agent. Philosophers (Dennett in particular) contrast this stance with the physical and the design stances (which we adopt when we talk about an apple that falls or the function of the "Ctrl" key on a computer, for instance). I think we should introduce a related stance, the moral stance. Maybe--but research will tell us--this stance is more basic. We switch to the purely intentional stance when, for instance, we interact with computers in experimental games. Remember how subjects don't care about being cheated by a computer in the Ultimatum Game: they have no aversive feeling (i.e., insular activation) when the computer makes an unfair offer (see a discussion in this paper by Sardjevéladzé and Machery). Hence they don't use the "moral stance", but they still use the intentional stance. Another possibility is that the moral stance might explain why people deviate from standard game-theoretic predictions: all these predictions are based on intentional-stance functionalism. This stance applies more to animals, psychopaths or machines than to normal human beings. And also to groups: in many games, such as the Ultimatum or the Centipede, groups behave more "rationally" than individuals (see Bornstein et al., 2004; Bornstein & Yaniv, 1998; Cox & Hayne, 2006), that is, they are closer to game-theoretic behavior (a point radically developed in the movie The Corporation: firms lack moral qualities). Hence the moral stance may have particular requirements (individuality, emotions, empathy, etc.).





7/25/07

More than Trust: Oxytocin Increases Generosity

It has been known for a couple of years that oxytocin (OT) increases trust (Kosfeld et al., 2005): in the Trust game, players transferred more money after inhaling OT. Recent research suggests that it also increases generosity. In a paper presented at the ESA (Economic Science Association, an empirically oriented economics society) meeting, Stanton, Ahmadi, and Zak (from the Center for Neuroeconomics Studies) showed that Ultimatum players in the OT group offered more money (21% more) than those in the placebo group: $4.86 (OT) vs. $4.03 (placebo).
They defined generosity as "an offer that exceeds the average of the MinAccept" (p. 9), i.e., the minimum acceptable offer stated by the "responder" in the Ultimatum. In this case, offers over $2.97 were categorized as generous. Again, OT subjects displayed more generosity: the OT group offered $1.86 (80% more) over the minimum acceptable offer, while placebo subjects offered $1.03.
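In code, the measure is simply the offer minus the average MinAccept; a sketch using the figures reported above (the paper reports generosity of $1.86 and $1.03, so the means quoted here are presumably rounded):

```python
# Stanton et al.'s generosity measure, as described above: an offer is
# 'generous' if it exceeds the responders' average minimum acceptable
# offer (MinAccept). Figures are those quoted above; small differences
# from the paper's $1.86/$1.03 presumably come from rounding.

MEAN_MIN_ACCEPT = 2.97  # average MinAccept in the experiment

def generosity(offer: float) -> float:
    """Amount offered above the average minimum acceptable offer."""
    return offer - MEAN_MIN_ACCEPT

for group, mean_offer in [("OT", 4.86), ("placebo", 4.03)]:
    print(f"{group}: offer ${mean_offer:.2f}, "
          f"generosity ${generosity(mean_offer):.2f}, "
          f"generous={mean_offer > MEAN_MIN_ACCEPT}")
```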


Interestingly, OT subjects did not turn into pure altruists: they made offers in the Dictator game (mean $3.77) similar to those of placebo subjects (mean $3.58, no significant difference). Thus the motive is neither direct nor indirect reciprocity (the Ultimatum games were blinded and one-shot, so there is no tit-for-tat or reputation involved here). Nor is it pure altruism (or "strong reciprocity"--see this post on the distinction between types of reciprocity), according to Stanton et al., because the threat of the MinAccept compels players to make fair offers. They conclude that generosity is enhanced because OT affects empathy. Subjects simulate the perspective of the other player in the Ultimatum, but not in the Dictator. Hence, generosity "runs" on empathy: in an empathizing context (Ultimatum) subjects are more generous, but in a non-empathizing context they are not--in the Dictator, it is not necessary to take the recipient's perspective in order to compute the optimal move, since the recipient's actions have no impact on the proposer's payoff. It would be interesting to see whether there is a different OT effect in basic vs. reenactive empathy (sensorimotor vs. deliberative empathy; see this post).

Interested readers should also read Neural Substrates of Decision-Making in Economic Games by one of the authors of the study (Stanton): in her PhD thesis, she describes many neuroeconomic experiments.

[Anecdote: I once asked people at the ESA why they gave their society that name: all the presented papers were experimental, so I thought the name should reflect the empirical nature of the conference. They replied judiciously: "Because we think that's how economics should be done"...]




7/23/07

The selective impairment of prosocial sentiments and the moral brain


Philosophers often describe the history of philosophy as a dispute between Plato (read: idealism/rationalism) and Aristotle (read: materialism/empiricism). This is of course extremely reductionist, since many conceptual and empirical issues were not addressed in Ancient Greece, but there is a non-trivial interpretation of the history of thought according to which controversies often involve these two positions. In moral philosophy and moral psychology, however, the big figures are Hume and Kant. Is morality based on passions (Hume) or reason (Kant)? This is another simplification, but again it frames the debate. In the last issue of Trends in Cognitive Sciences (TICS), three papers discuss the reason/emotion debate and provide more acute models.

Recently (see this previous post), Koenigs and collaborators (2007b) explored the consequences of ventromedial prefrontal cortex (VMPC) lesions for moral reasoning and showed that these patients tend to rely a little more on a 'utilitarian' scheme (cost/benefit) and less on a deontological scheme (moral do's and don'ts), thus suggesting that emotions are involved in deontological moral judgment. These patients, however, were also more emotional in the Ultimatum game, and rejected more offers than normal subjects. So are they emotional or not? In the first TICS paper, Moll and de Oliveira-Souza review the Koenigs et al. (2007a) experiment and argue that neither somatic markers nor dual-process theory explains these findings. They propose that a selective impairment of prosocial sentiments explains why the same patients are both less emotional in moral dilemmas and more emotional in economic bargaining: these patients can feel less compassion but still feel anger. In a second paper, Greene (author of the research on the trolley problems; see his homepage) challenges this interpretation and puts forward his dual-process view (reason-emotion interaction). Moll and de Oliveira-Souza reply in the third paper. As you can see, there is still a debate between Kant and Hume, but cognitive neuroscience provides new tools for both sides of the debate, and maybe even a blurring of these opposites.





7/18/07

Altruism: a research program

Phoebe: I just found a selfless good deed; I went to the park and let a bee sting me.
Joey: How is that a good deed?
Phoebe: Because now the bee gets to look tough in front of his bee friends. The bee is happy and I am not.
Joey: Now you know the bee probably died when he stung you?
Phoebe: Dammit!
- [From Friends, episode 101]
Altruism is a lively research topic. The evolutionary foundations, neural substrates, psychological mechanisms, behavioral manifestations, formal modeling and philosophical analyses of cooperation constitute a coherent—although not unified—field of inquiry. See, for instance, how neuroscience, game theory, economics, philosophy, psychology and evolutionary theory interact in Penner et al. 2005; Hauser 2006; Fehr and Fischbacher 2002; Fehr and Fischbacher 2003. The study of prosocial behavior, from kin selection to animal cooperation to human morality, can be considered a progressive Lakatosian research program. Altruism has great conceptual "sex appeal" because it is a mystery for two types of theoreticians: biologists and economists. Both wonder why an animal or an economic agent would help another: since these agents maximize fitness/utility, altruistic behavior is suboptimal. Altruism (help, trust, fairness, etc.) seems intuitively incoherent with economic rationality and biological adaptation, with markets and natural selection. Or is it?

In the 60's, biologists challenged the idea that natural selection is incompatible with altruism. Hamilton (1964a, 1964b) and Trivers (1971) showed that biological altruism makes sense. An animal X might behave altruistically toward another animal Y because they are genetically related: in doing so, X maximizes the copying of its genes, since many of them will be hosted in Y. Thus the more X and Y are genetically related, the more X will be ready to help Y. This is kin altruism. Altruism can also be reciprocal: scratch my back and I'll scratch yours. Tit-for-tat, or reciprocal altruism, also makes sense because by being altruistic one may augment one's payoff: X helps Y, but next time Y will help X; thus it is better to help than not to help. In both cases, the idea is that altruism is a means, not an end. Others argue that more complex types of altruism exist. For instance, X can help Y because Y already helped Z (indirect reciprocity); in this case, the tit-for-tat logic is extended to agents the helper did not meet in the past. Generalized reciprocity (see this previous post) is another type of altruism: helping someone because someone helped you in the past. This altruism does not require memory or personal identification: X helps someone because someone else helped X. Finally, strong reciprocity is the idea that humans display genuine altruism: strong reciprocators cooperate with cooperators, do not cooperate with cheaters, and are ready to punish cheaters even at a cost to themselves. Its proponents argue that it evolved through group selection.
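A toy iterated prisoner's dilemma makes the tit-for-tat logic concrete (the payoff parameters are the usual illustrative b > c toy values, not taken from any particular study):

```python
# A toy iterated prisoner's dilemma: cooperating costs the actor C and
# gives the partner B > C. Tit-for-tat opens by cooperating, then
# copies the partner's last move; a defector never cooperates.
# Payoff values are illustrative.

B, C = 3.0, 1.0  # benefit to recipient, cost to actor

tit_for_tat = ("C", lambda opponents_last: opponents_last)
defector = ("D", lambda opponents_last: "D")

def play(player_a, player_b, rounds=10):
    """Return the two players' total payoffs over repeated rounds."""
    (move_a, respond_a), (move_b, respond_b) = player_a, player_b
    pay_a = pay_b = 0.0
    for _ in range(rounds):
        if move_a == "C":
            pay_a -= C
            pay_b += B
        if move_b == "C":
            pay_b -= C
            pay_a += B
        # Both respond to the other's last move (evaluated simultaneously).
        move_a, move_b = respond_a(move_b), respond_b(move_a)
    return pay_a, pay_b

print(play(tit_for_tat, tit_for_tat))  # (20.0, 20.0): mutual help pays
print(play(tit_for_tat, defector))     # (-1.0, 3.0): TFT loses only round 1
```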

Experimental economics and neuroeconomics also challenged the idea of the rational, greedy, selfish actor (the Ayn Rand hero). Experimental game theory showed that, contrary to orthodox game theory, subjects cooperate massively in the prisoner's dilemma (Ledyard, 1995; Sally, 1995). Rilling et al. showed that players enjoy cooperating: players who initiate and players who experience mutual cooperation display activation in the nucleus accumbens and other reward-related areas such as the caudate nucleus, ventromedial frontal/orbitofrontal cortex, and rostral anterior cingulate cortex (Rilling et al., 2002). In another experiment, the presentation of faces of intentional cooperators caused increased activity in reward-related areas (Singer et al., 2004). In the ultimatum game, proposers make 'fair' offers, about 50% of the amount; responders tend to accept these offers and reject most of the 'unfair' offers (less than 20%; Oosterbeek et al., 2004). Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder's brain, a 'moral disgust': the anterior insula (associated with negative emotional states like disgust or anger) is more active when unfair offers are proposed (Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003). Subjects experience this affective reaction to unfairness only when the proposer is a human being: the activation is significantly lower when the proposer is a computer. Moreover, the anterior insula activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers (Sanfey et al., 2003: 1756). Fehr and Fischbacher (2002) suggested that economic agents are inequity-averse and have prosocial preferences, and thus modified the utility functions to account for behavioral (and now neural) data. In Moral Markets: The Critical Role of Values in the Economy, Paul Zak proposes a radically different conception of morality in economics:

The research reported in this book revealed that most economic exchange, whether with a stranger or a known individual, relies on character values such as honesty, trust, reliability, and fairness. Such values, we argue, arise in the normal course of human interactions, without overt enforcement—lawyers, judges or the police are present in a paucity of economic transactions (...). Markets are moral in two senses. Moral behavior is necessary for exchange in moderately regulated markets, for example, to reduce cheating without exorbitant transactions costs. In addition, market exchange itself can lead to an understanding of fair-play that can build social capital in nonmarket settings. (Zak, forthcoming)

Note how similar this claim is to:

The two fundamental principles of evolution are mutation and natural selection. But evolution is constructive because of cooperation. New levels of organization evolve when the competing units on the lower level begin to cooperate. Cooperation allows specialization and thereby promotes biological diversity. Cooperation is the secret behind the open-endedness of the evolutionary process. Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add "natural cooperation" as a third fundamental principle of evolution beside mutation and natural selection.
(Nowak, 2006)

Hence, biological and economic theorizing followed a similar path: both started with the assumption that agents value only their own payoff; evidence then suggested that agents behave altruistically; and, finally, theoretical models were amended to incorporate different kinds of reciprocity.

So is it good news? Are we genuinely altruistic? First, a clarification: there is a difference between biological and psychological altruism, and the former does not entail the latter; biological altruism is about fitness consequences (survival and reproduction), while psychological altruism is about motivations and intentions:

Where human behaviour is concerned, the distinction between biological altruism, defined in terms of fitness consequences, and ‘real’ altruism, defined in terms of the agent's conscious intentions to help others, does make sense. (Sometimes the label ‘psychological altruism’ is used instead of ‘real’ altruism.) What is the relationship between these two concepts? They appear to be independent in both directions (...). An action performed with the conscious intention of helping another human being may not affect their biological fitness at all, so would not count as altruistic in the biological sense. Conversely, an action undertaken for purely self-interested reasons, i.e. without the conscious intention of helping another, may boost their biological fitness tremendously (Biological Altruism, Stanford Encyclopedia of Philosophy; see also a forthcoming paper by Stephen Stich and the classic Sober & Wilson 1998).

The interesting question, for many researchers, is then: what is the link between biological and psychological altruism? A common view holds that non-human animals are biological altruists, while humans are also psychological altruists. I would like to argue against this sharp divide and briefly suggest three things:
  1. Non-humans also display psychological altruism
  2. Human altruism is strongly influenced by biological motives
  3. Prosocial behavior in human and non-human animals should be understood as a single phenomenon: cooperation in the economy of nature

1. Non-humans also display psychological altruism


As discussed in a previous post, a recent research paper showed that rats exhibit generalized reciprocity: rats who had previously been helped were more likely (by 20%) to help an unknown partner than rats who had not been helped. Although the authors of the paper take a more prudent stance, I consider generalized reciprocity to be psychological altruism (remember, an act can be both biologically and psychologically altruistic): rats cooperate because they "feel good", and that feeling is induced by cooperation itself, not by a particular agent. Hence their brains value cooperation in itself (probably thanks to hormonal mechanisms similar to ours), even when there is no direct tit-for-tat. In the same issue of PLoS Biology, primatologist Frans de Waal (2007) also argues that animals show signs of psychological altruism; it is particularly clear in an experiment (Warneken et al., in the same issue) showing that chimpanzees are ready to help unfamiliar humans and conspecifics (hence ruling out kin and tit-for-tat altruism), even at a cost to themselves. Here is the description of the experiments:

In the first experiment, the chimpanzee saw a person unsuccessfully reach through the bars for a stick on the other side, too far away for the person, but within reach of the ape. The chimpanzees spontaneously helped the reaching person regardless of whether this yielded a reward, or not. A similar experiment with 18-month-old children gave exactly the same outcome. Obviously, both apes and young children are willing to help, especially when they see someone struggling to reach a goal. The second experiment increased the cost of helping. The chimpanzees were still willing to help, however, even though now they had to climb up a couple of meters, and the children still helped even after obstacles had been put in their way. Rewards had been eliminated altogether this time, but this hardly seemed to matter. One could, of course, argue that chimpanzees living in a sanctuary help humans because they depend on them for food and shelter. How familiar they are with the person in question may be secondary if they simply have learned to be nice to the bipedal species that takes care of them. The third and final experiment therefore tested the apes' willingness to help each other, which, from an evolutionary perspective, is also the only situation that matters. The set-up was slightly more complex. One chimpanzee, the Observer, would watch another, its Partner, try to enter a closed room with food. The only way for the Partner to enter this room would be if a chain blocking the door were removed. This chain was beyond the Partner's control—only the Observer could untie it. Admittedly, the outcome of this particular experiment surprised even me—and I am probably the biggest believer in primate empathy and altruism. I would not have been sure what to predict given that all of the food would go to the Partner, thus creating potential envy in the Observer. Yet, the results were unequivocal: Observers removed the peg holding the chain, thus yielding their Partner access to the room with food (de Waal)
(Image from the Warneken et al. video)

2. Human altruism is strongly influenced by biological motives

In many cases, human altruism appears to be a complex version of biological altruism (see Burnham & Johnson, 2005, The Biological and Evolutionary Logic of Human Cooperation, for a review). For instance, Madsen et al. (2007) showed that humans behave more altruistically toward their own kin when helping carries a significant genuine cost (such as muscular pain), a pattern also mirrored in questionnaire studies (Stewart-Williams, 2007): as the cost of helping increases, subjects are more willing to help siblings than friends. Other studies showed that facial similarity enhances trust (DeBruine, 2002). In each case, there is a mechanism whose function is to negotiate personal investments in relationships so as to promote the copying of genes housed in people who are, or seem to be, our kin.
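
Hamilton's rule gives a compact expression of this kin-directed logic: a helping disposition is favored when r × b > c, with r the genetic relatedness between helper and recipient, b the fitness benefit to the recipient, and c the cost to the helper. A minimal sketch, with made-up numbers, shows how rising costs cut off help to distant kin before help to close kin:

```python
# Hamilton's rule: a disposition to help kin can spread when r * b > c,
# where r is genetic relatedness, b the recipient's fitness benefit, and
# c the helper's fitness cost. The numbers below are made up.

def helping_favored(r, b, c):
    """True when kin selection favors helping (Hamilton's rule)."""
    return r * b > c

# As the cost rises, help to distant kin is cut off before help to close
# kin -- the same ordering as the sibling-vs-friend results above.
for cost in (0.2, 0.4, 0.8):
    sibling = helping_favored(r=0.5, b=1.0, c=cost)
    cousin = helping_favored(r=0.125, b=1.0, c=cost)
    print(f"cost={cost}: help sibling? {sibling}, help cousin? {cousin}")
# cost=0.2: help sibling? True, help cousin? False
# cost=0.4: help sibling? True, help cousin? False
# cost=0.8: help sibling? False, help cousin? False
```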

Many of these so-called altruistic behaviors may be explained by the operation of hyperactive agency detection and a bias toward fearing other people's judgment. When people are not being watched, or do not feel watched, they behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game where the responder has to accept the offer, subjects consistently make lower offers than in the ultimatum game (Bolton, Katok, and Zwick 1998). Offers are even lower in the dictator game when donation is fully anonymous (Hoffman et al. 1994). When subjects feel watched, or merely think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley and Fessler 2005), and people contribute three times more to an honesty box for coffee when a picture of a pair of eyes is posted than when it is a picture of flowers (Bateson, Nettle, and Roberts 2006). Merely speaking of ghosts enhances honest behavior in a competitive task (Bering, McLeod, and Shackelford 2005), and priming subjects with the God concept increases giving in the anonymous dictator game (Shariff and Norenzayan in press).

These reflections also apply to altruistic punishment, which is enhanced by an audience: Kurzban, DeScioli, and O'Brien (2007) showed that punishment expenditure tripled when a dozen participants were watching. Again, apparent altruism is instrumental to personal satisfaction. Other research suggests that altruism is also an advantage in sexual selection: "people preferentially direct cooperative behavior towards more attractive members of the opposite sex. Furthermore, cooperative behavior increases the perceived attractiveness of the cooperator" (Farrelly et al., 2007).

An interesting framework for understanding altruism is Hardy (no relation to me) and Van Vugt's (2006) theory of competitive altruism: "individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable." We need, however, a more general perspective.


3. Prosocial behavior in human and non-human animals should be understood as a single phenomenon: cooperation in the economy of nature

All organic beings are striving to seize on each place in the economy of nature - (Darwin, [1859] 2003, p. 90)

With Darwin, natural economy began to be understood with the conceptual tools of political economy. The division of labor, competition (“struggle” in Darwin’s words), trading, cost, the accumulation of innovations, the emergence of complex order from unintentional individual actions, the scarcity of resources and the geometric growth of populations are ideas borrowed from Adam Smith, Thomas Malthus, David Hume and other founders of modern economics. Thus, the economy of nature ceased to be an abstract representation of the universe and became a depiction of the complex web of interactions between biological individuals, species and their environment—the subject matter of ecology. Consequently, Darwin’s main contributions are his transforming biology into a historical science—like geology—and into an economic science.

I take the economy-of-nature principle to be a refinement of the natural selection principle: while it describes general features of the biosphere, it puts emphasis on the intersection between individual biographies and natural selection, and especially on decision-making. On the one hand, the decisions biological individuals make increase or decrease their fitness, and thus good decision-makers are more likely to propagate their genes. On the other hand, natural selection is likely to favor good decision-makers and to get rid of bad ones. Thus, if our best descriptive theories of animal and human economic behavior indicate that all these agents have prosocial preferences and make altruistic decisions, then these preferences and decisions are neither maladaptive nor irrational: they must have an evolutionary and an economic payoff. Markets and natural selection require cooperation, even if the deep motivations are partly selfish. Fairness, equity and honesty are social goods in the economy of nature, human and non-human.


  • Bateson, M., D. Nettle, and G. Roberts. 2006. Cues of being watched enhance cooperation in a real-world setting. Biology Letters 2:412-414.
  • Bering, J. M., K. McLeod, and T. K. Shackelford. 2005. Reasoning about Dead Agents Reveals Possible Adaptive Trends. Human Nature 16 (4):360-381.
  • Bolton, G. E., E. Katok, and R. Zwick. 1998. Dictator Game Giving: Rules of Fairness versus Acts of Kindness. International Journal of Game Theory 27:269-299.
  • Burnham, T. C., and D. D. P. Johnson. 2005. The Biological and Evolutionary Logic of Human Cooperation. Analyse & Kritik 27:113-135.
  • DeBruine, L. M. 2002. Facial resemblance enhances trust. Proc Biol Sci 269 (1498):1307-12.
  • de Waal FBM (2007) With a Little Help from a Friend. PLoS Biol 5(7): e190 doi:10.1371/journal.pbio.0050190
  • Farrelly, D., J. Lazarus, and G. Roberts. 2007. Altruists attract. Evolutionary Psychology 5 (2):313-329.
  • Fehr, E., and U. Fischbacher. 2002. Why social preferences matter: The impact of non-selfish motives on competition, cooperation and incentives. Economic Journal 112:C1-C33.
  • Fehr, Ernst, and Urs Fischbacher. 2003. The nature of human altruism. Nature 425 (6960):785-791.
  • Hamilton, W. D. 1964a. The genetical evolution of social behaviour. I. J Theor Biol 7 (1):1-16.
  • ———. 1964b. The genetical evolution of social behaviour. II. J Theor Biol 7 (1):17-52.
  • Hauser, Marc D. 2006. Moral minds: how nature designed our universal sense of right and wrong. New York: Ecco.
  • Haley, K., and D. Fessler. 2005. Nobody’s watching? Subtle cues affect generosity in an anonymous economic game. Evolution and Human Behavior 26 (3):245-56.
  • Hoffman, E., K. McCabe, K. Shachat, and V. Smith. 1994. Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior 7:346-380.
  • Kurzban, Robert, Peter DeScioli, and Erin O'Brien. 2007. Audience effects on moralistic punishment. Evolution and Human Behavior 28 (2):75-84.
  • Ledyard, J. O. 1995. Public goods: A survey of experimental research. In Handbook of experimental economics, edited by J. H. Kagel and A. E. Roth. Princeton: Princeton University Press.
  • Madsen, Elainie A., Richard J. Tunney, George Fieldman, Henry C. Plotkin, Robin I. M. Dunbar, Jean-Marie Richardson, and David McFarland. 2007. Kinship and altruism: A cross-cultural experimental study. British Journal of Psychology 98:339-359.
  • Nowak, M. A. 2006. Five Rules for the Evolution of Cooperation. Science 314 (5805):1560-1563.
  • Okasha, Samir. 2005. Biological Altruism. The Stanford Encyclopedia of Philosophy (Summer 2005 Edition), Edward N. Zalta (ed.). URL = http://plato.stanford.edu/archives/sum2005/entries/altruism-biological/.
  • Oosterbeek, H., R. Sloof, and G. van de Kuilen. 2004. Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics 7:171-188.
  • Penner, Louis A., John F. Dovidio, Jane A. Piliavin, and David A. Schroeder. 2005. Prosocial behavior: Multilevel Perspectives. Annual Review of Psychology 56 (1):365-392.
  • Stewart-Williams, Steve. 2007. Altruism among kin vs. nonkin: effects of cost of help and reciprocal exchange. Evolution and Human Behavior 28 (3):193-198.
  • Rilling, J., D. Gutman, T. Zeh, G. Pagnoni, G. Berns, and C. Kilts. 2002. A neural basis for social cooperation. Neuron 35 (2):395-405.
  • Rutte C, Taborsky M (2007) Generalized Reciprocity in Rats. PLoS Biol 5(7): e196 doi:10.1371/journal.pbio.0050196
  • Sally, D. 1995. Conversations and cooperation in social dilemmas: a meta-analysis of experiments from 1958 to 1992. Rationality and Society 7:58-92.
  • Sanfey, A. G., J. K. Rilling, J. A. Aronson, L. E. Nystrom, and J. D. Cohen. 2003. The neural basis of economic decision-making in the Ultimatum Game. Science 300 (5626):1755-8.
  • Shariff, A.F. , and A. Norenzayan. in press. God is watching you: Supernatural agent concepts increase prosocial behavior in an anonymous economic game. Psychological Science.
  • Singer, T., S. J. Kiebel, J. S. Winston, R. J. Dolan, and C. D. Frith. 2004. Brain responses to the acquired moral status of faces. Neuron 41 (4):653-62.
  • Sober, Elliott, and David Sloan Wilson. 1998. Unto others: the evolution and psychology of unselfish behavior. Cambridge, Mass.: Harvard University Press.
  • Stich, S. (forthcoming). Evolution, Altruism and Cognitive Architecture: A Critique of Sober and Wilson's Argument for Psychological Altruism, to appear in Biology and Philosophy.
  • Trivers, R. L. 1971. The Evolution of Reciprocal Altruism. Quarterly Review of Biology 46 (1):35-57.
  • Warneken F, Hare B, Melis AP, Hanus D, Tomasello M (2007) Spontaneous Altruism by Chimpanzees and Young Children. PLoS Biol 5(7): e184 doi:10.1371/journal.pbio.0050184
  • Zak, P. J., ed. forthcoming. Moral Markets: The Critical Role of Values in the Economy. Princeton, N.J.: Princeton University Press.



7/3/07

The reciprocal rat

Research on reciprocity classically identified three cooperation mechanisms: 
  • kin reciprocity (A helps B because A and B are genetically related)
  • direct reciprocity (A helps B because B has helped A before--"tit for tat")
  • indirect reciprocity (A helps B because B has helped C before)
Recently, many researchers have suggested that we should also add a stronger type of reciprocity, called, well, "strong reciprocity". Strong reciprocators cooperate with cooperators, do not cooperate with cheaters, and are ready to punish cheaters even at a cost to themselves. Strong reciprocity is thought to be a uniquely human phenomenon. Until now, only kin and direct reciprocity had been observed in animals. In PLoS Biology, Rutte and Taborsky show that another reciprocity mechanism that could be shared by humans and other animals is present in rats: generalized reciprocity. Contrary to other kinds of reciprocity, generalized reciprocity does not require individual identification: in kin, direct and indirect reciprocity, you first need to identify another agent as sharing genes with you, having helped you in the past, or having helped someone else in the past. Generalized reciprocity is more anonymous: if someone helped you in the past, you are more willing to help someone in the future, regardless of either agent's identity. People who find a coin in a phone booth, for instance, are more likely to help a stranger pick up dropped papers than control subjects who had not found money. Since you don't need to track and identify others' behavior, generalized reciprocity is less cognitively demanding, and hence probably more common in nature. In Rutte and Taborsky's experiment, a rat can pull a stick fixed to a baited tray and produce food (an oat flake) for its 'partner' (another rat); the partner is rewarded but not the 'giver'. It turns out that rats who had previously been helped were more likely (by 20%) to help an unknown partner than rats who had not been helped. Rats follow an "anonymous generous tit-for-tat".
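
To see why the mechanism is so cheap, here is a toy simulation — my own sketch, not Rutte and Taborsky's model, with made-up rates: each agent keeps a single bit of state ("was I helped on my last encounter?") and no record of who its partners were, yet the population's helping rate settles above the baseline because help anonymously begets more help.

```python
# Toy simulation of generalized reciprocity (my own sketch, not Rutte &
# Taborsky's model). The decision rule needs one bit of state per agent
# and no partner identities; direct reciprocity would instead need a
# memory keyed by partner identity.

import random

BASE_RATE = 0.5  # chance of helping when not recently helped (illustrative)
BOOST = 0.2      # extra chance after being helped, echoing the roughly
                 # 20% increase reported for rats

def decides_to_help(was_helped):
    """Generalized reciprocity: identity-blind, one bit of state."""
    return random.random() < BASE_RATE + (BOOST if was_helped else 0.0)

def simulate(n_agents=50, n_encounters=20000):
    """Fraction of random anonymous encounters in which help is given."""
    random.seed(0)
    helped_last = [False] * n_agents
    given = 0
    for _ in range(n_encounters):
        giver, receiver = random.sample(range(n_agents), 2)
        helps = decides_to_help(helped_last[giver])
        given += helps
        helped_last[receiver] = helps
    return given / n_encounters

# Helping settles near BASE_RATE / (1 - BOOST) = 0.625, above the
# baseline: being helped makes agents more likely to help strangers.
print(simulate())
```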

Rutte C, Taborsky M (2007) Generalized Reciprocity in Rats. PLoS Biol 5(7): e196 doi:10.1371/journal.pbio.0050196

  • Fehr, E., & Rockenbach, B. (2004). Human altruism: economic, neural, and evolutionary perspectives. Curr Opin Neurobiol, 14(6), 784-790.
  • Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13(1), 1-25.
  • Gintis, H. (2000). Strong Reciprocity and Human Sociality. Journal of Theoretical Biology, 206(2), 169-179.