Circular altruism at Overcoming Bias
I just wanted to recommend one of their posts, "Circular Altruism," which explores the relationships between probabilistic reasoning, its biases, and altruism.
Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add "natural cooperation" as a third fundamental principle of evolution beside mutation and natural selection (Nowak, 2006, p. 1563)
To extend Haldane’s famous remark, kinship can explain rescuing drowning people if they are relatives (…); reciprocal altruism if they return the favor (…); indirect reciprocity if a third party returns the favor (…) and signaling if the rescuer is judged more attractive (Farrelly et al., 2007, p. 314)
According to the stats, the 5 most popular posts on Natural Rationality are:
Another great paper in the 2008 Annual Review of Psychology:
Putting the Altruism Back into Altruism: The Evolution of Empathy
Frans B.M. de Waal
Annual Review of Psychology, January 2008, Vol. 59
Evolutionary theory postulates that altruistic behavior evolved for the return-benefits it bears the performer. For return-benefits to play a motivational role, however, they need to be experienced by the organism. Motivational analyses should restrict themselves, therefore, to the altruistic impulse and its knowable consequences. Empathy is an ideal candidate mechanism to underlie so-called directed altruism, i.e., altruism in response to another's pain, need, or distress. Evidence is accumulating that this mechanism is phylogenetically ancient, probably as old as mammals and birds. Perception of the emotional state of another automatically activates shared representations causing a matching emotional state in the observer. With increasing cognition, state-matching evolved into more complex forms, including concern for the other and perspective-taking. Empathy-induced altruism derives its strength from the emotional stake it offers the self in the other's welfare. The dynamics of the empathy mechanism agree with predictions from kin selection and reciprocal altruism theory.
See also, in In-Mind, a new online magazine about social cognition:
- Putting the Altruism Back into Altruism: The Evolution of Empathy
Proponents of the Strong Reciprocity Hypothesis (i.e., Bowles, Gintis, Boyd, Fehr, Henrich, etc.; I will call them "The Collective") claim that human beings are strong reciprocators: they are willing to sacrifice resources in order to reward fair behavior and punish unfair behavior, even if there is no direct or future reward. Thus we are, according to the Collective, innately endowed with pro-social preferences and an aversion to inequity. Those who advocate strong reciprocity take it to be a 'genuine' altruistic force, not explained by other motives. Strong reciprocity is here contrasted with weaker forms of reciprocity, such as cooperating with someone because of genetic relatedness (kinship), because one follows a tit-for-tat pattern (direct reciprocity), wants to establish a good reputation (indirect reciprocity) or displays signs of power or wealth (costly signaling). Thus our species is made, ceteris paribus, of altruistic individuals who tend to cooperate with cooperators and punish defectors, even at a cost. Behavioral economics showed how people are willing to cooperate in games such as the prisoner's dilemma, the ultimatum game or the trust game: they do not cheat in the first, offer fair splits in the second and transfer money in the third.
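To see why this behavior puzzles game theorists, consider the strategic structure of the prisoner's dilemma. The little Python sketch below uses made-up payoffs (they are not from any particular study) to show that defection is the dominant strategy that experimental subjects nonetheless often decline to play:

# A one-shot prisoner's dilemma with hypothetical payoffs.
# Key: (my move, other's move) -> (my payoff, other's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

def best_reply(other_move):
    # The move that maximizes my payoff against a fixed move by the other player.
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, other_move)][0])

# Defection is the best reply to either move, so standard game theory predicts
# mutual defection; experimental subjects nevertheless cooperate quite often.
print(best_reply("cooperate"), best_reply("defect"))  # -> defect defect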
Could it be possible, however, that this so-called altruism is instrumental? I don't think it always is, but some cases require closer scrutiny. For instance, in the Ultimatum Game, there is a perfectly rational and egoistic reason to make a fair offer, such as a 50-50 split: it is, from one's own point of view, the best solution to the trade-off between making a profit and proposing a split that the other player will accept. If you propose more, you lose more money; if you propose less, you risk a rejection. In non-market-integrated cultures where a 20-80 split is not seen as unfair, proposers routinely offer such splits, because they know they will be accepted.
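Here is a rough sketch, in Python, of that instrumental reading of fair offers: given some belief about how likely each offer is to be accepted, an even split can maximize the proposer's expected payoff without any altruistic motive. The acceptance function below is a made-up illustration, not an empirical estimate.

# Hypothetical proposer deciding what to offer in a $10 ultimatum game.
PIE = 10.0

def p_accept(offer):
    # Made-up acceptance probability: low offers are often rejected,
    # offers approaching half the pie are almost always accepted.
    return min(1.0, offer / (0.5 * PIE))

def expected_payoff(offer):
    # The proposer keeps PIE - offer, but only if the responder accepts.
    return (PIE - offer) * p_accept(offer)

offers = [i * 0.5 for i in range(0, 21)]   # $0.00, $0.50, ..., $10.00
best = max(offers, key=expected_payoff)
print(best, expected_payoff(best))         # -> 5.0 5.0: the 'fair' split maximizes expected profit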
It can also be instrumental in a more basic sense, for instance in contributing to the propagation of our genes. Madsen et al. (2007) showed that individuals behave more altruistically toward their own kin when there is a significant genuine cost (such as pain), an attitude also mirrored in questionnaire studies (Stewart-Williams, 2007): when the cost of helping increases, subjects are more ready to help siblings than friends. Other studies showed that facial resemblance enhances trust (DeBruine, 2002). In each case, we see a mechanism whose function is to negotiate our investment in relationships in order to promote the copies of our genes housed in people who are kin, look like kin, or could help us expand our kin. For instance, simply viewing lingerie or pictures of sexy women leads men to behave more fairly in the ultimatum game (Van den Bergh & Dewitte, 2006).
Many of these so-called altruistic behaviors can be explained simply by the operation of hyperactive agency detectors and a bias toward fearing other people's judgment. When they do not feel watched, people behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game where the responder has to accept the offer, subjects consistently make lower offers than in the ultimatum game (Bolton et al., 1998). Offers are even lower in the dictator game when donation is fully anonymous (Hoffman et al., 1994). According to the present framework, this is because there is no advantage in being fair.
When subjects feel watched, or think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley & Fessler, 2005), and people contribute three times more to an honesty box for coffee when a pair of eyes is displayed than when a picture of a flower is (Bateson et al., 2006). The mere fact of speaking of ghosts enhances honest behavior in a competitive task (Bering et al., 2005), and priming subjects with God concepts increases generosity in the anonymous dictator game (Shariff & Norenzayan, in press).
These reflections also apply to altruistic punishment. First, it is enhanced by an audience: Kurzban et al. (2007) showed that with an audience of a dozen participants, punishment expenditure tripled. In the trust game, players apply learned social rules and trust-building routines, but they hate it when cheaters enjoy what they themselves refrain from enjoying. Thus it feels good to reset the equilibrium. Again, apparent altruism is instrumental in personal satisfaction, at least on some occasions.
Hardy & Van Vugt, in their theory of competitive altruism, suggest that
individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable (Hardy & Van Vugt, 2006)
Maybe agents are attempting to maximize a complex hedonic utility function, where the rewards and losses can be monetary, emotional or social. A possible alternative approach is what I call 'methodological hedonism': let's assume, at least for the purpose of identifying cognitive mechanisms, that the brain, when functioning normally, tries to maximize hedonic feelings, even in moral behavior. We use feelings to anticipate feelings, in order to steer our behavior toward a maximization of positive feelings and a minimization of negative ones. The 'hot logic' of emotions is more realistic than the cold logic of traditional game theory, but it still preserves the idea of utility maximization (although "value" would be more appropriate). In this framework, altruistic behavior is possible, but it need not rely on altruistic cognition. Cognitive mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones. This initial hedonism is gradually modulated by social norms, through which agents learn how to maximize their utility given the norms. Luckily, however, biological and cultural evolution favored patterns of self-interest that promote social order to a certain extent: institutions, social norms, routines and cultures tend to give our behavior a moral structure. Thus understanding morality may amount to understanding how individuals' egoism is modulated by social processes. There might be no need to posit an innate Strong Reciprocity. Or at least it is worth exploring other avenues!
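To make 'methodological hedonism' a bit more concrete, here is a toy version of such a hedonic utility function. The weights and the numbers attached to the outcomes are arbitrary placeholders, meant only to show how an apparently altruistic choice can fall out of a purely hedonic calculation.

from dataclasses import dataclass

@dataclass
class Outcome:
    money: float       # monetary gain or loss
    affect: float      # anticipated emotional payoff (warm glow, guilt, relief)
    reputation: float  # anticipated social payoff (being seen as fair, acceptance)

def hedonic_utility(o, w_money=1.0, w_affect=0.8, w_social=0.5):
    # A weighted sum standing in for the norm-tuned trade-off described above.
    return w_money * o.money + w_affect * o.affect + w_social * o.reputation

# An 'altruistic' choice can maximize the function even though nothing in it is altruistic:
keep_everything = Outcome(money=10.0, affect=-3.0, reputation=-4.0)  # guilt and a bad reputation
share_half = Outcome(money=5.0, affect=2.0, reputation=3.0)          # warm glow and a good reputation
print(hedonic_utility(keep_everything), hedonic_utility(share_half))  # -> 5.6 8.1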
Categories: altruism, behavioral economics, cognition, cooperation, decision, fairness, psychology
It is not from the benevolence of the butcher, the brewer or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves not to their humanity but to their self-love, and never talk to them of our necessities but of their advantages. Nobody but a beggar chooses to depend chiefly upon the benevolence of their fellow-citizens.
Everybody will remember Gordon Gekko's famous speech in Oliver Stone's Wall Street (1987):
The point is, ladies and gentlemen, that greed—for lack of a better word—is good. Greed is right. Greed works. Greed clarifies, cuts through, and captures the essence of the evolutionary spirit. Greed, in all of its forms—greed for life, for money, for love, knowledge—has marked the upward surge of mankind.
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.
In "Adam Smith, Behavioral Economist", Ashraf et al. (2005, The Journal of Economic Perspectives, 19, 131-145) discuss the relevance of Smith for experimental economics. In "The Two Faces of Adam Smith" (Southern Economic Journal, 65, 1-19), another Smith (Vernon) analyses the dual nature of (Adam) Smith's writing.
Is Greed Good?
There will be a conference to commemorate the 250th anniversary of The Theory of Moral Sentiments in 2009 in Oxford (see the CFP on the PhilEcon website).
Economists are finding that social concerns often trump selfishness in financial decision making, a view that helps to explain why tens of millions of people send money to strangers they find on the Internet
By Christoph Uhlhaas
[A] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’ The chairman of the board answered, ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was harmed.
[B] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.’ The chairman of the board answered, ‘I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was helped.
Knobe replicated the experiment with different populations and methodologies, and the result is robust (see his 2006 paper for a review). It seems that humans are more prone to judge that someone brought about an outcome intentionally if the outcome is morally bad. Contrary to common wisdom, the folk-psychological concept of intentional action does not aim, or does not primarily aim, at explaining and predicting action, but at attributing praise and blame. There is something morally normative in saying that "A does X".
people’s beliefs about the moral status of a behavior have some influence on their intuitions about whether or not the behavior was performed intentionally (Knobe, 2006, p. 207)
[A] Richard is a doctor working in a Red Cross field hospital, overseeing and carrying out medical treatment of victims of an ongoing war. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or wounded in the war don’t deserve to die, and that their well-being is of great importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
[B] Richard is a doctor working in a Nazi death camp, overseeing and carrying out executions and nonconsensual, painful medical experiments on human beings. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing that he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or experimented on don’t deserve to live, and that their well-being is of no importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
Attitudinal measures of beliefs that the recipient in the experiment was poor because of bad luck rather than laziness – instrumented by both prior beliefs that bad luck rather than laziness causes poverty and randomly provided direct information about the recipient's attachment to the labour force – have large and very robust positive effects on offers
Categories: altruism, behavioral economics, cognition, cooperation, decision, dictator, empathy, morality, reciprocity
It has been known for a few years that oxytocin (OT) increases trust (Kosfeld et al., 2005): in the Trust game, players transferred more money after inhaling OT. Recent research suggests that it also increases generosity. In a paper presented at the ESA (Economic Science Association, an empirically oriented economics society) meeting, Stanton, Ahmadi, and Zak (from the Center for Neuroeconomics Studies) showed that Ultimatum players in the OT group offered more money (21% more) than those in the placebo group: $4.86 (OT) vs. $4.03 (placebo). They defined generosity as "an offer that exceeds the average of the MinAccept" (p. 9), i.e., the minimum acceptable offer set by the "responder" in the Ultimatum. In this case, offers over $2.97 were categorized as generous. Again, OT subjects displayed more generosity: the OT group offered $1.86 over the minimum acceptable offer (80% more), while placebo subjects offered $1.03.
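The percentages in the paragraph follow from the reported dollar figures; as a quick check (using only the numbers quoted above):

# Average Ultimatum offers and generosity margins as reported above.
ot_offer, placebo_offer = 4.86, 4.03      # average offers
ot_margin, placebo_margin = 1.86, 1.03    # amount offered above the minimum acceptable offer

print(round((ot_offer - placebo_offer) / placebo_offer * 100))     # -> 21 (offers ~21% higher under OT)
print(round((ot_margin - placebo_margin) / placebo_margin * 100))  # -> 81 (margins roughly 80% higher)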
Categories: altruism, behavioral economics, brain, cognition, cooperation, decision, economics, empathy, fairness, morality, neuroeconomics, neuroscience, oxytocin, reciprocity, ultimatum
Phoebe: I just found a selfless good deed; I went to the park and let a bee sting me.
Joey: How is that a good deed?
Phoebe: Because now the bee gets to look tough in front of his bee friends. The bee is happy and I am not.
Joey: Now you know the bee probably died when he stung you?
Phoebe: Dammit!
- [From Friends, episode 101]
Altruism is a lively research topic. The evolutionary foundations, neural substrates, psychological mechanisms, behavioral manifestations, formal modeling and philosophical analyses of cooperation constitute a coherent, although not unified, field of inquiry. See for instance how neuroscience, game theory, economics, philosophy, psychology and evolutionary theory interact in Penner et al. 2005; Hauser 2006; Fehr and Fischbacher 2002; Fehr and Fischbacher 2003. The study of prosocial behavior, from kin selection to animal cooperation to human morality, can be considered a progressive Lakatosian research program. Altruism has great conceptual "sex-appeal" because it is a mystery for two types of theoreticians: biologists and economists. They both wonder why an animal or an economic agent would help another: since these agents maximize fitness/utility, altruistic behavior is suboptimal. Altruism (help, trust, fairness, etc.) seems intuitively incoherent with economic rationality and biological adaptation, with markets and natural selection. Or is it?
The research reported in this book revealed that most economic exchange, whether with a stranger or a known individual, relies on character values such as honesty, trust, reliability, and fairness. Such values, we argue, arise in the normal course of human interactions, without overt enforcement—lawyers, judges or the police are present in a paucity of economic transactions (...). Markets are moral in two senses. Moral behavior is necessary for exchange in moderately regulated markets, for example, to reduce cheating without exorbitant transactions costs. In addition, market exchange itself can lead to an understanding of fair-play that can build social capital in nonmarket settings. (Zak, forthcoming)
See how this claim is similar to:
The two fundamental principles of evolution are mutation and natural selection. But evolution is constructive because of cooperation. New levels of organization evolve when the competing units on the lower level begin to cooperate. Cooperation allows specialization and thereby promotes biological diversity. Cooperation is the secret behind the open-endedness of the evolutionary process. Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add "natural cooperation" as a third fundamental principle of evolution beside mutation and natural selection. (Nowak, 2006)
Where human behaviour is concerned, the distinction between biological altruism, defined in terms of fitness consequences, and ‘real’ altruism, defined in terms of the agent's conscious intentions to help others, does make sense. (Sometimes the label ‘psychological altruism’ is used instead of ‘real’ altruism.) What is the relationship between these two concepts? They appear to be independent in both directions (...). An action performed with the conscious intention of helping another human being may not affect their biological fitness at all, so would not count as altruistic in the biological sense. Conversely, an action undertaken for purely self-interested reasons, i.e. without the conscious intention of helping another, may boost their biological fitness tremendously (Biological Altruism, Stanford Encyclopedia of Philosophy; see also a forthcoming paper by Stephen Stich and the classic Sober & Wilson 1998).
(image from Warneken et al video)
In the first experiment, the chimpanzee saw a person unsuccessfully reach through the bars for a stick on the other side, too far away for the person, but within reach of the ape. The chimpanzees spontaneously helped the reaching person regardless of whether this yielded a reward, or not. A similar experiment with 18-month-old children gave exactly the same outcome. Obviously, both apes and young children are willing to help, especially when they see someone struggling to reach a goal. The second experiment increased the cost of helping. The chimpanzees were still willing to help, however, even though now they had to climb up a couple of meters, and the children still helped even after obstacles had been put in their way. Rewards had been eliminated altogether this time, but this hardly seemed to matter. One could, of course, argue that chimpanzees living in a sanctuary help humans because they depend on them for food and shelter. How familiar they are with the person in question may be secondary if they simply have learned to be nice to the bipedal species that takes care of them. The third and final experiment therefore tested the apes' willingness to help each other, which, from an evolutionary perspective, is also the only situation that matters. The set-up was slightly more complex. One chimpanzee, the Observer, would watch another, its Partner, try to enter a closed room with food. The only way for the Partner to enter this room would be if a chain blocking the door were removed. This chain was beyond the Partner's control—only the Observer could untie it. Admittedly, the outcome of this particular experiment surprised even me—and I am probably the biggest believer in primate empathy and altruism. I would not have been sure what to predict given that all of the food would go to the Partner, thus creating potential envy in the Observer. Yet, the results were unequivocal: Observers removed the peg holding the chain, thus yielding their Partner access to the room with food (de Waal)
Categories: altruism, cognition, cooperation, decision, evolution, morality, reciprocity
The presentation of a talk I recently gave in Montreal at Cognitio 2007.
The experimental study of economic exchange behavior has revealed many discrepancies between the normative theory of strategic rationality (game theory) and actual behavior. In many games where game theory expects defection and competition, subjects robustly display cooperative behavior. In the ultimatum game, for instance, a 'proposer' makes an offer to a 'responder' who can either accept or refuse it; if the responder refuses, both players get nothing. The rational outcome is a minimal offer by the first player and an unconditional acceptance of the offer by the second. In fact, proposers make 'fair' offers of about 50% of the amount, responders tend to accept these offers, and they reject most of the 'unfair' offers (less than 20%; Oosterbeek et al., 2004). Cooperative and prosocial behavior is also observed in similar games, e.g. the trust game and the prisoner's dilemma (Camerer, 2003). Neuroeconomics, the study of the neural mechanisms of decision-making (Glimcher, 2003), also showed that subjects seem to entertain prosocial preferences. Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in responders' brains, a 'moral disgust': the anterior insula, an area involved in disgust and other negative emotional responses, is more active when unfair offers are proposed (Sanfey et al., 2003). Similar activations have been found in the prisoner's dilemma and the trust game: cooperation and punishment of unfair players elicit positive affective emotions, while unfairness elicits negative ones (de Quervain et al., 2004; Rilling et al., 2002).
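For contrast with the observed behavior, here is the textbook 'rational outcome' spelled out as a short sketch: a purely money-maximizing responder accepts any positive amount, so the proposer's best offer is the smallest positive unit. This reconstructs the standard game-theoretic argument, not any particular experiment.

# Subgame-perfect reasoning in a $10 ultimatum game played in 1-cent increments.
PIE_CENTS = 1000

def responder_accepts(offer_cents):
    # A money-maximizing responder prefers any positive amount to nothing.
    return offer_cents > 0

def proposer_payoff(offer_cents):
    return PIE_CENTS - offer_cents if responder_accepts(offer_cents) else 0

best_offer = max(range(PIE_CENTS + 1), key=proposer_payoff)
print(best_offer, proposer_payoff(best_offer))  # -> 1 999: offer one cent, keep the rest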
The received view of these behavioral and neural data is that human beings are endowed with genuinely altruistic cognitive mechanisms, a view now labelled "Strong Reciprocity" (SR). According to SR, an innate propensity for altruistic punishment and altruistic rewarding makes us averse to inequity (Fehr & Rockenbach, 2004). In this talk, I argue that this moral optimism is far-fetched. Yes, the 'cold logic' model of rationality is not an accurate description of our decision-making mechanisms, but the SR model, I shall argue, relies on unwarranted assumptions. I present another model, the 'hot logic' approach, according to which human agents are selfish agents adapted to trade, exchange and partner selection in biological markets (Noë et al., 2001). Cognitive mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones. This initial hedonism is gradually modulated by social norms, through which agents learn how to maximize their utility given the norms. The 'hot logic' approach provides a simpler explanation of cooperation and fairness: subjects make 'fair' offers in the ultimatum game because they know their offer would be rejected otherwise. Responders' affective reaction to 'unfair offers' is in fact a reaction to the loss of an expected monetary gain: they anticipated that the proposer would comply with social norms. This claim is supported by other imaging studies showing that loss of money can be aversive, and that actual and counterfactual utility recruit the same neural resources (Delgado et al., 2006; Montague et al., 2006). This approach explains why subjects make lower offers in the dictator game (an ultimatum game in which the proposer makes an offer and the responder's role is entirely passive) than in the ultimatum, why almost twice as many participants transfer money in the dictator game when the computer displays eyespots (Haley & Fessler, 2005), and why attractive people are offered more in the ultimatum (Solnick & Schweitzer, 1999). In every case, agents seek to maximize a complex hedonic utility function, where the rewards and losses can be monetary, emotional or social (reputation, acceptance, etc.). SR is thus seen as a set of cooperative habits that are not repaid (Burnham & Johnson, 2005).
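A minimal sketch of the 'hot logic' reading of rejections: the responder values an offer relative to what a norm-complying proposer was expected to offer, and shortfalls from that reference point hurt more than equivalent gains feel good. The expectation and the loss weight below are illustrative assumptions, not estimates.

# Reference-dependent valuation of an ultimatum offer (toy numbers).
EXPECTED_OFFER = 5.0   # what a norm-following proposer is expected to offer
LOSS_WEIGHT = 2.5      # a shortfall from the expectation weighs more than a gain

def hot_value(offer):
    # Money received plus the affective reaction to the gap between the offer
    # and the norm-based expectation.
    gap = offer - EXPECTED_OFFER
    affect = gap if gap >= 0 else LOSS_WEIGHT * gap
    return offer + affect

def responder_choice(offer):
    # Rejecting yields $0; accept only if the hot value of accepting beats that.
    return "accept" if hot_value(offer) > 0 else "reject"

for offer in (1.0, 2.0, 4.0, 5.0):
    print(offer, responder_choice(offer))   # low, 'unfair' offers get rejected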
Categories: altruism, cognition, decision, natural rationality, neuroeconomics, Strong Reciprocity
In a letter to Nature, a group of political scientists and anthropologists report an experiment designed to test equality preferences and inequality aversion. The design was simple:
Subjects are divided into groups having four anonymous members each. Each player receives a sum of money randomly generated by a computer. Subjects are shown the payoffs of other group members for that round and are then provided an opportunity to give 'negative' or 'positive' tokens to other players. Each negative token reduces the purchaser's payoff by one monetary unit (MU) and decreases the payoff of a targeted individual by three MUs; positive tokens decrease the purchaser's payoff by one monetary unit (MU) and increase the targeted individual's payoff by three MUs. Groups are randomized after each round to prevent reputation from influencing decisions; interactions between players are strictly anonymous and subjects know this. Also, by allowing participants more than one behavioural alternative, the experiment eliminates possible experimenter demand effects—if subjects were only permitted to punish, they might engage in this behaviour because they believe it is what the experimenters want.
The results support what is often referred to as the "Robin Hood effect": richer individuals were heavily penalized, while poorer individuals received more gifts (the token payoff mechanics are sketched in code after the quote below). This would support the hypothesis of Strong Reciprocity (SR), put forth by Fehr, Camerer, Gintis, and many other scholars in behavioral economics. SR implies that individuals will cooperate with cooperators (reciprocal altruism), will not cooperate with cheaters, and are even ready to punish those who cheat others (altruistic punishment):
“people tend to behave prosocially and punish antisocial behavior at cost to themselves, even when the probability of future interactions is low or zero. We call this strong reciprocity." (Gintis, H. (2000). Strong reciprocity and human sociality. Journal of Theoretical Biology, 206(2), p. 177)
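To make the quoted design easier to follow, here is a sketch of the token arithmetic for a single round. The endowments and token purchases are invented for illustration, but the costs (one MU per token, three MUs added to or removed from the target) follow the description above.

# One round of the random-income token game described above (hypothetical numbers).
endowments = {"A": 30, "B": 10, "C": 20, "D": 5}   # MUs randomly assigned by the computer

# (buyer, target) -> (negative tokens, positive tokens) purchased.
tokens = {
    ("B", "A"): (2, 0),   # B pays 2 MUs to take 6 MUs from the richest player
    ("C", "D"): (0, 1),   # C pays 1 MU to give 3 MUs to the poorest player
}

payoffs = dict(endowments)
for (buyer, target), (neg, pos) in tokens.items():
    payoffs[buyer] -= neg + pos             # each token costs the buyer one MU
    payoffs[target] += 3 * (pos - neg)      # each token moves the target by three MUs

print(payoffs)   # -> {'A': 24, 'B': 8, 'C': 19, 'D': 8}: a 'Robin Hood' pattern of redistribution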