Natural Rationality | decision-making in the economy of nature

3/11/08

Why Neuroeconomics Needs a Concept of (Natural) Rationality

Neuroeconomists (more than “decision neuroscientists”) often report their findings as strong evidence against the rationality of decision-makers. In the case of cooperation, it is often claimed that emotions motivate cooperation, since neural activity elicited by cooperation overlaps with neural activity elicited by hedonic rewards (Fehr & Camerer, 2007). Likewise, when subjects have to choose whether or not to purchase a product, desirable products cause activation in the nucleus accumbens (associated with anticipation of pleasure), whereas a price seen as exaggerated elicits activity in the insula (involved in disgust and fear; Knutson et al., 2007).

The accumulating evidence for the engagement of affective areas in decision-making is indisputable, and seems to make a strong case against a once-pervasive “rationalist” vision of decision-making in cognitive science and economics. It is not, however, a definitive argument for emotivism (the view that we choose with our "gut feelings") and irrationalism. For at least three reasons (methodological, empirical and conceptual), these findings should not be read as supporting an emotivist account.

First, characterizing a brain area as “affective” or “emotional” is misleading. There is no clear distinction, in the brain, between affective and cognitive areas. For instance, the anterior insula is involved in disgust, but also in disbelief (Harris et al., 2007). A high-level task such as cognitive control (e.g., holding items in working memory in a goal-oriented task) requires both “affective” and “cognitive” areas (Pessoa, 2008). The affective/cognitive distinction is a folk-psychological one, not a reflection of brain anatomy and connectivity. There is a certain degree of specialization, but generally speaking any task recruits a wide array of areas, and each area is redeployed in many tasks. In complex beings like us, so-called “affective” areas are never purely affective: they always contribute to higher-level cognition, such as logical reasoning (Houde & Tzourio-Mazoyer, 2003). Similarly, while the amygdala has often been described as a “fear center,” its function is much more complex: it modulates emotional information, reacts to unexpected stimuli, and is heavily recruited in visual attention, a “cognitive” function. It is therefore wrong to consider “affective” areas as small emotional agents that are happy or sad and make us happy or sad. Instead of employing folk-psychological categories, their functional contribution should be understood in computational terms: how they process signals, how information is routed between areas, and how they affect behavior and thought.

Second, even if there are affective areas, they are always complemented or supplemented by “cognitive” ones: the dorsolateral prefrontal cortex (DLPFC), for instance (involved in cognitive control and goal maintenance), is recruited in almost all decision-making tasks, and has been shown to be involved in norm-compliant behavior (Spitzer et al., 2007) and purchasing decisions. In the ultimatum game, besides the anterior insula, two other areas are recruited: the DLPFC and the anterior cingulate cortex (ACC), involved in cognitive conflict and emotional modulation. Explanations of ultimatum decisions spell out neural information-processing mechanisms, not “emotions”.

Consider, for instance, the neural circuitry involved in cognitive control: one might think it involves only prefrontal areas, but as it turns out, both "cognitive" and "affective" areas are required for this competence:


[Legend: This extended control circuit contains traditional control areas, such as the anterior cingulate cortex (ACC) and the lateral prefrontal cortex (LPFC), in addition to other areas commonly linked to affect (amygdala) and motivation (nucleus accumbens). Diffuse, modulatory effects are shown in green and originate from dopamine-rich neurons of the ventral tegmental area (VTA). The circuit highlights the cognitive–affective nature of executive control, in contrast to more purely cognitive-control proposals. Several connections are not shown to simplify the diagram. Line thickness indicates approximate connection strength. OFC, orbitofrontal cortex. (From Pessoa, 2008)]

As Michael Anderson pointed out in a series of papers (2007a and 2007b, among others), there is a many-to-many mapping between brain areas and cognitive functions. The concept of "emotional areas" should therefore be banned from the neuroeconomics vocabulary before it is too late.

Third, a point neglected by much research on decision-making: neural activation of a particular brain area is explanatory only with regard to its contribution to understanding personal-level properties. If we learn that the anterior insula reacts to unfair offers, we are not singling out the function of this area, but explaining how the person’s decision is responsive to a particular type of valuation. The basic unit of analysis of decisions is not the neuron, but the judgment. We may study sub-judgmental (e.g., neural) mechanisms and how they contribute to judgment formation; or we may study supra-judgmental mechanisms (e.g., reasoning) and how they articulate judgments. Emotions, as long as they are understood as affective reactions, are not judgments: they either contribute to judgments or are construed as judgments. In both cases, the category “emotions” seems superfluous for explaining the nature of the judgment itself. Thus, if judgments are the basic unit of analysis, brain areas are explanatory insofar as they make explicit how individuals arrive at a certain judgment, how it is implemented, and so on: what kind of neural computations are carried out? Take, for example, cooperation in the prisoner's dilemma. Imaging studies show that when high-psychopathy and low-psychopathy subjects choose to cooperate, different neural activity is observed: the former use more prefrontal areas than the latter, indicating that cooperation is more effortful for them (see this post). This is instructive: we learn something about the information-processing, not about "emotions" or "reason".

In the end, we want to know how these mechanisms fix beliefs, desires and intentions: neuroeconomics can be informative as long as it aims at deciphering human natural rationality.


References
  • Anderson, M. L. (2007a). Evolution of Cognitive Function Via Redeployment of Brain Areas. Neuroscientist, 13(1), 13-21.
  • Anderson, M. L. (2007b). The Massive Redeployment Hypothesis and the Functional Topography of the Brain. Philosophical Psychology, 20(2), 143 - 174.
  • Fehr, E., & Camerer, C. F. (2007). Social Neuroeconomics: The Neural Circuitry of Social Preferences. Trends Cogn Sci.
  • Harris, S., Sheth, S. A., & Cohen, M. S. (2007). Functional Neuroimaging of Belief, Disbelief, and Uncertainty. Annals of Neurology,
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural Foundations of Logical and Mathematical Cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
  • Spitzer, M., Fischbacher, U., Herrnberger, B., Gron, G., & Fehr, E. (2007). The Neural Signature of Social Norm Compliance. Neuron, 56(1), 185-196.




The bounded rationality of self-control

Rogue traders such as Jérôme Kerviel or Nick Leeson engage in criminal, fraudulent and high-risk financial activities that often result in huge losses ($7 billion for Kerviel) or financial catastrophe (the bankruptcy of the 233-year-old bank that employed Leeson). Why would anyone do that?

A popular answer is that money is like a drug, and that Kerviel behaved "like a financial drug addict". And indeed it is: we crave money and feel its rewarding properties, our subcortical areas lighting up as if we were having sex or eating Port-Royal Cupcakes (just reading the list of ingredients of the latter is enough for me!). Money hits our sweet spot and elicits activity in emotional and emotion-related areas. On this view, rogue traders are like cocaine addicts, unable to stop the never-ending search for the ultimate buzz.

This is fine, but incomplete and partly misleading. We all have temptations, drives, desires, emotions, addictions, etc., and some of us experience them more vividly. The interesting question is not how intense the money thrill is, but how weak self-control can be. By “self-control,” I mean our vetoing capacity: what we exercise when we resist eating fatty food, smoking (oh, just one, I swear) another cigarette, insulting the person who laughs at us, flirting with that cute colleague, etc. Living in society requires that we regulate our behavior and—more often than not—do what we should do instead of what we want to do. It seems that rogue traders, like addicts and criminals, lack a certain capacity to implement self-control and normative regulation.

Traditional accounts of self-control construe this capacity as a cognitive, rational faculty. New developments in psychology suggest that it is more like a muscle than a cognitive process. If self-control is a cognitive process, activating it should speed up further self-control, since it becomes highly accessible; priming, for instance, speeds up recognition. If, on the contrary, self-control is a limited resource, using it should impair or slow down further self-control (since part of the resource will have been spent the first time). Many experiments support the second option: self-control and inhibitory control are limited resources, a phenomenon Roy Baumeister and his colleagues called ego depletion: the

temporary reduction in the self's capacity or willingness to engage in volitional action (including controlling the environment, controlling the self, making choices, and initiating action) caused by prior exercise of volition. (Baumeister et al., 1998, p. 1253)

For instance, subjects who have to suppress their emotions while watching an upsetting movie perform worse on the Stroop task, and EEG indicates less activity in the ACC in subjects who had to inhibit their affective reactions (Inzlicht & Gutsell, 2007). Subjects who had to reluctantly eat radishes abandoned problem-solving earlier than subjects who ate chocolate willingly. Taking responsibility for and voluntarily producing a counterattitudinal speech (a speech that expresses an opinion contrary to the speaker’s) also reduced perseverance; producing the speech without taking responsibility did not (Baumeister et al., 1998).

Self-control literally requires energy. Subjects asked to suppress facial reactions (e.g., smiles) while watching a movie have lower blood glucose levels afterwards, suggesting higher energy consumption. Control subjects (free to react however they wanted) had the same blood glucose levels before and after the movie, and performed better than the suppression subjects on a subsequent Stroop task. Restoring glucose levels with a sugar-sweetened lemonade (as opposed to an artificially sweetened beverage, without glucose) also increases performance. Self-control failures happen more often when blood glucose is low. In a literature review, Gailliot and Baumeister show that lack of cognitive, behavioral and emotional control is systematically associated with hypoglycemia or hypoglycemic individuals: thought suppression, emotional inhibition, attention control, and refraining from criminal behavior are all impaired in individuals with low blood glucose (Gailliot & Baumeister, 2007).

The bottom line: self-control takes energy and is a limited resource. Immoral actions happen not only because people are emotionally driven toward certain rewards, but because, for one reason or another, their “mental brakes” cannot stop their drives. Knowing that, as rational agents we should wisely allocate our self-control resources: for example, by not putting ourselves in situations where we will have to spend self-control without a good (in a utility-maximizing or moral sense) return on investment.


References
  • Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego Depletion: Is the Active Self a Limited Resource? Journal of Personality and Social Psychology, 74(5), 1252-1265.
  • Gailliot, M. T., & Baumeister, R. F. (2007). The Physiology of Willpower: Linking Blood Glucose to Self-Control. Personality and Social Psychology Review, 11(4), 303-327.
  • Inzlicht, M., & Gutsell, J. N. (2007). Running on Empty: Neural Signals for Self-Control Failure. Psychological Science, 18(11), 933-937.
  • Pessoa, L. (2008). On the Relationship between Emotion and Cognition. Nat Rev Neurosci, 9(2), 148-158.



3/5/08

Coke, Pepsi, and The Brain--redux



In 2004, a team of neuroscientists conducted a new version of the Pepsi Challenge. Not only did participants have to indicate which cola they preferred, but they had to do so while their brains were scanned. Results showed that when subjects tasted samples of Pepsi and Coke with and without the brand’s label, they reported different preferences (McClure et al., 2004). Without labels, subjects evaluated both drinks similarly. When drinks were labeled, subjects reported a stronger preference for Coke, and this effect was correlated with stronger activity in the medial prefrontal cortex. This was due, according to the researchers (and many neuromarketers), to the effectiveness of Coke’s branding strategies. Somehow, Coke managed to trigger certain associations in our brains, and simply seeing its logo is enough to make a drink taste better. A similar effect was observed with wine: non-experts report that the same bottle of wine is more enjoyable with an expensive price tag than with a cheap one, and the "expensive" wine elicits stronger activity in the orbitofrontal cortex (Plassmann et al., 2008)—again, an area important in emotional processing.

A new study (Koenigs & Tranel, 2008) showed that some people are less sensitive to this branding effect: subjects with damage to the ventromedial prefrontal cortex (VMPC, an area involved in emotional processing). Unlike their normal counterparts, these patients maintained their preference for Pepsi. Thus, as the authors conclude, “[l]acking the normal affective processing, VMPC patients may base their brand preference primarily on their taste preference.” The VMPC thus acts as a gate that lets emotional memories affect present evaluations.

  • Koenigs, M., Tranel, D. (2007). Prefrontal cortex damage abolishes brand-cued changes in cola preference. Social Cognitive and Affective Neuroscience, 3(1), 1-6. DOI: 10.1093/scan/nsm032
  • McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural Correlates of Behavioral Preference for Culturally Familiar Drinks. Neuron, 44(2), 379-387.
  • Plassmann, H., O'Doherty, J., Shiv, B., Rangel, A. (2008). Marketing actions can modulate neural representations of experienced pleasantness. Proceedings of the National Academy of Sciences, 105(3), 1050-1054. DOI: 10.1073/pnas.0706929105





1/29/08

Moods and decision-making



In the last issue of Judgment and Decision Making (free online), a sharp study by de Vries et al. illustrates how mood affects cognitive processing. After watching either a clip from The Muppet Show or a clip from Schindler's List, participants played the Iowa Gambling Task (see description here). People who watched the funny clip (the Muppets, as you might have guessed) scored better:
[at an early stage of the game] after experiencing the first losses in the bad decks, participants in a happy mood state outperformed participants in a sad mood state (de Vries et al., 2008 p. 48)
After a couple of trials, however, sad and happy subjects scored identically:


The authors do not argue that being in a good mood guarantees success, but suggest that certain moods may have adaptive value in certain situations (when the optimal choice requires analytical thinking, affective reactions seem more distracting). So next time you have a test, choose the right mood!





1/16/08

New entry of the Stanford Encyclopedia of Philosophy

A new entry might interest decision-making specialists, ethicists and philosophers:

Decision-Making Capacity

In many Western jurisdictions, the law presumes that adult persons, and sometimes children that meet certain criteria, are capable of making their own health care decisions; for example, consenting to a particular medical treatment, or consenting to participate in a research trial. But what exactly does it mean to say that a subject has or lacks the requisite capacity to decide? This last question has to do with what is commonly called “decisional capacity,” a central concept in health care law and ethics, and increasingly an independent topic of philosophical inquiry.

Decisional capacity can be defined as the ability of health care subjects to make their own health care decisions. Questions of ‘capacity’ sometimes extend to other contexts, such as capacity to stand trial in a court of law, and the ability to make decisions that relate to personal care and finances. However, for the purposes of this discussion, the notion of decisional capacity will be limited to health care contexts only; most notably, those where decisions to consent to or refuse treatment are concerned.

The combined theoretical and practical nature of decisional capacity in the area of consent is probably one of the things that makes it so intellectually compelling to philosophers who write about it. But this is still largely uncultivated philosophical territory. One reason is the highly interdisciplinary and rapidly changing nature of the field. Clinical methods and tests to assess capacity are proliferating. The law is also increasingly being called upon to respond to these clinical developments. All of this makes for a very eclectic and challenging field of inquiry. Philosophers must tread carefully if their contributions are to be timely and relevant.





11/16/07

The Turing Test for Toddlers and Cockroaches

Two new studies show how robots may be accepted into communities of cockroaches and toddlers. In the first case, the robots were able to influence collective decision-making; in the second, the robot's integration was related to the unpredictability of its behavior.


Halloy, J., Sempo, G., Caprari, G., Rivault, C., Asadpour, M., Tache, F., et al. (2007). Social Integration of Robots into Groups of Cockroaches to Control Self-Organized Choices. Science, 318(5853), 1155-1158. http://www.sciencemag.org/cgi/content/abstract/318/5853/1155

Tanaka, F., Cicourel, A., & Movellan, J. R. (2007). Socialization between toddlers and robots at an early childhood education center. Proceedings of the National Academy of Sciences, 104(46), 17954-17958.
http://www.pnas.org/cgi/content/abstract/104/46/17954



11/13/07

Decision-Making in Robotics and Psychology: A Distributed Account

Forthcoming in a special issue of New Ideas in Psychology on Cognitive Robotics & Theoretical Psychology, edited by Tom Ziemke & Mark Bickhard:

Hardy-Vallée, B. (in press). Decision-Making in Robotics and Psychology: A Distributed Account. New Ideas in Psychology

Decision-making is usually a secondary topic in psychology, relegated to the last chapters of textbooks. The psychological study of decision-making assumes a certain conception of its nature and mechanisms that has been shown wrong by research in robotics. Robotics indicates that decision-making is not—or at least not only—an intellectual task, but also a process of dynamic behavioral control, mediated by embodied and situated sensorimotor interaction. The implications of this conception for psychology are discussed.
[PDF]



11/7/07

New Paper on Oxytocin and Generosity

In Plos One:

see this previous post for a presentation of the research.



10/23/07

Exploration, Exploitation and Rationality

A little introduction to what I consider to be the Mother of All Problems: the exploration-exploitation trade-off.

Let's first draw a distinction between first- and second-order uncertainty. Knowing that a source of reward (or money, or food, etc.) will pay off on 70% of occasions is uncertain knowledge, because one does not know for sure what the next outcome will be (one only knows that there is a 70% probability that it is a reward). In some situations, however, uncertainty can be radical, or second-order: even the probabilities are unknown. Under radical uncertainty, cognitive agents must learn reward probabilities. Learners must, at the same time, explore their environment in order to gather information about its payoff structure and exploit this information to obtain reward. They face a deep problem—known as the exploration/exploitation tradeoff—because they cannot do both at the same time: you cannot explore all the time, you cannot exploit all the time; you must reduce exploration but cannot eliminate it. This tradeoff is usually modeled with the K-armed bandit problem.

Suppose an agent has n coins to spend in a slot machine with K arms (here K=2, and we will suppose that one arm is high-paying and the other low-paying, although the agent does not know that). The only way the agent can access the arms’ rates of payment—and obtain reward—is by pulling them. Hence she must find an optimal tradeoff when spending her coins: try another arm just to see how it pays, or stay with the one that already paid? The goal is not only to maximize reward, but to maximize reward while obtaining information about the arms’ rates. The process can go wrong in two different ways: the player can be the victim of a false negative (a low-paying sequence from the high-paying arm) or of a false positive (a high-paying sequence from the low-paying arm).

The optimal solution to this problem is to compute an index for every arm, update this index according to the arm’s payoff, and choose the arm with the greatest index (Gittins, 1989). In the long run, this strategy amounts to following decision theory after a learning phase. But as soon as switching from one arm to another has a cost, as Banks & Sundaram (1994) showed, index strategies cannot converge toward an optimal solution. A huge literature in optimization theory, economics, management and machine learning addresses this problem (Kaelbling et al., 1996; Sundaram, 2003; Tackseung, 2004). Studies of humans and animals explicitly presented with bandit problems, however, show that subjects tend to rely on the matching strategy (Estes, 1954): they match the probability of action to the probability of reward. In one study, for instance (Meyer & Shi, 1995), subjects were required to select between two icons displayed on a computer screen; after each selection, a slider bar indicated the actual amount of reward obtained. The matching strategy predicted the subjects’ behavior, and the same result holds for monkeys in a similar task (Bayer & Glimcher, 2005; Morris et al., 2006).
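A minimal simulation makes the contrast concrete. The sketch below (all payoff numbers are hypothetical, and the epsilon-greedy rule stands in for the more sophisticated index strategies) pits probability matching against a simple explore-then-exploit heuristic on a two-armed bandit:

```python
import random

random.seed(1)

# Hypothetical two-armed bandit; the agent does not know these probabilities.
P = [0.7, 0.3]  # high-paying arm, low-paying arm

def pull(arm):
    """One pull: reward 1 with the arm's (unknown) probability, else 0."""
    return 1 if random.random() < P[arm] else 0

def matching(n=10000):
    """Probability matching: choose each arm in proportion to its
    estimated reward rate (roughly what human subjects do)."""
    wins, pulls, total = [1, 1], [2, 2], 0   # uniform priors
    for _ in range(n):
        est = [wins[a] / pulls[a] for a in (0, 1)]
        arm = 0 if random.random() < est[0] / (est[0] + est[1]) else 1
        r = pull(arm)
        wins[arm] += r; pulls[arm] += 1; total += r
    return total / n

def epsilon_greedy(n=10000, eps=0.1):
    """Mostly exploit the best-looking arm; explore with probability eps."""
    wins, pulls, total = [1, 1], [2, 2], 0
    for _ in range(n):
        if random.random() < eps:
            arm = random.randrange(2)                            # explore
        else:
            arm = max((0, 1), key=lambda a: wins[a] / pulls[a])  # exploit
        r = pull(arm)
        wins[arm] += r; pulls[arm] += 1; total += r
    return total / n

print(matching(), epsilon_greedy())
```

With these numbers, matching earns about 0.7 × 0.7 + 0.3 × 0.3 ≈ 0.58 per pull, while the explore/exploit heuristic, which concentrates pulls on the better arm, earns closer to 0.68: matching is robust but leaves reward on the table.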

The important thing about this trade-off is its lack of a priori solutions. Decision theory works well when we know the probabilities and the utilities, but what can we do when we don’t? We learn. This is the heart of natural rationality: crafting solutions—under radical uncertainty and in non-stationary environments—for problems that may not have an optimal solution; going from second- to first-order uncertainty.





References

  • Banks, J. S., & Sundaram, R. K. (1994). Switching Costs and the Gittins Index. Econometrica: Journal of the Econometric Society, 62(3), 687-694.
  • Bayer, H. M., & Glimcher, P. W. (2005). Midbrain Dopamine Neurons Encode a Quantitative Reward Prediction Error Signal. Neuron, 47(1), 129.
  • Estes, W. K. (1954). Individual Behavior in Uncertain Situations: An Interpretation in Terms of Statistical Association Theory. In R. M. Thrall, C. H. Coombs & R. L. Davies (Eds.), Decision Processes (pp. 127-137). New York: Wiley.
  • Gittins, J. C. (1989). Multi-Armed Bandit Allocation Indices. New York: Wiley.
  • Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4, 237-285.
  • Meyer, R. J., & Shi, Y. (1995). Sequential Choice under Ambiguity: Intuitive Solutions to the Armed-Bandit Problem. Management Science, 41(5), 817-834.
  • Morris, G., Nevet, A., Arkadir, D., Vaadia, E., & Bergman, H. (2006). Midbrain Dopamine Neurons Encode Decisions for Future Action. Nat Neurosci, 9(8), 1057-1063.
  • Sundaram, R. K. (2003). Generalized Bandit Problems: Working Paper, Stern School of Business.
  • Tackseung, J. (2004). A Survey on the Bandit Problem with Switching Costs. De Economist, V152(4), 513-541.
  • Yen, G., Yang, F., & Hickey, T. (2002). Coordination of Exploration and Exploitation in a Dynamic Environment. International Journal of Smart Engineering System Design, 4(3), 177-182.



10/20/07

Gains and Losses in the Brain

A new lesion study suggests a dissociation between the neural processing of gains and losses:

We found that individuals with lesions to the amygdala, an area responsible for processing emotional responses, displayed impaired decision making when considering potential gains, but not when considering potential losses. In contrast, patients with damage to the ventromedial prefrontal cortex, an area responsible for integrating cognitive and emotional information, showed deficits in both domains. We argue that this dissociation provides evidence that adaptive decision making for risks involving potential losses may be more difficult to disrupt than adaptive decision making for risks involving potential gains


Weller, J. A., Levin, I. P., Shiv, B., & Bechara, A. (2007). Neural Correlates of Adaptive Decision Making for Risky Gains and Losses. Psychological Science, 18(11), 958-964.



10/12/07

The Dictator Game and Radiohead.


As you may know, Radiohead recently announced that they would let fans decide what to pay for their new album, In Rainbows. The situation is thus similar (though not identical) to a Dictator Game: player A splits a "pie" between herself and player B, and B must accept whatever A offers. Thus, contrary to the Ultimatum Game, B's decisions or reactions have no influence on A's choice behavior. Radiohead fans were in a position similar to A's. If we assume that they framed the situation as a purchase in which they choose how much of the CD price to split between themselves and the band, and given that a CD is typically priced at £10 (roughly 20 U.S.$), then the fans were choosing how to split £10 between themselves and Radiohead. Experimental studies of the Dictator Game show that about 70% of subjects (A) transfer some amount to player B, with an average transfer of 24% of the initial endowment (Forsythe et al., 1994). Hence, if these results can be generalized to the "buy Radiohead's album" game, they suggest that about 70% of those who download the album would pay an average of £2.40, while 30% would pay nothing. An online survey (by The Times) showed that this prediction is not too far from the truth: a third of the fans paid nothing, and most paid an average of £4.
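The back-of-the-envelope prediction above can be written out explicitly (the £10 price is an assumption, and the 70% / 24% figures are the Forsythe et al. dictator-game averages):

```python
# Dictator-game prediction applied to a hypothetical £10 album price.
price = 10.0           # assumed typical CD price, in £
share_paying = 0.70    # fraction of dictators who transfer anything
mean_transfer = 0.24   # mean fraction of the endowment transferred

predicted_payment = mean_transfer * price            # £2.40 among payers
expected_revenue_per_fan = share_paying * predicted_payment

print(predicted_payment, round(expected_revenue_per_fan, 2))
```

So the model predicts roughly £2.40 from each paying fan, or about £1.68 averaged over all downloaders, against the reported £4 average among payers.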

An internet survey of 3,000 people who downloaded the album found that most paid an average of £4, although there was a hardcore of 67 fans who thought that the record was worth more than £10 and a further 12 who claimed to have paid more than £40.

Radiohead could have earned more money with a simple trick: displaying a pair of eyes somewhere on the website. With this simple manipulation, Bateson et al. discovered that people contributed three times more to an honesty box for coffee when a pair of eyes was displayed than when a picture of flowers was (Bateson et al., 2006).

Also, when a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley & Fessler, 2005).

The New York Times has a good piece on fans' motivations to pay, with an interview with George Loewenstein: Radiohead Fans, Guided by Conscience (and Budget).

Related posts

References
  • Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of Being Watched Enhance Cooperation in a Real-World Setting. Biology Letters, 12, 412-414.
  • Forsythe, R., J. L. Horowitz, N. Savin, and M. Sefton, (1994). Fairness in Simple Bargaining Experiments. Games and Economic Behavior, vol. 6(3), 347–369.
  • Haley, K., & Fessler, D. (2005). Nobody’s Watching? Subtle Cues Affect Generosity in an Anonymous Economic Game. Evolution and Human Behavior, 26(3), 245-256.
  • Hoffman, E., Mc Cabe, K., Shachat, K., & Smith, V. (1994). Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior, 7, 346–380.
  • Leeds, J. (2007). Radiohead to Let Fans Decide What to Pay for Its New Album. The New York Times.
  • How much is Radiohead’s online album worth? Nothing at all, say a third of fans. The Times.
  • http://www.whatpricedidyouchose.com



A roundup of the most popular posts

According to the stats, the 5 most popular posts on Natural Rationality are:

  1. Strong reciprocity, altruism and egoism
  2. What is Wrong with the Psychology of Decision-Making?
  3. My brain has a politics of its own: neuropolitic musing on values and signal detection
  4. Rational performance and behavioral ecology
  5. Natural Rationality for Newbies

Enjoy!



10/10/07

Fairness and Schizophrenia in the Ultimatum

For the first time, a study looks at schizophrenic patients' behavior in the Ultimatum Game. Other studies of schizophrenic choice behavior have revealed that these patients have difficulty with decisions under ambiguity and uncertainty (Lee et al., 2007), have a slight preference for immediate over long-term rewards (Heerey et al., 2007), exhibit "strategic stiffness" (sticking to a strategy in sequential decision-making without integrating the outcomes of past choices; Kim et al., 2007), and perform worse on the Iowa Gambling Task (Sevy et al., 2007).

A research team from Israel ran an Ultimatum Game experiment with schizophrenic subjects (plus two control groups, one depressive, one non-clinical). Subjects had to split 20 New Israeli Shekels (NIS, about 5 US$). Although schizophrenic patients' Responder behavior was no different from the control groups', their Proposer behavior was: they tended to be less strategic.

With respect to offer level, offers fall into three categories: fair (10 NIS), unfair (less than 10 NIS), and hyper-fair (more than 10 NIS). Schizophrenic patients tended to make fewer 'unfair' offers and more 'hyper-fair' offers. Men were more generous than women.

According to the authors,

for schizophrenic Proposers, the possibility of dividing the money evenly was as reasonable as for healthy Proposers, whereas the option of being hyper-fair appears to be as reasonable as being unfair, in contrast to the pattern for healthy Proposers.
Agay et al. also studied the distribution of Proposers types according to their pattern of sequential decisions (how their second offer compared to their first). They identified three types:
  1. ‘Strong-strategic’ Proposers are those who adjusted their 2nd offer according to the response to their 1st offer, that is, raised their 2nd offer after their 1st one was rejected, or lowered their 2nd offer after their 1st offer was accepted.
  2. ‘Weak-strategic’ Proposers are those who perseverated, that is, their 2nd offer was the same as their 1st offer.
  3. Finally, ‘non-strategic’ Proposers are those who unreasonably reduced their offer after a rejection, or raised their offer after an acceptance.
20% of the schizophrenic group were non-strategic, while none of the healthy subjects were.
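Agay et al.'s three-way typology amounts to a simple decision rule over a Proposer's two sequential offers. Here is a minimal sketch (the function name and argument encoding are mine, not the paper's):

```python
def classify_proposer(first_offer, first_accepted, second_offer):
    """Classify a Proposer from two sequential Ultimatum offers,
    following Agay et al.'s three categories."""
    if second_offer == first_offer:
        return "weak-strategic"    # perseveration: same offer twice
    raised = second_offer > first_offer
    if raised != first_accepted:   # raised after rejection, or lowered after acceptance
        return "strong-strategic"
    return "non-strategic"         # lowered after rejection, or raised after acceptance

print(classify_proposer(5, False, 8))   # strong-strategic
print(classify_proposer(5, True, 5))    # weak-strategic
print(classify_proposer(5, False, 3))   # non-strategic
```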


The authors do not offer much explanation for these results:

In the present framework, schizophrenic patients seemed to deal with the cognition-emotion conflict described in the fMRI study of Sanfey et al. (2003) [NOTE: the authors of the first neuroeconomics Ultimatum study] in a manner similar to that of healthy controls. However, it is important to note that the low proportion of rejections throughout the whole experiment makes this conclusion questionable.
Another study, however, shows that "siblings of patients with schizophrenia rejected unfair offers more often compared to control participants" (van 't Wout et al., 2006, chap. 12), suggesting that Responder behavior might, after all, differ in people with a genetic liability to schizophrenia. Yet another unresolved issue!


References
  • Agay, N., Kron, S., Carmel, Z., Mendlovic, S., & Levkovitz, Y. Ultimatum bargaining behavior of people affected by schizophrenia. Psychiatry Research, In Press, Corrected Proof.
  • Hamann, J., Cohen, R., Leucht, S., Busch, R., & Kissling, W. (2007). Shared decision making and long-term outcome in schizophrenia treatment. The Journal of clinical psychiatry, 68(7), 992-7.
  • Heerey, E. A., Robinson, B. M., McMahon, R. P., & Gold, J. M. (2007). Delay discounting in schizophrenia. Cognitive neuropsychiatry, 12(3), 213-21.
  • Hyojin Kim, Daeyeol Lee, Shin, Y., & Jeanyung Chey. (2007). Impaired strategic decision making in schizophrenia. Brain Res.
  • Lee, Y., Kim, Y., Seo, E., Park, O., Jeong, S., Kim, S. H., et al. (2007). Dissociation of emotional decision-making from cognitive decision-making in chronic schizophrenia. Psychiatry research, 152(2-3), 113-20.
  • Mascha van ’t Wout, Ahmet Akdeniz, Rene S. Kahn, Andre Aleman. Vulnerability for schizophrenia and goal-directed behavior: the Ultimatum Game in relatives of patients with schizophrenia. (manuscript), from The nature of emotional abnormalities in schizophrenia: Evidence from patients and high-risk individuals / Mascha van 't Wout, 2006, Proefschrift Universiteit Utrecht.
  • McKay, R., Langdon, R., & Coltheart, M. (2007). Jumping to delusions? Paranoia, probabilistic reasoning, and need for closure. Cognitive neuropsychiatry, 12(4), 362-76.
  • Sevy, S., Burdick, K. E., Visweswaraiah, H., Abdelmessih, S., Lukin, M., Yechiam, E., et al. (2007). Iowa Gambling Task in schizophrenia: A review and new data in patients with schizophrenia and co-occurring cannabis use disorders. Schizophrenia Research, 92(1-3), 74-84.



10/5/07

Ape-onomics: Chimps in the Ultimatum Game and Rationality in the Wild

I recently discussed the experimental study of the Ultimatum Game, and showed that it has been studied in economics, psychology, anthropology, psychophysics and genetics. Now primatologists/evolutionary anthropologists Keith Jensen, Josep Call and Michael Tomasello (the same team that showed that chimpanzees are vengeful but not spiteful; see 2007a) had chimpanzees play the Ultimatum Game, or more precisely a mini-ultimatum, in which Proposers can choose between only two offers: for instance, a fair vs. an unfair one, or a fair vs. a hyper-fair one, etc. Chimps had to split grapes. The possible pairs were (in x/y pairs, x is the proposer's share, y the responder's):
  • 8/2 versus 5/5
  • 8/2 versus 2/8
  • 8/2 versus 8/2 (no choice)
  • 8/2 versus 10/0
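Under the standard self-interest model, and matching the chimps' observed pattern (accept any nonzero offer, reject only zero), each trial's prediction can be computed mechanically. A small sketch, with the four option pairs above encoded as (proposer, responder) splits; the function names are mine:

```python
# The four mini-ultimatum trials, as pairs of (proposer, responder) splits.
trials = [
    ((8, 2), (5, 5)),
    ((8, 2), (2, 8)),
    ((8, 2), (8, 2)),    # no real choice
    ((8, 2), (10, 0)),
]

def selfish_proposal(options):
    # A self-interested Proposer picks the split maximizing his own share.
    return max(options, key=lambda split: split[0])

def selfish_response(split):
    # Accept any nonzero share; reject only zero (the chimps' observed rule).
    return "accept" if split[1] > 0 else "reject"

for options in trials:
    chosen = selfish_proposal(options)
    print(options, "->", chosen, selfish_response(chosen))
```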

The experimenters used the following device:



Fig. 1. (from Jensen et al, 2007b) Illustration of the testing environment. The proposer, who makes the first choice, sits to the responder's left. The apparatus, which has two sliding trays connected by a single rope, is outside of the cages. (A) By first sliding a Plexiglas panel (not shown) to access one rope end and by then pulling it, the proposer draws one of the baited trays halfway toward the two subjects. (B) The responder can then pull the attached rod, now within reach, to bring the proposed food tray to the cage mesh so that (C) both subjects can eat from their respective food dishes (clearly separated by a translucent divider)

Results indicate that the chimps behave like Homo Economicus:
responders did not reject unfair offers when the proposer had the option of making a fair offer; they accepted almost all nonzero offers; and they reliably rejected only offers of zero (Jensen et al.)


As the authors conclude, "one of humans' closest living relatives behaves according to traditional economic models of self-interest, unlike humans, and (...) does not share the human sensitivity to fairness."

So Homo Economicus would be a better picture of nature, red in tooth and claw? Yes and no. In another recent paper, Brosnan et al. studied the endowment effect in chimpanzees. The endowment effect is a bias that makes us place a higher value on objects we own than on objects we do not. Well, chimps do that too. While they are usually indifferent between peanut butter and juice, once they "were given or ‘endowed’ with the peanut butter, almost 80 percent of them chose to keep the peanut butter, rather than exchange it for a juice bar" (from Vanderbilt news). They do not, however, show loss-aversion for non-food goods (a rubber-bone dog chew toy and a knotted-rope dog toy). Another related study (Chen et al., 2006) indicates that capuchin monkeys also exhibit loss-aversion.

So there seems to be an incoherence here: chimps are both economically and non-economically rational. But this is only, as the positivists used to say, a pseudo-problem: chimps tend to comply with standard, 'selfish' economics in social contexts, but not in individual contexts. The difference between us and them is that we truly are, by nature, political animals. Our social rationality requires reciprocity, negotiation, exchange, communication, fairness, cooperation, morality, etc., not plain selfishness. Chimps do cooperate and exhibit a slight taste for fairness (see section 1 of this post), but not in human proportions.





10/4/07

A distributed conception of decision-making

In a previous post, I suggested that there is something wrong with the standard (“cogitative”) conception of decision-making in psychology. In this post, I would like to outline an alternative conception, what we might call the “distributed conception”.

A close look at robotics suggests that decision-making should not be construed as a deliberative process. Deliberative control (Mataric, 1997) or sense-model-plan-act (SMPA) architectures have been unsuccessful at controlling autonomous robots (Brooks, 1999; Pfeifer & Scheier, 1999). In these architectures (e.g. Nilsson, 1984), "what to do?" was represented as a logical problem. Sensors or cameras represented the perceptible environment while internal processors converted sensory inputs into first-order predicate calculus. From this explicit model of its environment, the robot's central planner transformed a symbolic description of the world into a sequence of actions (see Hu & Brady, 1996, for a survey). Decision-making was handled by an expert system or a similar device. Thus the flow of information was one-way only: sensors → model → planner → effectors.

SMPA architectures could be effective, but only in environments carefully designed for the robot: colors, lighting and object disposition were optimally configured to simplify perception and movement. Brooks describes how the rooms where autonomous robots operated were set up:

The walls were of a uniform color and carefully lighted, with dark rubber baseboards, making clear boundaries with the lighter colored floor. (…) The blocks and wedges were painted different colors on different planar surfaces. (….) Blocks and wedges were relatively rare in the environment, eliminating problems due to partial obscurations (Brooks, 1999, p. 62)

Thus the cogitative conception of decision-making, and its SMPA implementations, had to be abandoned. If it did not work for mobile robots, it is justified to argue that the cogitative conception also has to be abandoned for cognitive agents in general. Agents do not make decisions simply by central planning and explicit model manipulation, but by coordinating multiple sensorimotor mechanisms. In order to design robots able to imitate people, for instance, roboticists build systems that control their behavior through multiple partial models. Mataric's (2002) robots learn to imitate by coordinating the following modules:

  1. a selective attention mechanism that extracts salient visual information (another agent's face, for instance)
  2. a sensorimotor mapping system that transforms visual input into motor programs
  3. a repertoire of motor primitives
  4. a classification-based learning mechanism that learns from visuo-motor mappings
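The coordination of these modules can be caricatured in code. The sketch below is only a toy pipeline in the spirit of Mataric's architecture: every function is a stand-in for a full perceptual or motor system, and all names and data structures are mine:

```python
# Toy sketch of a modular imitation controller (loosely after Mataric, 2002).

def attend(visual_scene):
    # Module 1: selective attention keeps only the salient features.
    return [f for f in visual_scene if f["salient"]]

def map_to_motor(features, primitives):
    # Modules 2-3: map attended features onto a fixed repertoire of primitives.
    return [primitives[f["kind"]] for f in features if f["kind"] in primitives]

def imitate(visual_scene, primitives):
    # Module 4 would refine these visuo-motor mappings by learning;
    # here we simply chain the modules.
    return map_to_motor(attend(visual_scene), primitives)

primitives = {"wave": "arm-oscillation", "nod": "head-tilt"}
scene = [{"kind": "wave", "salient": True}, {"kind": "blink", "salient": False}]
print(imitate(scene, primitives))   # ['arm-oscillation']
```

The point of the sketch is architectural: behavior emerges from coordinating partial modules, with no central planner holding a complete world model.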

Neuroeconomics also suggests another, similar avenue: there is no brain area, circuit or mechanism specialized in decision-making, but rather a collection of neural modules. Certain areas specialize in visual-saccadic decision-making (Platt & Glimcher, 1999). Social neuroeconomics indicates that decisions in experimental games are mainly affective computations: choice behavior in these games is reliably correlated with neural activations of social emotions such as the ‘warm glow’ of cooperation (Rilling et al., 2002), the ‘sweet taste’ of revenge (de Quervain et al., 2004) or the ‘moral disgust’ of unfairness (Sanfey et al., 2003). Subjects without affective experiences or affective anticipations are unable to make rational decisions, as Damasio and his colleagues discovered. Damasio found that subjects with lesions in the ventromedial prefrontal cortex (vmPFC, a brain area above the eye sockets) had huge problems coping with everyday tasks (Damasio, 1994). They were unable to plan meetings; they lost their money, family or social status. They were, however, completely functional in reasoning or problem-solving tasks. Moreover, Damasio and his collaborators found that these subjects had lower affective reactions. They did not feel sad about their situation, even if they perfectly understood what "sad" means, and seemed unable to learn from bad experiences. The researchers concluded that these subjects were unable to use emotions to aid decision-making, a hypothesis that also implies that in normal subjects, emotions do aid decision-making.

Consequently, the "Distributed Conception of Decision-making" suggests that decision-making is:

Sensorimotor: the mechanisms for decision-making are not only, and not necessarily, intellectual, high-level and explicit. Decision-making is the whole organism's sensorimotor control.
Situated: a decision is not just a step-by-step internal computation, but a continuous and dynamic adjustment between the agent and its environment that develops over the whole lifespan. Decision-making is always physically and (most of the time) socially situated: ecological situatedness is both a constraint on decision-making and a set of informational resources that helps agents cope with it.
Psychology should do more than document our inability to follow Bayesian reasoning in paper-and-pen experiments; it should study our situated sensorimotor control capacities. Decision-making should not be a secondary topic for psychology but, following Gintis, "the central organizing principle of psychology" (Gintis, 2007, p. 1). Decision-making is more than an activity we consciously engage in occasionally: it is rather the very condition of existence (as Herrnstein said, "all behaviour is choice"; Herrnstein, 1961).

Therefore, deciding should not be studied as a separate topic (e.g. perception), an occasional activity (e.g. chess-playing) or a high-level competence (e.g. logical inference), but as robotic control. A complete, explicit model of the environment, manipulated by a central planner, is not useful for robots. New Robotics (Brooks, 1999) revealed that effective and efficient decision-making is achieved through multiple partial models updated in real time. There is no need to integrate models into a unified representation or a common code: distributed architectures, where many processes run in parallel, achieve better results. As Barsalou et al. (2007) argue, cognition is coordinated non-cognition; similarly, decision-making is coordinated non-decision-making.

If decision-making is the central organizing principle of psychology, all the branches of psychology can be understood as research fields that investigate different aspects of decision-making. Abnormal psychology explains how deficient mechanisms impair decision-making. Behavioral psychology focuses on choice behavior and behavioral regularities. Cognitive psychology describes the mechanisms of valuation, goal representation and preference, and how they contribute to decision-making. Comparative psychology analyzes the variations in neural, behavioral and cognitive processes among different clades. Developmental psychology establishes the evolution of decision-making mechanisms across the lifespan. Neuropsychology identifies the neural substrates of these mechanisms. Personality psychology explains interindividual variations in decision-making, our various decision-making “profiles”. Social psychology can shed light on social decision-making, that is, either collective decision-making (when groups or institutions make decisions) or individual decision-making in social contexts. Finally, we could also add environmental psychology (how agents use their environment to simplify their decisions) and evolutionary psychology (how decision-making mechanisms are – or are not – adaptations).



References
  • Barsalou, Breazeal, & Smith. (2007). Cognition as coordinated non-cognition. Cognitive Processing, 8(2), 79-91.
  • Brooks, R. A. (1999). Cambrian Intelligence : The Early History of the New Ai. Cambridge, Mass.: MIT Press.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary : Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Damasio, A. R. (1994). Descartes' Error : Emotion, Reason, and the Human Brain. New York: Putnam.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., & Fehr, E. (2004). The Neural Basis of Altruistic Punishment. Science, 305(5688), 1254-1258.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Herrnstein, R. J. (1961). Relative and Absolute Strength of Response as a Function of Frequency of Reinforcement. J Exp Anal Behav., 4(4), 267–272.
  • Hu, H., & Brady, M. (1996). A Parallel Processing Architecture for Sensor-Based Control of Intelligent Mobile Robots. Robotics and Autonomous Systems, 17(4), 235-257.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Mataric, M. J. (1997). Behaviour-Based Control: Examples from Navigation, Learning, and Group Behaviour. Journal of Experimental & Theoretical Artificial Intelligence, 9(2 - 3), 323-336.
  • Mataric, M. J. (2002). Sensory-Motor Primitives as a Basis for Imitation: Linking Perception to Action and Biology to Robotics. Imitation in Animals and Artifacts, 391–422.
  • Nilsson, N. J. (1984). Shakey the Robot: SRI International.
  • Pfeifer, R., & Scheier, C. (1999). Understanding Intelligence. Cambridge, Mass.: MIT Press.
  • Platt, M. L., & Glimcher, P. W. (1999). Neural Correlates of Decision Variables in Parietal Cortex. Nature, 400(6741), 238.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300(5626), 1755-1758.



10/3/07

What is Wrong with the Psychology of Decision-Making?

In psychology textbooks, decision-making figures most of the time in the last chapters. These chapters usually acknowledge the failure of the Homo economicus model and propose to understand human irrationality as the product of heuristics and biases that may be rational under certain environmental conditions. In a recent article, H. Gintis documents this neglect:

(…) a widely used text of graduate-level readings in cognitive psychology, (Sternberg & Wagner, 1999) devotes the ninth of eleven chapters to "Reasoning, Judgment, and Decision Making," offering two papers, the first of which shows that human subjects generally fail simple logical inference tasks, and the second shows that human subjects are irrationally swayed by the way a problem is verbally "framed" by the experimenter. A leading undergraduate cognitive psychology text (Goldstein, 2005) placed "Reasoning and Decision Making" the last of twelve chapters. This includes one paragraph describing the rational actor model, followed by many pages purporting to explain why it is wrong. (…) in a leading behavioral psychology text (Mazur, 2002), choice is covered in the last of fourteen chapters, and is limited to a review of the literature on choice between concurrent reinforcement schedules and the capacity to defer gratification (Gintis, 2007, pp. 1-2)
Why? The standard conception of decision-making in psychology can be summarized by two claims, one conceptual, one empirical. Conceptually, the standard conception holds that decision-making is a separate topic: it is one of the subjects that psychologists may study, together with categorization, inference, perception, emotion, personality, etc. As Gintis showed, decision-making has its own chapters (usually the last) in psychology textbooks. On the empirical side, the standard conception construes decision-making as an explicit deliberative process, such as reasoning. For instance, in a special issue of Cognition on decision-making (volume 49, issues 1-2, pages 1-187), one finds the following claims:

Reasoning and decision making are high-level cognitive skills […]
(Johnson-Laird & Shafir, 1993, p. 1)

Decisions . . . are often reached by focusing on reasons that justify the selection of one option over another

(Shafir et al., 1993, p. 34)

Hence decision-making is studied mostly through multiple-choice tests using the traditional paper-and-pen method, which clearly suggests that deciding is considered an explicit process. Psychological research thus assumes that the subjects' competence in probabilistic reasoning, as revealed by these tests, is a good description of their decision-making capacities.

These two claims, however, are not unrelated: if decision-making is a central, high-level faculty that stands between perception and action, then it can be studied in isolation. They constitute a coherent whole, something philosophers of science would call a paradigm. This paradigm is built around a particular view of decision-making (and more generally, cognition) that could be called "cogitative":

Perception is commonly cast as a process by which we receive information from the world. Cognition then comprises intelligent processes defined over some inner rendition of such information. Intentional action is glossed as the carrying out of commands that constitute the output of a cogitative, central system. (Clark, 1997, p. 51)


In another post, I'll present an alternative to the Cogitative conception, based on research in neuroeconomics, robotics and biology.

You can find Gintis's article on his page, together with other great papers.

References

  • Clark, A. (1997). Being There : Putting Brain, Body, and World Together Again. Cambridge, Mass.: MIT Press.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Goldstein, E. B. (2005). Cognitive Psychology : Connecting Mind, Research, and Everyday Experience. Australia Belmont, CA: Thomson/Wadsworth.
  • Johnson-Laird, P. N., & Shafir, E. (1993). The Interaction between Reasoning and Decision Making: An Introduction. Cognition, 49(1-2), 1-9.
  • Mazur, J. E. (2002). Learning and Behavior (5th ed.). Upper Saddle River, N.J.: Prentice Hall.
  • Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-Based Choice. Cognition, 49(1-2), 11-36.
  • Sternberg, R. J., & Wagner, R. K. (1999). Readings in Cognitive Psychology. Fort Worth, TX: Harcourt Brace College Publishers.



10/2/07

The Ultimatum Game: Economics, Psychology, Anthropology, Psychophysics, Neuroscience and now, Genetics

The Ultimatum Game (Güth et al., 1982) is a one-shot bargaining game. A first player (the Proposer) offers a fraction of a monetary amount M; the second player (the Responder) may either accept or reject the proposal. If the Responder accepts, she keeps the offered amount while the Proposer keeps the difference. If she rejects it, both players get nothing. According to game theory, the subgame perfect equilibrium (a variety of Nash equilibrium for dynamic games) is the following set of strategies:

  • The Proposer should offer the smallest possible amount (in order to keep as much money as possible)
  • The Responder should accept any amount (because a small amount should be better than nothing)
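This backward-induction prediction can be sketched in a few lines of code: a minimal illustration with a discrete pie (the function names and the unit size of 1 are my assumptions):

```python
def responder_accepts(offer):
    # Last mover first: any positive amount beats getting nothing.
    return offer > 0

def proposer_offer(pie):
    # The Proposer offers the smallest amount the Responder still accepts.
    return min(o for o in range(pie + 1) if responder_accepts(o))

pie = 10
offer = proposer_offer(pie)
print("offer:", offer, "| proposer keeps:", pie - offer)   # offer: 1 | proposer keeps: 9
```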

The UG has been experimentally tested in a variety of contexts, where different parameters of the game were modified: age, sex, the amount of money, the degree of anonymity, the length of the game, etc. (Camerer & Thaler, 1995; Henrich et al., 2004; Oosterbeek et al., 2004; Samuelson, 2005). The results show a robust tendency: the game-theoretic strategy is almost never followed. People tend to anticipate and make "fair" offers. While Proposers offer about 50% of the "pie", Responders tend to accept these offers while rejecting most "unfair" offers (less than 20% of M). Some will even reject "too fair" offers (Bahry & Wilson, 2006).

Henrich et al. (2005) studied Ultimatum behavior in 15 different small-scale societies. They found cultural variation, but with a constant pattern of reciprocity: differences are greater between groups than between individuals within a group. Subjects are closer to equilibrium strategies in four situations: when playing against a computer (Blount, 1995; Rilling et al., 2004; Sanfey et al., 2003; van 't Wout et al., 2006), when players are groups (Robert & Carnevale, 1997), when they are autistic (Hill & Sally, 2002), or when they are trained in decision and game theory, like economists and economics students (Carter & Irons, 1991; Frank et al., 1993; Kahneman et al., 1986).

Neuroeconomics shows that Ultimatum decision-making relies mainly on three areas: the anterior insula (AI), often associated with negative emotional states like disgust or anger; the dorsolateral prefrontal cortex (DLPFC), associated with cognitive control, attention and goal maintenance; and the anterior cingulate cortex (ACC), associated with cognitive conflict, motivation, error detection and emotional modulation (Sanfey et al., 2003). All three areas show stronger activity when the Responder faces an unfair offer. When offers are unfair, the brain faces a dilemma: punish the unfair player, or get a little money (which is better than nothing)? The conflict involves firstly the AI. This area is more active when unfair offers are proposed, and even more so when the Proposer is human (compared to a computer). Its activity is also correlated with the degree of unfairness (Sanfey et al., 2003, p. 1756) and with the decision to reject unfair offers. Skin conductance experiments show that unfair offers and their rejection are associated with greater skin conductance (van 't Wout et al., 2006). DLPFC activity remains relatively constant across unfair offers. When AI activation is greater than DLPFC activation, unfair offers tend to be rejected, while they tend to be accepted when DLPFC activation is greater.
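The AI/DLPFC finding is sometimes glossed as a simple comparator. As a toy formalization (illustrative only; this is not Sanfey et al.'s model, and the activation values are arbitrary units):

```python
def predict_response(ai_activity, dlpfc_activity):
    # Toy gloss on Sanfey et al. (2003): rejection when the 'emotional'
    # signal (anterior insula) outweighs the 'control' signal (DLPFC).
    return "reject" if ai_activity > dlpfc_activity else "accept"

print(predict_response(0.8, 0.5))  # reject: insula dominates
print(predict_response(0.3, 0.5))  # accept: control dominates
```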

A new study by Wallace, Cesarini, Lichtenstein & Johannesson now investigates the influence of genes on Ultimatum behavior. They compared monozygotic (same set of genes) and dizygotic ("non-identical") twins. Their statistical analysis identified which factors explain the variation in behavior: genetic effects, common environmental effects, and nonshared environmental effects. They found that genetics accounts for 42% of the variation in Responder behavior: identical twins are more likely to behave similarly in their reactions to Ultimatum propositions. Thus, sensitivity to fairness might have a genetic component, an idea that proponents of the Strong Reciprocity Hypothesis put forth but did not back with genetic studies. Yet this does not mean that fairness preferences followed the evolutionary path advocated by SRH proponents:

Although our results are consistent with an evolutionary origin for fairness preferences, it is important to remember that heritability measures the genetically determined variation around some average behavior. Hence, it does not provide us with any direct evidence with regard to the evolutionary dynamics that brought it about (Wallace et al., p. 15633)
Of course the big question remains: how is it that genes influence the development of the neural structures that control fairness preferences and reciprocal behavior? A question for evo-devo-neuro-psycho-economics!
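For readers unfamiliar with twin designs, a classical first-pass estimate of heritability is Falconer's formula, h² = 2(r_MZ − r_DZ). The published study fits a fuller variance-decomposition model, so the sketch below, with made-up twin correlations, only illustrates the logic:

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's decomposition of behavioral variance into additive
    genetic (A), common-environment (C) and unique-environment (E) shares,
    from monozygotic and dizygotic twin correlations."""
    a = 2 * (r_mz - r_dz)   # heritability h^2
    c = r_mz - a            # what MZ similarity genetics can't explain
    e = 1 - r_mz            # everything twins don't share
    return a, c, e

# Illustrative correlations (NOT the study's data), chosen so that h^2 = 0.42:
a, c, e = falconer_ace(r_mz=0.40, r_dz=0.19)
print(round(a, 2), round(c, 2), round(e, 2))
```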

Links:
  • Wallace, B., Cesarini, D., Lichtenstein, P., & Johannesson, M. (2007). Heritability of ultimatum game responder behavior. Proceedings of the National Academy of Sciences, 0706642104. [Open Access Article]

  • Genes influence people's choices in economics game (MIT news)
    Excerpt:
    "Compared to common environmental influences such as upbringing, genetic influences appear to be a much more important source of variation in how people play the game," Cesarini said.
    "This raises the intriguing possibility that many of our preferences and personal economic choices are subject to substantial genetic influence," said lead author Bjorn Wallace of the Stockholm School of Economics, who conceived of the study.


    References:

    • Bahry, D. L., & Wilson, R. K. (2006). Confusion or Fairness in the Field? Rejections in the Ultimatum Game under the Strategy Method. Journal of Economic Behavior & Organization, 60(1), 37-54.
    • Blount, S. (1995). When Social Outcomes Aren't Fair: The Effect of Causal Attributions on Preferences. Organizational Behavior and Human Decision Processes, 63(2), 131-144.
    • Camerer, C., & Thaler, R. H. (1995). Anomalies: Ultimatums, Dictators and Manners. The Journal of Economic Perspectives, 9(2), 209-219.
    • Carter, J. R., & Irons, M. D. (1991). Are Economists Different, and If So, Why? The Journal of Economic Perspectives, 5(2), 171-177.
    • Frank, R. H., Gilovich, T., & Regan, D. T. (1993). Does Studying Economics Inhibit Cooperation? The Journal of Economic Perspectives, 7(2), 159-171.
    • Güth, W., Schmittberger, R., & Schwarze, B. (1982). An Experimental Analysis of Ultimatum Bargaining. Journal of Economic Behavior and Organization, 3(4), 367-388.
    • Henrich, J., Boyd, R., Bowles, S., Camerer, C., E., F., & Gintis, H. (2004). Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies: Oxford University Press.
    • Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., McElreath, R., Alvard, M., Barr, A., Ensminger, J., Henrich, N. S., Hill, K., Gil-White, F., Gurven, M., Marlowe, F. W., Patton, J. Q., & Tracer, D. (2005). "Economic Man" In Cross-Cultural Perspective: Behavioral Experiments in 15 Small-Scale Societies. Behavioral and Brain Sciences, 28(6), 795-815; discussion 815-755.
    • Hill, E., & Sally, D. (2002). Dilemmas and Bargains: Theory of Mind, Cooperation and Fairness. working paper, University College, London.
    • Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1986). Fairness and the Assumptions of Economics. The Journal of Business, 59(4), S285-S300.
    • Oosterbeek, H., S., R., & van de Kuilen, G. (2004). Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics 7, 171-188.
    • Rilling, J. K., Sanfey, A. G., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2004). The Neural Correlates of Theory of Mind within Interpersonal Interactions. Neuroimage, 22(4), 1694-1703.
    • Robert, C., & Carnevale, P. J. (1997). Group Choice in Ultimatum Bargaining. Organizational Behavior and Human Decision Processes, 72(2), 256-279.
    • Samuelson, L. (2005). Economic Theory and Experimental Economics. Journal of Economic Literature, 43, 65-107.
    • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300(5626), 1755-1758.
    • van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective State and Decision-Making in the Ultimatum Game. Exp Brain Res, 169(4), 564-568.
    • Wallace, B., Cesarini, D., Lichtenstein, P., & Johannesson, M. (2007). Heritability of ultimatum game responder behavior. Proceedings of the National Academy of Sciences, 0706642104.



10/1/07

The Rationality of Soccer Goalkeepers




A study in the Journal of Economic Psychology analyzes soccer goalkeepers' penalty decision-making: jump left, jump right, or stay in the center. Bar-Eli et al.'s study of

(... ) 286 penalty kicks in top leagues and championships worldwide shows that given the probability distribution of kick direction, the optimal strategy for goalkeepers is to stay in the goal's center.
The probabilities of stopping a penalty kick were the following:

[Table from Bar-Eli et al., not reproduced here: stopping probabilities broken down by the goalkeeper's action and the kick's direction]
Why do goalkeepers jump left or right, when the optimal strategy is to stay in the goal's center? Because jumping is the norm, and thus

(...) a goal scored yields worse feelings for the goalkeeper following inaction (staying in the center) than following action (jumping), leading to a bias for action.
This study illustrates the tension between internal (subjective) and external (objective) rationality discussed in my last post: statistically speaking, as a rule for winning games, jumping is (externally) suboptimal; but given the social norm and the associated emotional feelings, jumping is (internally) rational. Note also how modeling the game matters for normative issues: two other studies (Palacios-Huerta, 2003; Chiappori et al., 2002) concluded that goalkeepers play a rational strategy, but they supposed that shooter and goalkeeper had only two options, (kick/jump) left or right. Bar-Eli et al. added (kick/stay) in the center.
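The normative claim boils down to an expected-value comparison over the keeper's three actions. A sketch with hypothetical numbers (not Bar-Eli et al.'s data; the kick distribution and stop probabilities below are invented for illustration):

```python
# Hypothetical figures only; see Bar-Eli et al. for the real data.
kick_dist = {"left": 0.32, "center": 0.29, "right": 0.39}   # where kicks go
stop_prob = {   # P(stop | keeper action, kick direction)
    "jump-left":  {"left": 0.30, "center": 0.10, "right": 0.00},
    "stay":       {"left": 0.10, "center": 0.60, "right": 0.10},
    "jump-right": {"left": 0.00, "center": 0.10, "right": 0.25},
}

def expected_stop(action):
    # Expected probability of stopping the kick, given the keeper's action.
    return sum(kick_dist[d] * stop_prob[action][d] for d in kick_dist)

for action in stop_prob:
    print(action, round(expected_stop(action), 3))
best = max(stop_prob, key=expected_stop)
print("optimal action:", best)   # staying in the center wins
```

With these numbers, staying dominates because a centered keeper stops central kicks at a high rate while still occasionally stopping lateral ones.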






9/29/07

How is (internal) irrationality possible?

Much unhappiness (...) has less to do with not getting what we want, and more to do with not wanting what we like.(Gilbert & Wilson, 2000)

Yes, we should make choices by multiplying probabilities and utilities, but how can we possibly do this if we can’t estimate those utilities beforehand? (Gilbert, 2006)

When we preview the future and prefeel its consequences, we are soliciting advice from our ancestors. This method is ingenious but imperfect. (Gilbert, et al. 2007)


Although we easily and intuitively assess each other's behavior, speech, or character as irrational, providing a non-trivial account of irrationality might be tricky (something we philosophers like to deal with!). Let's distinguish internal and external assessments of rationality: an internal (or subjective) assessment of rationality is an evaluation of the coherence of intentions, actions and plans. An external (or objective) assessment of rationality is an evaluation of the effectiveness of a rule or procedure: it assesses the optimality of a rule for achieving a certain goal. An action can be rational from the first perspective but not from the second one, and vice versa. Hence subjects' poor performance in probabilistic reasoning can be internally rational without being externally rational: the Gambler's fallacy is and will always be a fallacy, but it is possible that fallacious reasoners follow coherent rules, maximizing an unorthodox utility function. Consequently, it is easy to understand how one can be externally irrational, but less easy to make sense of internal irrationality.

An interesting suggestion comes from hedonic psychology, mostly Dan Gilbert’s research: irrationality is possible if agents fail to want things they like. Gilbert’s research focuses on affective forecasting, i.e., forecasting one’s affect (emotional state) in the future (Gilbert, 2006; Wilson & Gilbert, 2003): anticipating the valence, intensity, duration and nature of specific emotions. Just as Tversky and Kahneman studied biases in probabilistic reasoning, he and his collaborators study biases in hedonic reasoning.

In many cases, for instance, people do not like or dislike an event as much as they thought they would. They want things that do not promote their welfare, and fail to want things that would. This is what Gilbert calls “miswanting”. We miswant, Gilbert explains, because of affective forecasting biases.

Take for instance the impact bias: subjects overestimate the duration (durability bias) or intensity (intensity bias) of future emotional states (Gilbert et al., 1998):

“Research suggests that people routinely overestimate the emotional impact of negative events ranging from professional failures and romantic breakups to electoral losses, sports defeats, and medical setbacks”. (Gilbert et al., 2004a)

They also underestimate the emotional impact of positive events such as winning a lottery (Brickman et al., 1978): newly rich lottery winners rated their happiness at that stage of their life as only 4.0 (on a 6-point scale, 0 to 5), which does not differ significantly from the rating of the control subjects. Also surprising to many people is the fact that paraplegics and quadriplegics rated their lives at 3.0, above the midpoint of the scale (2.5). In another study, Boyd et al. (1990) elicited the utility of life with a colostomy from several different groups: patients who had rectal cancer and had been treated by radiation, patients who had rectal cancer and had been treated by a colostomy, physicians experienced in treating patients with gastrointestinal malignancies, and two groups of healthy individuals. The patients with a colostomy and the physicians rated life with a colostomy significantly higher than did the other three groups.

Another bias is the empathy gap: humans fail to predict correctly how they will feel in the future, i.e., what kind of emotional state they will be in. Sometimes we fail to take into account how much our psychological “immune system” will soften our reactions to negative events: people do not realize how thoroughly they will rationalize negative outcomes once they occur (immune neglect). People also often mispredict regret (Gilbert et al., 2004b):
the top six biggest regrets in life center on (in descending order) education, career, romance, parenting, the self, and leisure. (…) people's biggest regrets are a reflection of where in life they see their largest opportunities; that is, where they see tangible prospects for change, growth, and renewal. (Roese & Summerville, 2005).
So a perfectly rational agent, at time t, would choose to do X at t+1 given what she expects her future valuation of X to be. As these studies show, however, we are bad predictors of our own future subjective appreciation: the person we are at t+1 may not entirely agree with the person we were at t. In one sense, this gives a non-trivial meaning to internal irrationality: since our affective forecasting competence is biased, we may not always choose what we like or like what we choose. Hedonic psychology may thus have identified an incoherence between intentions, actions and plans: an internal failure of our practical rationality.
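Miswanting itself can be sketched in the same toy style (again, every utility and duration below is hypothetical, purely for illustration): an agent who chooses by forecast affect, while neglecting hedonic adaptation (the durability bias), can systematically prefer the option that will in fact yield less experienced utility.

```python
# Hypothetical options: (initial affect per week, weeks it actually lasts
# before hedonic adaptation returns the agent to baseline)
options = {
    "lottery_win": (5.0, 2),   # intense but fades fast
    "good_habit": (1.0, 50),   # mild but durable
}
HORIZON = 52  # weeks the agent cares about (arbitrary)

def experienced(option: str) -> float:
    """Total affect actually experienced, given adaptation."""
    affect, duration = options[option]
    return affect * min(duration, HORIZON)

def forecast(option: str) -> float:
    """Durability-biased forecast: the agent assumes the feeling
    persists undiminished over the whole horizon."""
    affect, _ = options[option]
    return affect * HORIZON

chosen = max(options, key=forecast)      # what the agent wants
liked = max(options, key=experienced)    # what the agent would like
```

Here the agent wants the lottery win (forecast 5.0 × 52) but would actually like the durable option more (experienced 1.0 × 50 vs. 5.0 × 2): a choice that is coherent with its forecasts, yet incoherent with its own future evaluations.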


References

  • Berns, G. S., Chappelow, J., Cekic, M., Zink, C. F., Pagnoni, G., & Martin-Skurski, M. E. (2006). Neurobiological Substrates of Dread. Science, 312(5774), 754-758.
  • Boyd, N. F., Sutherland, H. J., Heasman, K. Z., Tritchler, D. L., & Cummings, B. J. (1990). Whose Utilities for Decision Analysis? Med Decis Making, 10(1), 58-67.
  • Brickman, P., Coates, D., & Janoff-Bulman, R. (1978). Lottery Winners and Accident Victims: Is Happiness Relative? J Pers Soc Psychol, 36(8), 917-927.
  • Gilbert, D. T. (2006). Stumbling on Happiness (1st ed.). New York: A.A. Knopf.
  • Gilbert, D. T., & Ebert, J. E. J. (2002). Decisions and Revisions: The Affective Forecasting of Changeable Outcomes. Journal of Personality and Social Psychology, 82(4), 503–514.
  • Gilbert, D. T., Lieberman, M. D., Morewedge, C. K., & Wilson, T. D. (2004a). The Peculiar Longevity of Things Not So Bad. Psychological Science, 15(1), 14-19.
  • Gilbert, D. T., Morewedge, C. K., Risen, J. L., & Wilson, T. D. (2004b). Looking Forward to Looking Backward: The Misprediction of Regret. Psychological Science, 15(5), 346-350.
  • Gilbert, D. T., Pinel, E. C., Wilson, T. D., Blumberg, S. J., & Wheatley, T. P. (1998). Immune Neglect: A Source of Durability Bias in Affective Forecasting. J Pers Soc Psychol, 75(3), 617-638.
  • Gilbert, D. T., & Wilson, T. D. (2000). Miswanting: Some Problems in the Forecasting of Future Affective States. Feeling and thinking: The role of affect in social cognition, 178–197.
  • Kermer, D. A., Driver-Linn, E., Wilson, T. D., & Gilbert, D. T. (2006). Loss Aversion Is an Affective Forecasting Error. Psychological Science, 17(8), 649-653.
  • Loomes, G., & Sugden, R. (1982). Regret Theory: An Alternative Theory of Rational Choice under Uncertainty. The Economic Journal, 92(368), 805-824.
  • Roese, N. J., & Summerville, A. (2005). What We Regret Most... And Why. Personality and Social Psychology Bulletin, 31(9), 1273.
  • Seidl, C. (2002). Preference Reversal. Journal of Economic Surveys, 16(5), 621-655.
  • Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics. Journal of Socio-Economics, 31(4), 329-342.
  • Srivastava, A., Locke, E. A., & Bartol, K. M. (2001). Money and Subjective Well-Being: It's Not the Money, It's the Motives. J Pers Soc Psychol, 80(6), 959-971.
  • Tversky, A., & Thaler, R. H. (1990). Anomalies: Preference Reversals. Journal of Economic Perspectives, 4(2), 201-211.
  • Wilson, T. D., & Gilbert, D. T. (2003). Affective Forecasting. Advances in experimental social psychology, 35, 345-411.