Natural Rationality | decision-making in the economy of nature

5/21/08

Voting and low-information rationality

In this year of American Presidential election, I noticed that many political analysts have referred to a book by political scientist Samuel L. Popkin, The Reasoning Voter. One of his points is that voters are not completely irrational, but rather behave as decision-makers under uncertainty. They use "low-information signals" such as appearances, character traits or "whether you know how to roll a bowling ball or wear an American-flag pin" (from Time's Joe Klein column). In other words, political heuristics.



Here is a more detailed summary from Wikisummary.

Low information rationality

Popkin's analysis is based on one main premise: voters use low information rationality gained in their daily lives, through the media and through personal interactions, to evaluate candidates and facilitate electoral choices.

Political "Knowledge": Despite a more educated electorate, knowledge of civics has not increased significantly in forty years. According to Popkin, theorists who argue that political competence could be measured by knowledge of "civics book" knowledge and names of specific bills (i.e. the Michigan studies) have missed the larger point that voters do manage to gain an understanding of where candidates stand on important issues. He argues that education has not changed how people think, but it does allow us to better interpret and connect different cues.

Information as a By-Product: Popkin argues that most of the information voters learn about politics is picked up as a by-product of activities they pursue as a part of daily life (homeowners learn about interest rates, shoppers learn about prices and inflation etc.--thus, people know how the economy is doing). Media helps to explain what politicians are doing and the relevance of those actions for individuals, and campaigns help to clarify the issues. Voters develop affinity towards like-minded opinion leaders in media and in personal interactions.

Media and Friends: Interpersonal communication is seen as a way of developing assessments of parties and candidates. Information received from the media is discussed with friends and helps to create opinions. While voters do care about issue proximity, they also focus on candidate competency and sincerity and rely heavily on cues to make these evaluations.


Other related post:

A Neuropolitic look at political psychology





4/28/08

Dan Ariely on Understanding the Logic Behind Illogical Decisions

Found on the American Management Association website: a podcast on behavioral economics.


Dan Ariely on Understanding the Logic Behind Illogical Decisions

An MIT professor discovers that people tend to behave irrationally in a predictable fashion.

April 18, 2008 / Podcast # 08-16

Dan Ariely

Irrational behavior is a part of human nature, but as MIT professor Dan Ariely has discovered in 20 years of researching behavioral economics, people tend to behave irrationally in a predictable fashion. Drawing on psychology and economics, behavioral economics can show us why cautious people make poor decisions about sex when aroused, why patients get greater relief from a more expensive drug over its cheaper counterpart and why honest people may steal office supplies or communal food, but not money. According to Ariely’s new book Predictably Irrational, our understanding of economics, now based on the assumption of a rational subject, should, in fact, be based on our systematic, unsurprising irrationality. Ariely argues that greater understanding of previously ignored or misunderstood forces (emotions, relativity and social norms) that influence our economic behavior brings a variety of opportunities for reexamining individual motivation and consumer choice, as well as economic and educational policy.



4/20/08

a trader's morning testosterone level predicts his day's profitability

according to a new study published this week in PNAS:


J. M. Coates and J. Herbert

Endogenous steroids and financial risk taking on a London trading floor
http://dx.doi.org/10.1073/pnas.0704025105


Little is known about the role of the endocrine system in financial risk taking. Here, we report the findings of a study in which we sampled, under real working conditions, endogenous steroids from a group of male traders in the City of London. We found that a trader's morning testosterone level predicts his day's profitability. We also found that a trader's cortisol rises with both the variance of his trading results and the volatility of the market. Our results suggest that higher testosterone may contribute to economic return, whereas cortisol is increased by risk. Our results point to a further possibility: testosterone and cortisol are known to have cognitive and behavioral effects, so if the acutely elevated steroids we observed were to persist or increase as volatility rises, they may shift risk preferences and even affect a trader's ability to engage in rational choice.

See a good summary in Science.



3/26/08

Nature Neuroscience Special Issue about Decision Neuroscience

The latest issue of Nature Neuroscience features four great papers on the neuroscience of decision-making:

  • Choice, uncertainty and value in prefrontal and cingulate cortex
    Matthew F S Rushworth and Timothy E J Behrens
  • Risky business: the neuroeconomics of decision making under uncertainty
    Michael L Platt and Scott A Huettel
  • Game theory and neural basis of social decision making
    Daeyeol Lee
  • Modulators of decision making
    Kenji Doya
Enjoy!



3/11/08

Why Neuroeconomics Needs a Concept of (Natural) Rationality

Neuroeconomists (more than "decision neuroscientists") often report their findings as strong evidence against the rationality of decision-makers. In the case of cooperation, it is often claimed that emotions motivate cooperation since neural activity elicited by cooperation overlaps with neural activity elicited by hedonic rewards (Fehr & Camerer, 2007). Also, when subjects have to choose whether or not they would purchase a product, desirable products cause activation in the nucleus accumbens (associated with anticipation of pleasure). However, if the price is seen as exaggerated, activity is detected in the insula (involved in disgust and fear; Knutson et al., 2007).

The accumulation of evidence about the engagement of affective areas in decision-making is indisputable, and seems to make a strong case against a once pervasive "rationalist" vision of decision-making in cognitive science and economics. This is not, however, a definitive argument for emotivism (the view that we choose with our "gut feelings") and irrationalism. For at least three reasons (methodological, empirical and conceptual), these findings should not be seen as supporting an emotivist account.

First, characterizing a brain area as "affective" or "emotional" is misleading. There is no clear distinction, in the brain, between affective and cognitive areas. For instance, the anterior insula is involved in disgust, but also in disbelief (Harris et al., 2007). A high-level task such as cognitive control (e.g. holding items in working memory in a goal-oriented task) requires both "affective" and "cognitive" areas (Pessoa, 2008). The affective/cognitive distinction is a folk-psychological one, not a reflection of brain anatomy and connectivity. There is a certain degree of specialization, but generally speaking any task recruits a wide array of areas, and each area is redeployed in many tasks. In complex beings like us, so-called "affective" areas are never purely affective: they always contribute to higher-level cognition, such as logical reasoning (Houde & Tzourio-Mazoyer, 2003). Similarly, while the amygdala has often been described as a "fear center," its function is much more complex: it modulates emotional information, reacts to unexpected stimuli and is heavily recruited in visual attention, a "cognitive" function. It is therefore wrong to consider "affective" areas as small emotional agents that are happy or sad and make us happy or sad. Instead of employing folk-psychological categories, their functional contribution should be understood in computational terms: how they process signals, how information is routed between areas and how they affect behavior and thought.

Second, even if there are affective areas, they are always complemented or supplemented by "cognitive" ones: the dorsolateral prefrontal cortex (DLPFC), for instance (involved in cognitive control and goal maintenance), is recruited in almost all decision-making tasks, and has been shown to be involved in norm-compliant behavior (Spitzer et al., 2007) and purchasing decisions. In the ultimatum game, besides the anterior insula, two other areas are recruited: the DLPFC and the anterior cingulate cortex (ACC), involved in cognitive conflict and emotional modulation. Explanations of ultimatum decisions spell out neural information-processing mechanisms, not "emotions".

Check for instance the neural circuitry involved in cognitive control: you would think it involves only prefrontal areas, but as it turns out, "cognitive" and "affective" areas are required for this competence:


[Legend: This extended control circuit contains traditional control areas, such as the anterior cingulate cortex (ACC) and the lateral prefrontal cortex (LPFC), in addition to other areas commonly linked to affect (amygdala) and motivation (nucleus accumbens). Diffuse, modulatory effects are shown in green and originate from dopamine-rich neurons from the ventral tegmental area (VTA). The circuit highlights the cognitive–affective nature of executive control, in contrast to more purely cognitive-control proposals. Several connections are not shown to simplify the diagram. Line thickness indicates approximate connection strength. OFC, orbitofrontal cortex. From Pessoa, 2008]

As Michael Anderson pointed out in a series of papers (2007a and b, among others), there is a many-to-many mapping between brain areas and cognitive functions. So the concept of "emotional areas" should be banned from the neuroeconomic vocabulary before it is too late.

Third, a point that has been neglected by much research on decision-making: neural activation of a particular brain area is explanatory only with regard to its contribution to understanding personal-level properties. If we learn that the anterior insula reacts to unfair offers, we are not singling out the function of this area, but explaining how the person's decision is responsive to a particular type of valuation. The basic unit of analysis of decisions is not neurons, but judgments. We may study sub-judgmental (e.g. neural) mechanisms and how they contribute to judgment formation; or we may study supra-judgmental mechanisms (e.g. reasoning) and how they articulate judgments. Emotions, as long as they are understood as affective reactions, are not judgments: they either contribute to judgments or are construed as judgments. In both cases, the category "emotions" seems superfluous for explaining the nature of the judgment itself. Thus, if judgments are the basic unit of analysis, brain areas are explanatory insofar as they make explicit how individuals arrive at a certain judgment, how it is implemented, etc.: what kind of neural computations are carried out? Take, for example, cooperation in the prisoner's dilemma. Imaging studies show that when high-psychopathy and low-psychopathy subjects choose to cooperate, different neural activity is observed: the former use more prefrontal areas than the latter, indicating that cooperation is more effortful for them (see this post). This is instructive: we learn something about the information processing, not about "emotions" or "reason".

In the end, we want to know how these mechanisms fix beliefs, desires and intentions: neuroeconomics can be informative as long as it aims at deciphering human natural rationality.


References
  • Anderson, M. L. (2007a). Evolution of Cognitive Function Via Redeployment of Brain Areas. Neuroscientist, 13(1), 13-21.
  • Anderson, M. L. (2007b). The Massive Redeployment Hypothesis and the Functional Topography of the Brain. Philosophical Psychology, 20(2), 143 - 174.
  • Fehr, E., & Camerer, C. F. (2007). Social Neuroeconomics: The Neural Circuitry of Social Preferences. Trends Cogn Sci.
  • Harris, S., Sheth, S. A., & Cohen, M. S. (2007). Functional Neuroimaging of Belief, Disbelief, and Uncertainty. Annals of Neurology.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural Foundations of Logical and Mathematical Cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
  • Spitzer, M., Fischbacher, U., Herrnberger, B., Gron, G., & Fehr, E. (2007). The Neural Signature of Social Norm Compliance. Neuron, 56(1), 185-196.




The bounded rationality of self-control

Rogue traders such as Jérôme Kerviel or Nick Leeson engage in criminal, fraudulent and high-risk financial activities that often result in huge losses ($7 billion for Kerviel) or financial catastrophe (the bankruptcy of the 233-year-old bank that employed Leeson). Why would anyone do that?

A popular answer is that money is like a drug, and that Kerviel behaved "like a financial drug addict". And truly, it is like a drug: we crave money and feel its rewarding properties when our subcortical areas light up as if we were having sex or eating Port-Royal Cupcakes (just reading the list of ingredients of the latter is enough for me!). Money hits our sweet spot, and elicits activity in emotional and emotion-related areas. Thus rogue traders are like cocaine addicts, unable to stop the never-ending search for the ultimate buzz.

This is fine, but incomplete and partly misleading. We all have temptations, drives, desires, emotions, addictions, etc., and some of us experience them more vividly. The interesting question is not how intense the money thrill is, but how weak self-control can be. By "self-control," I mean our vetoing capacity: what we exercise when we resist eating fatty food, smoking (oh, just one, I swear) another cigarette, insulting that person who laughs at us, flirting with that cute colleague, etc. Living in society requires that we regulate our behavior and, more often than not, do what we should do instead of what we want to do. It seems that rogue traders, like addicts and criminals, lack a certain capacity to implement self-control and normative regulation.

Traditional accounts of self-control construe this capacity as a cognitive, rational faculty. New developments in psychology suggest that it is more like a muscle than a cognitive process. If self-control is a cognitive process, activating it should speed up further self-control, since it becomes highly accessible; priming, for instance, speeds up recognition. If, on the contrary, self-control is a limited resource, using it should impair or slow down further self-control (since part of the resource will have been spent the first time). Many experiments support the second option: self-control and inhibitory control are limited resources, a phenomenon Roy Baumeister and his colleagues called ego depletion: the

temporary reduction in the self's capacity or willingness to engage in volitional action (including controlling the environment, controlling the self, making choices, and initiating action) caused by prior exercise of volition. (Baumeister et al., 1998, p. 1253)

For instance, subjects who have to suppress their emotions while watching an upsetting movie perform worse on the Stroop task (Inzlicht & Gutsell, 2007); EEG indicates less activity in the ACC in subjects who had to inhibit their affective reactions. Subjects who had to reluctantly eat radishes abandoned problem-solving earlier than subjects who ate chocolate willingly. Taking responsibility for and voluntarily producing a counterattitudinal speech (a speech that expresses an opinion contrary to its speaker's) also reduced perseverance; producing the speech without taking responsibility did not (Baumeister et al., 1998).

Self-control literally requires energy. Subjects asked to suppress facial reactions (e.g. smiles) when watching a movie have lower blood glucose levels afterwards, suggesting higher energy consumption. Control subjects (free to react however they wanted) had the same blood glucose levels before and after the movie, and performed better than the suppression subjects on a Stroop task. Restoring glucose levels with sugar-sweetened lemonade (instead of artificially sweetened beverages, without glucose) also increases performance. Self-control failures happen more often in situations where blood glucose levels are low. In a literature review, Gailliot and Baumeister show that lack of cognitive, behavioral and emotional control is systematically associated with hypoglycemia or hypoglycemic individuals: thought suppression, emotional inhibition, attention control, and refraining from criminal behavior are all impaired in individuals with low blood glucose (Gailliot & Baumeister, 2007).

The bottom line is: self-control takes energy and is a limited resource; immoral actions happen not only because people are emotionally driven toward certain rewards, but because, for one reason or another, their "mental brakes" cannot stop their drives. Knowing that, as rational agents we should allocate our self-control resources wisely: for example, by not putting ourselves in situations where we will have to spend our self-control without a good return on investment (in a utility-maximizing or moral sense).


References
  • Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego Depletion: Is the Active Self a Limited Resource? Journal of Personality and Social Psychology, 74(5), 1252-1265.
  • Gailliot, M. T., & Baumeister, R. F. (2007). The Physiology of Willpower: Linking Blood Glucose to Self-Control. Personality and Social Psychology Review, 11(4), 303-327.
  • Harris, S., Sheth, S. A., & Cohen, M. S. (2007). Functional Neuroimaging of Belief, Disbelief, and Uncertainty. Annals of Neurology.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural Foundations of Logical and Mathematical Cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Inzlicht, M., & Gutsell, J. N. (2007). Running on Empty: Neural Signals for Self-Control Failure. Psychological Science, 18(11), 933-937.
  • Pessoa, L. (2008). On the Relationship between Emotion and Cognition. Nat Rev Neurosci, 9(2), 148-158.



2/1/08

The Neuroeconomics of Social Norms: A Neo-Rationalist Account

A project I am working on, and my first LaTeX document (see here for more details on LaTeX):
If you cannot see the file here, download it there.



1/29/08

Moods and decision-making



In the latest issue of Judgment and Decision Making (free online), a sharp study by de Vries et al. illustrates how mood affects cognitive processing. After watching either a clip from The Muppet Show or a clip from Schindler's List, participants played the Iowa Gambling Task (see description here). People who watched the funny clip (the Muppets, as you might have guessed) scored better:
[at an early stage of the game] after experiencing the first losses in the bad decks, participants in a happy mood state outperformed participants in a sad mood state (de Vries et al., 2008 p. 48)
After a couple of trials, however, sad and happy subjects scored identically:


The authors do not argue that being in a good mood guarantees success, but suggest that certain moods may have adaptive value in certain situations (when the optimal choice requires analytical thinking, affective reactions seem more distracting). So next time you have a test, choose the right mood!





1/22/08

How to Play the Ultimatum Game? An Engineering Approach to Metanormativity

A paper I wrote with Paul Thagard has been accepted for publication in Philosophical Psychology:

Abstract. The ultimatum game is a simple bargaining situation where the behavior of people frequently contradicts classical game theory. Thus, the commonly observed behavior should be considered irrational. We argue that this putative irrationality stems from a wrong conception of metanormativity (the study of norms about the establishment of norms). After discussing different metanormative conceptions, we defend a Quinean, naturalistic approach to the evaluation of norms. After reviewing the empirical literature on the ultimatum game, we argue that the common behavior in the ultimatum game is rational and justified. We therefore suggest that the norms of economic rationality should be amended.
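For readers unfamiliar with the game-theoretic benchmark the abstract refers to, here is a minimal sketch (my own toy example, not taken from the paper): a responder who cares only about money accepts any positive offer, so backward induction tells the proposer to offer the smallest possible amount. The frequently observed rejection of low offers is the discrepancy at issue.

    # Toy sketch of the classical prediction for a one-shot ultimatum game
    # over a pie of 10 units (illustrative assumptions, not the paper's model).

    def responder_accepts(offer):
        """A purely money-maximizing responder accepts any positive offer."""
        return offer >= 1

    def proposer_best_offer(pie=10):
        """Backward induction: keep the most the responder will still accept."""
        for offer in range(0, pie + 1):
            if responder_accepts(offer):
                return pie - offer, offer  # (proposer's share, responder's share)

    print(proposer_best_offer())  # (9, 1): offer the minimal positive amount
    # Real responders, by contrast, frequently reject offers this low.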



1/16/08

New entry of the Stanford Encyclopedia of Philosophy

A new entry might interest decision-making specialists, ethicists and philosophers:

Decision-Making Capacity

In many Western jurisdictions, the law presumes that adult persons, and sometimes children that meet certain criteria, are capable of making their own health care decisions; for example, consenting to a particular medical treatment, or consenting to participate in a research trial. But what exactly does it mean to say that a subject has or lacks the requisite capacity to decide? This last question has to do with what is commonly called “decisional capacity,” a central concept in health care law and ethics, and increasingly an independent topic of philosophical inquiry.

Decisional capacity can be defined as the ability of health care subjects to make their own health care decisions. Questions of ‘capacity’ sometimes extend to other contexts, such as capacity to stand trial in a court of law, and the ability to make decisions that relate to personal care and finances. However, for the purposes of this discussion, the notion of decisional capacity will be limited to health care contexts only; most notably, those where decisions to consent to or refuse treatment are concerned.

The combined theoretical and practical nature of decisional capacity in the area of consent is probably one of the things that makes it so intellectually compelling to philosophers who write about it. But this is still largely uncultivated philosophical territory. One reason is the highly interdisciplinary and rapidly changing nature of the field. Clinical methods and tests to assess capacity are proliferating. The law is also increasingly being called upon to respond to these clinical developments. All of this makes for a very eclectic and challenging field of inquiry. Philosophers must tread carefully if their contributions are to be timely and relevant.





1/14/08

High price makes wine taste better

We already know that Pepsi labels make Pepsi taste better; now it seems that costly wine bottles make their content taste better...

Rangel used functional magnetic resonance imaging to observe the brains of 20 people as they were given the same Cabernet Sauvignon and told it cost anything from £2.50 to £45 a bottle. The subjects were asked to describe how pleasurable the wine was to drink, and most described the “higher-priced” wine as much more enjoyable.

The researchers observed changes in a part of the brain known as the medial orbito-frontal cortex, which plays a central role in many types of pleasure. They found that the cortex became more activated by the “expensive” wines than by the cheaper ones. This, said Rangel, showed that the increase in pleasure was real, even though the products were identical.



12/13/07

New draft paper on collective agency

Collective Agency: From Intuitions to Mechanisms (pdf)

Benoît Dubreuil & Benoît Hardy-Vallée

Abstract:

The debate on the nature of collective agency has been at the center of the philosophy of the social sciences for the last century. In recent years, philosophy of language has been the dominant approach to a debate that has often been reduced to the question of the legitimacy of interpreting collective agency on the basis of folk-psychological categories like belief and desire. In this article, we argue that the debate between individualists and collectivists is currently stagnating, but can be revived by a more empirically sensitive approach to agency. Understanding agents, collective or individual, requires an understanding of the mechanisms that bring about and maintain agency. Collective agents, we suggest, are legitimate constructs in social ontology, but their agency is special. Although they implement control mechanisms similar to those of individual agents, they do not have a conscious first-person point of view. Therefore, like individualists, we recognize the ontological salience of individual agency, and like collectivists, we recognize the soundness of collective agents. However, we reject the folk-psychological account of agency (shared by individualists and collectivists) and favor a mechanistic one.



12/12/07

Two New Papers on Natural Rationality

Hardy-Vallée, B. (forthcoming). Decision-Making in the Economy of Nature: Information as Value. In G. Terzis & R. Arp (Eds.), Information and Living Systems: Essays in Philosophy of Biology. Cambridge, MA: MIT Press.

This chapter analyzes and discusses one of the most important uses of information in the biological world: decision-making. I will first present a fundamental principle introduced by Darwin, the idea of an “economy of nature,” by which decision-making can be understood. Following this principle, I then argue that biological decision-making should be construed as goal-oriented, value-based information processing. I propose a value-based account of neural information, where information is primarily economic and relative to goal achievement. If living beings (I focus here on animals) are biological decision-makers, we may expect that their behavior would be coherent with the pursuit of certain goals (either ultimate or instrumental) and that their behavioral control mechanisms would be endowed with goal-directed and valuation mechanisms. These expectations, I argue, are supported by behavioral ecology and decision neuroscience. Together, they provide a rich, biological account of decision-making that should be integrated in a wider concept of ‘natural rationality’.


Hardy-Vallee B. (submitted) Natural Rationality and the Psychology of Decision: Beyond bounded and ecological rationality

Decision-making is usually a secondary topic in psychology, relegated to the last chapters of textbooks. It pictures decision-making mostly as a deliberative task and rationality as a matter of idealization. This conception also suggests that psychology should either document human failures to comply with rational-choice standards (bounded rationality) or detail how mental mechanisms are ecologically rational (ecological rationality). This conception, I argue, runs into many problems: descriptive (section 2), conceptual (section 3) and normative (section 4). I suggest that psychology and philosophy need another—wider—conception of rationality, that goes beyond bounded and ecological rationality (section 5).



12/6/07

Full reference

If by any chance you want to cite this paper, here is the full bibliographic record (meaning it's finally published!!):

Hardy-Vallee, B. (2007). Decision-Making: A Neuroeconomic Perspective. Philosophy Compass, 2(6), 939-953. http://dx.doi.org/10.1111/j.1747-9991.2007.00099.x



11/30/07

Values and regrets

Regret, from a decision-making point of view, is a counterfactual post-hoc valuation of a decision. Regret and rejoicing are two varieties of remembered utility. Without a doubt, neuroeconomics can be informative about the neural mechanisms of regret. As I once argued, however, we need a neuroeconomic account of valuation with clear distinctions between different processes, mechanisms and functions, and between the different contributions of neural structures. A nice example:

In this paper, it is said that "a cortical network, consisting of the medial orbitofrontal cortex, left superior frontal cortex, right angular gyrus, and left thalamus, correlates with the degree of regret. A different network, including the rostral anterior cingulate, left hippocampus, left ventral striatum, and brain stem/midbrain correlated with rejoice."

But, in another paper, regret, or its computational cousin, the "fictive learning signal," is characterized as an outcome of midbrain dopaminergic systems:

How to fit it all together? I am not sure yet, but here are two possibilities:

1-a "same-level" explanation: dopaminergic systems and cortical networks contribute together to the feeling, emotions, and processing of regret/rejoicing. They are two faces of the same coin.

2-a "different-level" explanation: dopaminergic systems and cortical networks are two layers in a hierarchical multi-level architecture (that may have other level, e.g. molecular, etc.).

suggestions, ideas?



11/29/07

Probability matching: A brief intro


Probability matching (PM) is a widely observed phenomenon in which subjects match the probability of their choices to the probability of reward in a stochastic context. For instance, suppose one has to choose between two sources of reward: one (A) that gives a reward on 70% of occasions, and the other (B) on 30%. The rational, utility-maximizing strategy is to always choose A. The matching strategy consists in choosing A on 70% of occasions and B on 30% of occasions. While the former leads to a reward 7 times out of 10, the latter will be rewarding only 5.8 times out of 10 [(0.7 x 0.7) + (0.3 x 0.3) = 0.58]. Clearly, the maximizing strategy outperforms the matching strategy.
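A quick way to check the arithmetic above (a minimal sketch in Python, using the 70/30 numbers from the example):

    # Expected hit rates for the two strategies in the 70/30 example above.
    p_a, p_b = 0.7, 0.3

    maximizing = max(p_a, p_b)          # always choose the richer source A
    matching = p_a * p_a + p_b * p_b    # choose A 70% of the time, B 30%

    print(f"maximizing: {maximizing:.2f}")  # 0.70
    print(f"matching:   {matching:.2f}")    # 0.58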

The maximizing strategy, however, is rarely found in the biological world. From bees to birds to humans, most animals match probabilities (Erev & Barron, 2005; Gallistel, 1990; Greggers & Menzel, 1993; Seth, 2001; Vulkan, 2000). In typical experiments with humans, subjects are asked to predict which light will flash (left or right, for instance) and get a monetary reward for every correct answer. Rats have to forage for food in a T-maze, pigeons press levers that deliver food pellets of different sizes with different probabilities, while bees forage on artificial flowers with different sucrose delivery rates. In all cases, the problem amounts to efficiently maximizing reward from various sources, and the most common solution is PM. (There are variations, but PM reliably predicts subjects' behavior.) Different probability distributions, rewards or context variations do not alter the results. Hence it is a particularly robust phenomenon, and a clear example of a discrepancy between standards of rationality and agents' behavior. Three different perspectives could then be adopted: 1) subjects are irrational; 2) subjects are boundedly rational and hence cannot avoid such mistakes; or 3) subjects are in fact ecologically rational and hence PM is not irrational.

According to the first one, mostly held in traditional normative economics and decision theory (e.g., Savage, 1954), this behavior is blatantly irrational. Rational agents rank possible actions according to the product of the probability and utility of the consequences of actions, and they choose those that maximize subjective expected utility. In opting for the matching strategy, subjects violate the axioms of decision theory, and hence their behavior cannot be rationalized. In other words, their preferences cannot be construed as maximizing a utility function: it is “an experimental situation which is essentially of an economic nature in the sense of seeking to achieve a maximum of expected reward, and yet the individual does not in fact, at any point, even in a limit, reach the optimal behavior” (Arrow, 1958, p. 14).

Another perspective, found in the "heuristics and biases" tradition (Kahneman et al., 1982; Kahneman & Tversky, 1979), also considers this behavior irrational but suggests why the pattern is so common. The boundedly rational mind cannot always compute subjective expected utilities and instead relies on simplifying tricks: heuristics. One heuristic that may explain human shortcomings in this case is representativeness: judging the likelihood of an outcome by the degree to which it is representative of a series. This is how the phenomenon known as the gambler's fallacy (the belief that an event is more likely to occur because it has not happened for a period of time) may be explained: "there were five heads in a row; there cannot be another one!" This heuristic may also explain why subjects match probabilities: even if the 70% source was rewarding in the last rounds, it seems better to try the 30% source a little in order to maximize reward. Hence PM is irrational, but this irrationality is excusable, albeit without any particular significance.

The third perspective, which could be named either "ecological rationality" or "evolutionary psychology" (Barkow et al., 1992; Cosmides & Tooby, 1996; Gigerenzer, 2000; Gigerenzer et al., 1999), argues instead that humans and animals are not really irrational, but adapted to certain ecological conditions whose absence explains apparent irrationality. Ecologically rational heuristics are not erroneous processes, but mechanisms tailored to fit both the structure of the environment and the mind: they are fast, frugal and smart. PM can be rational in some contexts and irrational in others: when animals are foraging and competing with conspecifics for resources, PM is the optimal strategy, as illustrated by Gigerenzer & Fiedler:

(…) if one considers a natural environment in which animals are not as socially isolated as in a T-maze and in which they compete with one another for food, the situation looks different. Assume that there are a large number of rats and two patches, left and right, with an 80:20 distribution of food. If all animals maximized on an individual level, then they all would end up in the left part, and the few deviating from this rule might have an advantage. Under appropriate conditions, one can show that probability matching is the more rational strategy in a socially competitive environment (G. Gigerenzer & Fiedler, forthcoming)


This pattern of behavior and spatial distribution corresponds to the Ideal Free Distribution (IFD) model used in behavioral ecology (Weber, 1998). Derived from optimal foraging theory (Stephens & Krebs, 1986), the IFD predicts that the distribution of individuals between food patches will match the distribution of resources, a pattern observed on many occasions in animals and humans (Grand, 1997; Harper, 1982; Lamb & Ollason, 1993; Madden et al., 2002; Sokolowski et al., 1999).

There are of course discrepancies between the model and the observed behavior, but foraging groups tend to approximate the IFD. This supports the claim that PM is a rational heuristic only in a socially competitive environment: it could also be construed as a mixed-strategy Nash equilibrium in a multiplayer repeated game (Glimcher, 2003, p. 295) or as an evolutionarily stable strategy, that is, a strategy that could not be invaded by a competing strategy in a population that adopts it (Gallistel, 2005). Seth's simulations (in press) showed that a simple behavioral rule may account for both individual and collective matching behavior. The toy example below illustrates why matching is an equilibrium under competition.
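Here is that toy sketch (my own illustrative assumptions, not code from Gigerenzer & Fiedler or Seth): when a group of foragers shares two patches whose resource inflow stands in an 80:20 ratio, per-capita intake is equalized only when the group itself splits 80:20. Any other split leaves one patch more profitable per head, so individuals gain by moving, which is the sense in which matching is an equilibrium under competition.

    # Toy ideal-free-distribution example: 100 foragers, two patches with
    # resources in an 80:20 ratio. Each patch is shared equally among the
    # foragers occupying it (illustrative assumptions only).

    def per_capita_intake(resources, foragers):
        """Resource inflow of each patch divided by the number of foragers on it."""
        return [r / n if n > 0 else float("inf") for r, n in zip(resources, foragers)]

    resources = [80.0, 20.0]
    group_size = 100

    for n_rich in (100, 90, 80, 70):  # foragers sitting on the rich patch
        split = [n_rich, group_size - n_rich]
        intakes = [round(x, 2) for x in per_capita_intake(resources, split)]
        print(split, intakes)

    # [100, 0] [0.8, inf]   -> an empty patch is worth invading
    # [90, 10] [0.89, 2.0]  -> the poorer patch pays better per head; foragers move
    # [80, 20] [1.0, 1.0]   -> intakes equalized: the 80:20 (matching) split
    # [70, 30] [1.14, 0.67] -> overshooting pays off unevenly again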


  • Arrow, K. J. (1958). Utilities, Attitudes, Choices: A Review Note. Econometrica, 26(1), 1-23.
  • Barkow, J. H., Cosmides, L., & Tooby, J. (1992). The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press.
  • Cosmides, L., & Tooby, J. (1996). Are Humans Good Intuitive Statisticians after All? Rethinking Some Conclusions from the Literature on Judgment under Uncertainty. Cognition, 58, 1-73.
  • Erev, I., & Barron, G. (2005). On Adaptation, Maximization, and Reinforcement Learning among Cognitive Strategies. Psychol Rev, 112(4), 912-931.
  • Gallistel, C. R. (1990). The Organization of Learning. Cambridge: MIT Press.
  • Gallistel, C. R. (2005). Deconstructing the Law of Effect. Games and Economic Behavior, 52(2), 410-423.
  • Gigerenzer, G. (2000). Adaptive Thinking: Rationality in the Real World. New York: Oxford University Press.
  • Gigerenzer, G., & Fiedler, K. (forthcoming). Minds in Environments: The Potential of an Ecological Approach to Cognition. Manuscript submitted for publication.
  • Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999). Simple Heuristics That Make Us Smart. New York: Oxford University Press.
  • Glimcher, P. W. (2003). Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics. Cambridge, MA: MIT Press.
  • Grand, T. C. (1997). Foraging Site Selection by Juvenile Coho Salmon: Ideal Free Distributions of Unequal Competitors. Animal Behaviour, 53(1), 185-196.
  • Greggers, U., & Menzel, R. (1993). Memory Dynamics and Foraging Strategies of Honeybees. Behavioral Ecology and Sociobiology, 32(1), 17-29.
  • Harper, D. G. C. (1982). Competitive Foraging in Mallards: 'Ideal Free' Ducks. Animal Behaviour, 30(2), 575-584.
  • Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
  • Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47, 263-291.
  • Lamb, A. E., & Ollason, J. G. (1993). Foraging Wood-Ants Formica Aquilonia Yarrow (Hymenoptera: Formicidae) Tend to Adopt the Ideal Free Distribution. Behavioural Processes, 28(3), 189-198.
  • Madden, G. J., Peden, B. F., & Yamaguchi, T. (2002). Human Group Choice: Discrete-Trial and Free-Operant Tests of the Ideal Free Distribution. J Exp Anal Behav, 78(1), 1-15.
  • Savage, L. J. (1954). The Foundations of Statistics. New York: Wiley.
  • Seth, A. K. (2001). Modeling Group Foraging: Individual Suboptimality, Interference, and a Kind of Matching. Adaptive Behavior, 9(2), 67-89.
  • Seth, A. K. (in press). The Ecology of Action Selection: Insights from Artificial Life. Phil. Trans. Roy. Soc. B. , http://www.nsi.edu/users/seth/Papers/Seth_PTRS.pdf.
  • Sokolowski, M. B., Tonneau, F., & Freixa i Baque, E. (1999). The Ideal Free Distribution in Humans: An Experimental Test. Psychon Bull Rev, 6(1), 157-161.
  • Stephens, D. W., & Krebs, J. R. (1986). Foraging Theory. Princeton, N.J.: Princeton University Press.
  • Vulkan, N. (2000). An Economist's Perspective on Probability Matching. Journal of Economic Surveys, 14(1), 101-118.
  • Weber, T. P. (1998). News from the Realm of the Ideal Free Distribution. Trends in Ecology & Evolution, 13(3), 89-90.




I'm so sad I would accept anything

No, I am good, thanks. I just found a new study on the Ultimatum Game by Harlé & Sanfey showing that people in a sad (but not in a happy) mood reject more 'unfair' offers:


Our findings are consistent with our initial hypothesis, namely that sadness may focus the responder's attention on the negative emotional consequences of unfair offers rather than the positive impact of accepting such offers (i.e., monetary reward), thereby prompting lower acceptance rates of unfair offers. In addition, although information processing was not explicitly tested, our findings are consistent with motivational theories on the processing consequences of affect, whereby sadness is likely to promote a more vigilant processing style, reflecting a motivation to enhance the processing of information related to potentially threatening and harmful situations (Forgas, 2003). Such enhanced processing would again make individuals in a sad mood more likely to focus on the threatening aspect of being treated unfairly (in contrast with individuals in neutral or positive moods), thus potentially leading to more rejections of these unfair offers.

Harlé, K. M., & Sanfey, A. G. (2007). Incidental sadness biases social economic decisions in the Ultimatum Game. Emotion, 7(4), 876-881.



11/27/07

Turkey, cooperation and serotonin

Tryptophan is a chemical precursor to serotonin that can be found in turkey. A recent study suggests that depleting it could reduce cooperation in the prisoner's dilemma:

half of the volunteers were given a drink that depleted their tryptophan levels prior to the start of the game, thereby decreasing serotonin levels in their brain. Rogers and his team found that dampening serotonin activity significantly decreased the level of cooperation among the players, and that this group also rated fellow players as less trustworthy. "The findings suggest that a serotonin deficit might impair sustained cooperation," says Rogers.[technologyreview]




11/13/07

Decision-Making in Robotics and Psychology: A Distributed Account

Forthcoming in a special issue of New Ideas in Psychology on Cognitive Robotics & Theoretical Psychology, edited by Tom Ziemke & Mark Bickhard:

Hardy-Vallée, B. (in press). Decision-Making in Robotics and Psychology: A Distributed Account. New Ideas in Psychology

Decision-making is usually a secondary topic in psychology, relegated to the last chapters of textbooks. The psychological study of decision-making assumes a certain conception of its nature and mechanisms that has been shown wrong by research in robotics. Robotics indicates that decision-making is not—or at least not only—an intellectual task, but also a process of dynamic behavioral control, mediated by embodied and situated sensorimotor interaction. The implications of this conception for psychology are discussed.
[PDF]



11/11/07

Decision-Making: A Neuroeconomic Perspective

I am pleased to announce that the following paper is now available as an OnlineEarly Article:


Abstract. This article introduces and discusses from a philosophical point of view the nascent field of neuroeconomics, which is the study of neural mechanisms involved in decision-making and their economic significance. Following a survey of the ways in which decision-making is usually construed in philosophy, economics, and psychology, I review many important findings in neuroeconomics to show that they suggest a revised picture of decision-making and ourselves as choosing agents. Finally, I outline a neuroeconomic account of irrationality.