Natural Rationality | decision-making in the economy of nature
Showing posts with label rationality. Show all posts

3/11/08

The bounded rationality of self-control

ResearchBlogging.org Rogue traders such as Jérôme Kerviel or Nick Leeson engage in criminal, fraudulent and high-risk financial activities that often result in huge losses ($7 billion for Kerviel) or financial catastrophe (the bankruptcy of the 233-year-old bank that employed Leeson). Why would anyone do that?

A popular answer is that money is like a drug, and that Kerviel behaved "like a financial drug addict". And truly, it is like one. We crave money and feel its rewarding properties as our subcortical areas light up as if we were having sex or eating Port-Royal Cupcakes (just reading the list of ingredients of the latter is enough for me!). Money hits our sweet spot and elicits activity in emotional and emotion-related areas. Thus rogue traders are like cocaine addicts, unable to stop the never-ending search for the ultimate buzz.

This is fine, but incomplete and partly misleading. We all have temptations, drives, desires, emotions, addictions, etc., and some of us experience them more vividly. The interesting question is not how intense the money thrill is, but how weak self-control can be. By "self-control", I mean our vetoing capacity: what we exercise when we resist eating fatty food, smoking (oh, just one, I swear) another cigarette, insulting the person who laughs at us, flirting with a cute colleague, etc. Living in society requires that we regulate our behavior and, more often than not, do what we should do instead of what we want to do. It seems that rogue traders, like addicts and criminals, lack a certain capacity to implement self-control and normative regulation.

Traditional accounts of self-control construe this capacity as a cognitive, rational faculty. New developments in psychology suggest that it is more like a muscle than a cognitive process. If self-control were a cognitive process, activating it should speed up further self-control, since it becomes highly accessible; priming, for instance, speeds up recognition. If, on the contrary, self-control is a limited resource, using it should impair or slow down further self-control (since part of the resource is spent the first time). Many experiments support the second option: self-control and inhibitory control are limited resources, a phenomenon Roy Baumeister and his colleagues called ego depletion: the

temporary reduction in the self's capacity or willingness to engage in volitional action (including controlling the environment, controlling the self, making choices, and initiating action) caused by prior exercise of volition. (Baumeister et al., 1998, p. 1253)

For instance, subjects who have to suppress their emotions while watching an upsetting movie perform worse on the Stroop task (Inzlicht & Gutsell, 2007). EEG indicates less activity in the ACC in subjects who had to inhibit their affective reactions. Subjects who had to reluctantly eat radishes abandoned problem-solving earlier than subjects who ate chocolate willingly. Taking responsibility for and voluntarily producing a counterattitudinal speech (a speech that expresses an opinion contrary to the speaker's) also reduced perseverance; producing the speech without taking responsibility did not (Baumeister et al., 1998).

Self-control literally requires energy. Subjects asked to suppress facial reactions (e.g. smiles) when watching a movie have lower blood glucose levels afterward, suggesting higher energy consumption. Control subjects (free to react however they wanted) had the same blood glucose levels before and after the movie, and performed better than the suppression subjects on a Stroop task. Restoring glucose levels with sugar-sweetened lemonade (instead of artificially-sweetened beverages, without glucose) also increases performance. Self-control failures happen more often in situations where blood glucose is low. In a literature review, Gailliot and Baumeister show that lack of cognitive, behavioral and emotional control is systematically associated with hypoglycemia or hypoglycemic individuals. Thought suppression, emotional inhibition, attention control, and refraining from criminal behavior are all impaired in individuals with low blood glucose (Gailliot & Baumeister, 2007).

The bottom line is: self-control takes energy and is a limited resource; immoral actions happen not only because people are emotionally driven toward certain rewards, but because, for one reason or another, their "mental brakes" cannot stop their drives. Knowing that, as rational agents we should wisely allocate our self-control resources: for example, by not putting ourselves in situations where we will have to spend our self-control without a good (in a utility-maximizing or moral sense) return on investment.


References
  • Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego Depletion: Is the Active Self a Limited Resource? Journal of Personality and Social Psychology, 74(5), 1252-1265.
  • Gailliot, M. T., & Baumeister, R. F. (2007). The Physiology of Willpower: Linking Blood Glucose to Self-Control. Personality and Social Psychology Review, 11(4), 303-327.
  • Harris, S., Sheth, S. A., & Cohen, M. S. (2007). Functional Neuroimaging of Belief, Disbelief, and Uncertainty. Annals of Neurology (advance online publication).
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural Foundations of Logical and Mathematical Cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Inzlicht, M., & Gutsell, J. N. (2007). Running on Empty: Neural Signals for Self-Control Failure. Psychological Science, 18(11), 933-937.
  • Pessoa, L. (2008). On the Relationship between Emotion and Cognition. Nat Rev Neurosci, 9(2), 148-158.



1/23/08

Circular altruism at Overcoming Bias

I am a big fan of Overcoming Bias, a blog whose authors "want to avoid, or at least minimize, the startling systematic mistakes [in human reasoning] that science is discovering. If we know the common patterns of error or self-deception, maybe we can work around them ourselves, or build social structures for smarter groups. We know we aren't perfect, and can't be perfect, but trying is better than not trying."

I just wanted to recommend one of their posts, "Circular Altruism", which explores the relationships between probabilistic reasoning, its biases, and altruism.



1/22/08

How to Play the Ultimatum Game? An Engineering Approach to Metanormativity

A paper I wrote with Paul Thagard has been accepted for publication in Philosophical Psychology:

Abstract. The ultimatum game is a simple bargaining situation where the behavior of people frequently contradicts classical game theory. Thus, the commonly observed behavior should be considered irrational. We argue that this putative irrationality stems from a wrong conception of metanormativity (the study of norms about the establishment of norms). After discussing different metanormative conceptions, we defend a Quinean, naturalistic approach to the evaluation of norms. After reviewing empirical literature on the ultimatum game, we argue that the common behavior in the ultimatum game is rational and justified. We therefore suggest that the norms of economic rationality should be amended.



12/5/07

The Present and Future of Artificial Rationality

A little digression from topics usually discussed here, but I thought it might be interesting.

First, what can robots do today? Well, a lot of things. The winner of a recent contest sponsored by iRobot can:

- retrieve water
- water plants
- fill a dog water bowl
- control a VCR and TV
- turn lights and other appliances on/off
- play music
- dance and entertain
- provide mobile video security for the home
- remind the elderly to take medicine

the perfect roommate!


Look at how credible humanoid robots are getting. Here are Sony's dancing robots:
(YouTube)



an "actroid" robot: (youtube source)



Did you feel weird when you saw these? That is what roboticists call the uncanny valley:

as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong repulsion. However, as the appearance and motion continue to become less distinguishable from a human being, the emotional response becomes positive once more and approaches human-to-human empathy levels.


It may have to do with our folk psychology. Gray et al. (2007) conducted a series of experiments aimed at understanding our intuitive conception of "mindedness" (see this post). Subjects answered a web survey involving pairwise comparisons, on a five-point scale, of different human and non-human characters on many mental capacities. For instance, they had to decide who, between a baby and a chimpanzee, could feel more pain. The experimenters then identified the factors able to account for the results. The statistical analysis (a principal component factor analysis) revealed that the similarity space of the "agent" concept is made of an Agency dimension (self-control, planning, communication, etc.) and an Experience dimension (pain, pleasure, consciousness, etc.). Babies, dogs and chimpanzees score high on Experience but low on Agency; adult humans score high on both dimensions, while God scores high on Agency only. Robots have a middle score on Agency (between dead people and God) and a null score on Experience. So the uncanny feeling might have something to do with the fact that we usually feel revulsion toward dead bodies; a humanoid robot might trigger the same mechanisms.



Honda's Asimo (less uncanny) is also getting smarter: see how (he? she? it?) navigates its environment and interacts with humans.

(youtube source)



What about the future?

According to David Levy, in his new book Love and Sex with Robots: The Evolution of Human-Robot Relationships, people will one day have sex with robots (remember Jude Law in Artificial Intelligence?).

"There will be different personalities and different likes and dislikes," he said. "When you buy your robot, you'll be able to select what kind of personality it will have. It'll be like ordering something on the Internet. What kind of emotional makeup will it have? How it should look. The size and hair color. The sound of its voice. Whether it's funny, emotional, conservative.

(source)


When? In about 30 years. And that is where I start laughing (loudly). Since the beginning of AI, researchers have always promised that within the next 30 or 50 years machines will be thinking. See what Turing said in 1950:

I believe that in about fifty years' time it will be possible to programme computers (...) to make them play the imitation game [the Turing test] so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning


Find an old Scientific American or any other science journal, and you will see that computer scientists routinely make the same predictions. Always between 30 and 50 years: enough time to be forgotten if you're wrong, but just enough to still be there and say "I told you so!" if you're right.

Not that I don't believe that machines can be conscious or intelligent. I do. We are all conscious evolved machines, so a priori there is no reason why AI (or, more probably, A-Life) could not succeed. The point is just that people should remember that futurology is not an exact science...

One thing, however, that I am ready to put my money on is the future of robotic warfare. Trust the army for that: if robots can be an advantage in a war, they will be developed. A recent article in the Armed Forces Journal envisioned futuristic scenarios of robotic warfare (not for futurology's sake, however, but for ethics). I liked the conclusion, though:

There are no new ethical dilemmas created by robots being used today. Every system is either under continuous, direct control by humans, or performing limited semiautonomous actions that are dictated by humans. These robots represent nothing more than highly sophisticated weapons, and the user retains full responsibility for their implementation.

The potential for a new ethical dilemma to emerge comes from the approaching capability to create completely autonomous robots. As we advance across the field of possibilities from advanced weapons to semiautonomous weapons to completely autonomous weapons, we need to understand the ethical implications involved in building robots that can make independent decisions. We must develop a distinction between weapons that augment our soldiers and those that can become soldiers. Determining where to place responsibility can begin only with a clear definition of who is making the decisions.

It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework. Without the moral framework, its creator and operator will always be the focus of responsibility for the robot’s actions. With or without a moral framework, a fully autonomous decision-maker will be responsible for its actions. For it to make the best moral decisions, it must be equipped with guilt parameters that guide its decision-making cycle while inhibiting its ability to make wrong decisions. Robots must represent our best philosophy or remain in the category of our greatest tools.










11/29/07

Probability matching: A brief intro

Blogging on Peer-Reviewed Research

Probability matching (PM) is a widely observed phenomenon in which subjects match the probability of choices with the probability of reward in a stochastic context. For instance, suppose one has to choose between two sources of reward: one (A) that gives reward on 70% of the occasions, and the other (B) on 30%. The rational, utility-maximizing strategy is to always choose A. The matching strategy consists in choosing A on 70% of the occasions and B on 30% of the occasions. While the former leads to a reward 7 times out of 10, the latter will be rewarding only 5.8 times out of 10 [(0.7 x 0.7) + (0.3 x 0.3) = 0.58]. Clearly, the maximizing strategy outperforms the matching strategy.
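The arithmetic above can be checked directly. A minimal sketch, using the 70/30 rates of the example in the text:

```python
# Expected hit rates for the two strategies on a task where source A
# rewards on 70% of occasions and source B on 30%.
p_a, p_b = 0.7, 0.3

# Maximizing: always choose A, so the expected hit rate is simply p_a.
maximizing = p_a

# Matching: choose A with probability 0.7 and B with probability 0.3;
# a choice pays off only when it lands on a rewarded occasion.
matching = p_a * p_a + p_b * p_b

print(maximizing)          # prints 0.7
print(round(matching, 4))  # prints 0.58
```

The 0.12 gap (0.7 vs. 0.58) is the cost of matching, and it holds for any probabilities other than 50/50.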

The maximizing strategy, however, is rarely found in the biological world. From bees to birds to humans, most animals match probabilities (Erev & Barron, 2005; Gallistel, 1990; Greggers & Menzel, 1993; Seth, 2001; Vulkan, 2000). In typical experiments with humans, subjects are asked to predict which light will flash (left or right, for instance) and receive a monetary reward for every correct answer. Rats have to forage for food in a T-maze, pigeons press levers that deliver food pellets of different sizes with different probabilities, while bees forage on artificial flowers with different sucrose delivery rates. In all cases, the problem amounts to efficiently maximizing reward from various sources, and the most common solution is PM. (There are variations, but PM reliably predicts subjects' behavior.) Different probability distributions, rewards or context variations do not alter the results. Hence it is a particularly robust phenomenon, and a clear example of a discrepancy between standards of rationality and agents' behavior. Three different perspectives could then be adopted: 1) subjects are irrational; 2) subjects are boundedly rational and hence cannot avoid such mistakes; or 3) subjects are in fact ecologically rational and hence PM is not irrational.

According to the first one, mostly held in traditional normative economics and decision theory (e.g., Savage, 1954), this behavior is blatantly irrational. Rational agents rank possible actions according to the product of the probability and utility of the consequences of actions, and they choose those that maximize subjective expected utility. In opting for the matching strategy, subjects violate the axioms of decision theory, and hence their behavior cannot be rationalized. In other words, their preferences cannot be construed as maximizing a utility function: it is “an experimental situation which is essentially of an economic nature in the sense of seeking to achieve a maximum of expected reward, and yet the individual does not in fact, at any point, even in a limit, reach the optimal behavior” (Arrow, 1958, p. 14).

Another perspective, found in the "heuristics and biases" tradition (Kahneman et al., 1982; Kahneman & Tversky, 1979), also considers the behavior irrational but suggests why this particular pattern is so common. The boundedly rational mind cannot always compute subjective expected utilities, so it relies on simplifying tricks: heuristics. One heuristic that may explain human shortcomings in this case is representativeness: judging the likelihood of an outcome by the degree to which it is representative of a series. This is how the phenomenon known as the gambler's fallacy (the belief that an event is more likely to occur because it has not happened for a period of time) may be explained: "there were five heads in a row; there cannot be another one!" This heuristic may also explain why subjects match probabilities: they feel that if the 70% source was rewarding in the last round, it is now the 30% source's turn to pay off. Hence PM is irrational, but this irrationality is excusable, albeit without any particular significance.

The third perspective, which could be named either "ecological rationality" or "evolutionary psychology" (Barkow et al., 1992; Cosmides & Tooby, 1996; Gigerenzer, 2000; Gigerenzer et al., 1999), argues instead that humans and animals are not really irrational, but adapted to certain ecological conditions whose absence explains apparent irrationality. Ecologically rational heuristics are not erroneous processes, but mechanisms tailored to fit both the structure of the environment and the mind: they are fast, frugal and smart. PM can be rational in some contexts and irrational in others: when animals are foraging and competing with conspecifics for resources, PM is the optimal strategy, as illustrated by Gigerenzer & Fiedler:

(…) if one considers a natural environment in which animals are not as socially isolated as in a T-maze and in which they compete with one another for food, the situation looks different. Assume that there are a large number of rats and two patches, left and right, with an 80:20 distribution of food. If all animals maximized on an individual level, then they all would end up in the left part, and the few deviating from this rule might have an advantage. Under appropriate conditions, one can show that probability matching is the more rational strategy in a socially competitive environment (G. Gigerenzer & Fiedler, forthcoming)


This pattern of behavior and spatial distribution corresponds to the Ideal Free Distribution (IFD) model used in behavioral ecology (Weber, 1998). Derived from optimal foraging theory (Stephens & Krebs, 1986), the IFD predicts that the distribution of individuals between food patches will match the distribution of resources, a pattern observed on many occasions in animals and humans (Grand, 1997; Harper, 1982; Lamb & Ollason, 1993; Madden et al., 2002; Sokolowski et al., 1999).

There are of course discrepancies between the model and the observed behavior, but foraging groups tend to approximate the IFD. This supports the claim that PM is a rational heuristic only in a socially competitive environment: it could also be construed as a mixed-strategy Nash equilibrium in a multiplayer repeated game (Glimcher, 2003, p. 295) or as an evolutionarily stable strategy, that is, a strategy that could not be invaded by a competing strategy in a population that adopts it (Gallistel, 2005). Seth's simulations (in press) showed that a simple behavioral rule may account for both individual and collective matching behavior.
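The competitive-foraging argument can be illustrated with a little arithmetic (a toy sketch under assumed numbers, not Gigerenzer & Fiedler's actual simulation): with 100 foragers and an 80:20 food distribution, everyone crowding the rich patch yields less per individual than the matched, IFD-like distribution.

```python
# Toy illustration of why matching can pay in a competitive environment:
# 100 foragers, two patches delivering food in an 80:20 ratio per time step.
n_agents = 100
rich, poor = 80.0, 20.0

def per_capita(n_rich):
    """Per-capita intake on each patch, given n_rich agents on the rich patch."""
    n_poor = n_agents - n_rich
    return (rich / n_rich if n_rich else 0.0,
            poor / n_poor if n_poor else 0.0)

# Everyone "maximizes" individually and crowds the rich patch:
crowded = per_capita(100)   # (0.8, 0.0)

# The population matches the 80:20 resource distribution (the IFD):
matched = per_capita(80)    # (1.0, 1.0)

print(crowded, matched)
```

Under crowding each forager gets 0.8 units, and a lone deviant on the empty patch would get all 20; at the matched distribution intake is equalized at 1.0 and no one gains by switching, which is why this distribution is stable.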


  • Arrow, K. J. (1958). Utilities, Attitudes, Choices: A Review Note. Econometrica, 26(1), 1-23.
  • Barkow, J. H., Cosmides, L., & Tooby, J. (1992). The Adapted Mind : Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press.
  • Cosmides, L., & Tooby, J. (1996). Are Humans Good Intuitive Statisticians after All? Rethinking Some Conclusions from the Literature on Judgment under Uncertainty. Cognition, 58, 1-73.
  • Erev, I., & Barron, G. (2005). On Adaptation, Maximization, and Reinforcement Learning among Cognitive Strategies. Psychol Rev, 112(4), 912-931.
  • Gallistel, C. R. (1990). The Organization of Learning. Cambridge: MIT Press.
  • Gallistel, C. R. (2005). Deconstructing the Law of Effect. Games and Economic Behavior, 52(2), 410-423.
  • Gigerenzer, G. (2000). Adaptive Thinking : Rationality in the Real World. New York: Oxford University Press.
  • Gigerenzer, G., & Fiedler, K. (forthcoming). Minds in Environments: The Potential of an Ecological Approach to Cognition. Manuscript submitted for publication.
  • Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999). Simple Heuristics That Make Us Smart. New York: Oxford University Press.
  • Glimcher, P. W. (2003). Decisions, Uncertainty, and the Brain : The Science of Neuroeconomics. Cambridge, Mass. ; London: MIT Press.
  • Grand, T. C. (1997). Foraging Site Selection by Juvenile Coho Salmon: Ideal Free Distributions of Unequal Competitors. Animal Behaviour, 53(1), 185-196.
  • Greggers, U., & Menzel, R. (1993). Memory Dynamics and Foraging Strategies of Honeybees. Behavioral Ecology and Sociobiology, V32(1), 17-29.
  • Harper, D. G. C. (1982). Competitive Foraging in Mallards: "Ideal Free' Ducks. Animal Behaviour, 30(2), 575-584.
  • Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under Uncertainty : Heuristics and Biases. Cambridge ; New York: Cambridge University Press.
  • Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica 47, 263-291.
  • Lamb, A. E., & Ollason, J. G. (1993). Foraging Wood-Ants Formica Aquilonia Yarrow (Hymenoptera: Formicidae) Tend to Adopt the Ideal Free Distribution. Behavioural Processes, 28(3), 189-198.
  • Madden, G. J., Peden, B. F., & Yamaguchi, T. (2002). Human Group Choice: Discrete-Trial and Free-Operant Tests of the Ideal Free Distribution. J Exp Anal Behav, 78(1), 1-15.
  • Savage, L. J. (1954). The Foundations of Statistics. New York,: Wiley.
  • Seth, A. K. (2001). Modeling Group Foraging: Individual Suboptimality, Interference, and a Kind of Matching. Adaptive Behavior, 9(2), 67-89.
  • Seth, A. K. (in press). The Ecology of Action Selection: Insights from Artificial Life. Phil. Trans. Roy. Soc. B. , http://www.nsi.edu/users/seth/Papers/Seth_PTRS.pdf.
  • Sokolowski, M. B., Tonneau, F., & Freixa i Baque, E. (1999). The Ideal Free Distribution in Humans: An Experimental Test. Psychon Bull Rev, 6(1), 157-161.
  • Stephens, D. W., & Krebs, J. R. (1986). Foraging Theory. Princeton, N.J.: Princeton University Press.
  • Vulkan, N. (2000). An Economist's Perspective on Probability Matching. Journal of Economic Surveys, 14(1), 101-118.
  • Weber, T. P. (1998). News from the Realm of the Ideal Free Distribution. Trends in Ecology & Evolution, 13(3), 89-90.




10/23/07

Exploration, Exploitation and Rationality

A little introduction to what I consider to be the Mother of All Problems: the exploration-exploitation trade-off.

Let's first draw a distinction between first- and second-order uncertainty. Knowing that a source of reward (or money, or food, etc.) will be rewarding on 70% of occasions is uncertain knowledge, because one does not know for sure what the next outcome will be (one only knows that there is a 70% probability that it is a reward). In some situations, however, uncertainty can be radical, or second-order: even the probabilities are unknown. Under radical uncertainty, cognitive agents must learn reward probabilities. Learners must, at the same time, explore their environment in order to gather information about its payoff structure and exploit this information to obtain reward. They face a deep problem, known as the exploration/exploitation tradeoff, because they cannot do both at the same time: you cannot explore all the time, you cannot exploit all the time, and you must reduce exploration but cannot eliminate it. This tradeoff is usually modeled with the K-armed bandit problem.

Suppose an agent has n coins to spend in a slot machine with K arms (here K=2, and we will suppose that one arm is high-paying and the other low-paying, although the agent does not know that). The only way the agent can access the arms' rates of payment, and obtain reward, is by pulling them. Hence she must find an optimal tradeoff when spending her coins: try another arm just to see how it pays, or stay with the one that already paid? The goal is not only to maximize reward, but to maximize reward while obtaining information about the arms' rates. The process can go wrong in two different ways: the player can be the victim either of a false negative (a low-paying sequence from the high-paying arm) or of a false positive (a high-paying sequence from the low-paying arm).
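The tradeoff is easy to feel in code. Here is a minimal epsilon-greedy learner on a two-armed bandit, a standard heuristic from the reinforcement-learning literature rather than the optimal index policy discussed next; the 0.7/0.3 payoff rates and the 10% exploration rate are assumptions for illustration.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

p = [0.7, 0.3]       # true payoff probabilities, unknown to the agent
counts = [0, 0]      # pulls per arm
values = [0.0, 0.0]  # running estimates of each arm's payoff rate
epsilon = 0.1        # fraction of pulls spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(2)                  # explore: pick a random arm
    else:
        arm = 0 if values[0] >= values[1] else 1   # exploit: pick the best estimate
    reward = 1 if random.random() < p[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(counts, [round(v, 2) for v in values])
```

Note that exploration never stops (epsilon stays at 0.1), so the agent keeps paying an information cost even after its estimates have converged: that residual cost is exactly the tradeoff described above.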

The optimal solution to this problem is to compute an index for every arm, update this index according to the arm's payoff, and choose the arm with the greater index (Gittins, 1989). In the long run, this strategy amounts to following decision theory after a learning phase. But as soon as switching from one arm to another has a cost, as Banks & Sundaram (1994) showed, index strategies cannot converge toward an optimal solution. A huge literature in optimization theory, economics, management and machine learning addresses this problem (Kaelbling et al., 1996; Sundaram, 2003; Tackseung, 2004). Studies of humans and animals explicitly submitted to bandit problems, however, show that subjects tend to rely on the matching strategy (Estes, 1954): they match the probability of action with the probability of reward. In one study, for instance (Meyer & Shi, 1995), subjects were required to select between two icons displayed on a computer screen; after each selection, a slider bar indicated the actual amount of reward obtained. The matching strategy predicted the subjects' behavior, and the same results hold for monkeys in a similar task (Bayer & Glimcher, 2005; Morris et al., 2006).

The important thing about this tradeoff is its lack of a priori solutions. Decision theory works well when we know the probabilities and the utilities, but what can we do when we don't have them? We learn. This is the heart of natural rationality: crafting solutions, under radical uncertainty and in non-stationary environments, for problems that may not have an optimal solution. Going from second- to first-order uncertainty.
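One simple way to picture "going from second- to first-order uncertainty" is Bayesian updating of a reward probability, sketched here with a Beta-Bernoulli learner (the 0.7 rate is assumed, as in the earlier examples; this is an illustration, not a model from the cited literature):

```python
import random

random.seed(1)  # reproducible run

true_rate = 0.7  # the probability the agent must learn
a, b = 1, 1      # Beta(1, 1): a flat prior, i.e. pure second-order uncertainty

for _ in range(1000):  # each pull shifts probability mass toward the truth
    if random.random() < true_rate:
        a += 1         # success
    else:
        b += 1         # failure

# Mean and variance of the Beta(a, b) posterior over the unknown rate:
posterior_mean = a / (a + b)
posterior_var = (a * b) / ((a + b) ** 2 * (a + b + 1))

print(round(posterior_mean, 2), posterior_var)
```

As evidence accumulates the posterior narrows around the true rate: the agent ends up with mere first-order uncertainty (a known probability of reward) instead of unknown probabilities.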





References

  • Banks, J. S., & Sundaram, R. K. (1994). Switching Costs and the Gittins Index. Econometrica: Journal of the Econometric Society, 62(3), 687-694.
  • Bayer, H. M., & Glimcher, P. W. (2005). Midbrain Dopamine Neurons Encode a Quantitative Reward Prediction Error Signal. Neuron, 47(1), 129.
  • Estes, W. K. (1954). Individual Behavior in Uncertain Situations: An Interpretation in Terms of Statistical Association Theory. In R. M. Thrall, C. H. Coombs & R. L. Davies (Eds.), Decision Processes (pp. 127-137). New York: Wiley.
  • Gittins, J. C. (1989). Multi-Armed Bandit Allocation Indices. New York: Wiley.
  • Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4, 237-285.
  • Meyer, R. J., & Shi, Y. (1995). Sequential Choice under Ambiguity: Intuitive Solutions to the Armed-Bandit Problem. Management Science, 41(5), 817-834.
  • Morris, G., Nevet, A., Arkadir, D., Vaadia, E., & Bergman, H. (2006). Midbrain Dopamine Neurons Encode Decisions for Future Action. Nat Neurosci, 9(8), 1057-1063.
  • Sundaram, R. K. (2003). Generalized Bandit Problems: Working Paper, Stern School of Business.
  • Tackseung, J. (2004). A Survey on the Bandit Problem with Switching Costs. De Economist, V152(4), 513-541.
  • Yen, G., Yang, F., & Hickey, T. (2002). Coordination of Exploration and Exploitation in a Dynamic Environment. International Journal of Smart Engineering System Design, 4(3), 177-182.



10/20/07

Rachlin On Rationality and Addiction

Blogging on Peer-Reviewed Research
Psychologist H. Rachlin (author of The Science of Self-Control) proposes an excellent analysis of the relationship between rationality and addiction. On the one hand, addicts are rational utility-maximizers: they use substances they like. On the other hand, destroying your life with drugs is clearly irrational. He first suggests considering rationality as "overt behavioral patterns rather than as a smoothly operating logic mechanism in the head" or "as consistency in choice" (the economic notion of rationality). He conceives rationality as "a pattern of predicting your own future behavior and acting upon those predictions to maximize reinforcement in the long run." Addicts are irrational "to the extent that they fail to make such predictions and to take such actions."

References
  • Rachlin, H. (2007). In What Sense Are Addicts Irrational? Drug and Alcohol Dependence, 90(Suppl. 1), S92-S99.








10/12/07

Self-control is a Scarce Resource

We all have a limited quantity of energy to devote to self-control:

In a recent study, Michael Inzlicht of the University of Toronto Scarborough and colleague Jennifer N. Gutsell offer an account of what is happening in the brain when our vices get the better of us.

Inzlicht and Gutsell asked participants to suppress their emotions while watching an upsetting movie. The idea was to deplete their resources for self-control. The participants reported their ability to suppress their feelings on a scale from one to nine. Then, they completed a Stroop task, which involves naming the color of printed words (i.e. saying red when reading the word “green” in red font), yet another task that requires a significant amount of self-control.

The researchers found that those who suppressed their emotions performed worse on the Stroop task, indicating that they had used up their resources for self-control while holding back their tears during the film.

An EEG, performed during the Stroop task, confirmed these results. Normally, when a person deviates from their goals (in this case, wanting to read the word, not the color of the font), increased brain activity occurs in a part of the frontal lobe called the anterior cingulate cortex, which alerts the person that they are off-track. The researchers found weaker activity occurring in this brain region during the Stroop task in those who had suppressed their feelings. In other words, after engaging in one act of self-control this brain system seems to fail during the next act.
http://www.psychologicalscience.org/media/releases/2007/inzlicht.cfm
(via Cognews)
  • Inzlicht, M., & Gutsell, J. N. (in press). Running on empty: Neural signals for self-control failure. Psychological Science. (preprint)




A roundup of the most popular posts

According to the stats, the 5 most popular posts on Natural Rationality are:

  1. Strong reciprocity, altruism and egoism
  2. What is Wrong with the Psychology of Decision-Making?
  3. My brain has a politics of its own: neuropolitic musing on values and signal detection
  4. Rational performance and behavioral ecology
  5. Natural Rationality for Newbies

Enjoy!



10/5/07

Ape-onomics: Chimps in the Ultimatum Game and Rationality in the Wild

I recently discussed the experimental study of the Ultimatum Game, and showed that it has been studied in economics, psychology, anthropology, psychophysics and genetics. Now primatologists/evolutionary anthropologists Keith Jensen, Josep Call and Michael Tomasello (the same team that showed that chimpanzees are vengeful but not spiteful; see 2007a) had chimpanzees play the Ultimatum Game, or more precisely a mini-ultimatum, where proposers can make only two offers: for instance a fair vs. an unfair one, or a fair vs. a hyperfair one, etc. Chimps had to split grapes. The possibilities were (in x/y pairs, where x is the proposer's share and y the responder's):
  • 8/2 versus 5/5
  • 8/2 versus 2/8
  • 8/2 versus 8/2 (no choice)
  • 8/2 versus 10/0

The experimenters used the following device:



Fig. 1. (from Jensen et al, 2007b) Illustration of the testing environment. The proposer, who makes the first choice, sits to the responder's left. The apparatus, which has two sliding trays connected by a single rope, is outside of the cages. (A) By first sliding a Plexiglas panel (not shown) to access one rope end and by then pulling it, the proposer draws one of the baited trays halfway toward the two subjects. (B) The responder can then pull the attached rod, now within reach, to bring the proposed food tray to the cage mesh so that (C) both subjects can eat from their respective food dishes (clearly separated by a translucent divider)

Results indicate that the chimps behaved like Homo economicus:
responders did not reject unfair offers when the proposer had the option of making a fair offer; they accepted almost all nonzero offers; and they reliably rejected only offers of zero (Jensen et al.)


As the authors conclude, "one of humans' closest living relatives behaves according to traditional economic models of self-interest, unlike humans, and (...) does not share the human sensitivity to fairness."
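The "selfish" pattern in the chimps' responses can be sketched as a toy model. This is illustrative only, not the authors' analysis; the function names and the selfish-proposer assumption are mine:

```python
# Illustrative sketch (not Jensen et al.'s code): the four mini-ultimatum
# conditions, with a responder who, like the chimps, accepts any nonzero
# share of the 10 grapes and rejects only offers of zero.

CONDITIONS = {
    "8/2 vs 5/5":  [(8, 2), (5, 5)],
    "8/2 vs 2/8":  [(8, 2), (2, 8)],
    "8/2 vs 8/2":  [(8, 2), (8, 2)],   # no real choice
    "8/2 vs 10/0": [(8, 2), (10, 0)],
}

def selfish_responder(offer):
    """Accept any nonzero share; reject only zero."""
    _, responder_share = offer
    return responder_share > 0

def selfish_proposer(options):
    """Pick the split that maximizes the proposer's own share."""
    return max(options, key=lambda split: split[0])

for name, options in CONDITIONS.items():
    proposal = selfish_proposer(options)
    accepted = selfish_responder(proposal)
    payoff = proposal if accepted else (0, 0)
    print(f"{name}: proposes {proposal}, accepted={accepted}, payoff={payoff}")
```

Run on the four conditions, this reproduces the qualitative pattern reported above: every nonzero proposal is accepted, and only the 10/0 proposal is rejected (leaving both players with nothing).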

So would Homo economicus be a better picture of nature, red in tooth and claw? Yes and no. In another recent paper, Brosnan et al. studied the endowment effect in chimpanzees. The endowment effect is a bias that makes us place a higher value on objects we own than on objects we do not. Well, chimps do that too. While they are usually indifferent between peanut butter and juice, once they "were given or 'endowed' with the peanut butter, almost 80 percent of them chose to keep the peanut butter, rather than exchange it for a juice bar" (from Vanderbilt news). They do not, however, show loss aversion for non-food goods (a rubber-bone dog chew toy and a knotted-rope dog toy). Another related study (Chen et al., 2006) also indicates that capuchin monkeys exhibit loss aversion.

So there seems to be an inconsistency here: chimps are both economically and non-economically rational. But this is only, as the positivists used to say, a pseudo-problem: they tend to comply with standard or 'selfish' economics in social contexts, but not in individual ones. The difference between us and them is that we truly are, by nature, political animals. Our social rationality requires reciprocity, negotiation, exchange, communication, fairness, cooperation, morality, etc., not plain selfishness. Chimps do cooperate and exhibit a slight taste for fairness (see section 1 of this post), but not in human proportions.





10/4/07

Social Neuroeconomics: A Review by Fehr and Camerer

Ernst Fehr and Colin Camerer, two prominent experimental/behavioral/neuro-economists, published a new paper in Trends in Cognitive Sciences on social neuroeconomics. Discussing many studies (the paper is a state-of-the-art review), they conclude that

social reward activates circuitry that overlaps, to a surprising degree, with circuitry that anticipates and represents other types of rewards. These studies reinforce the idea that social preferences for donating money, rejecting unfair offers, trusting others and punishing those who violate norms, are genuine expressions of preference

The authors illustrate this overlap with the following figure: social and non-social rewards elicit similar neural activation (see references for all cited studies at the end of this post):



Figure 1. (from Fehr and Camerer, forthcoming). Parallelism of rewards for oneself and for others: Brain areas commonly activated in (a) nine studies of social reward (..), and (b) a sample of six studies of learning and anticipated own monetary reward (..).

So basically, we have enough evidence to justify a model of rational agents as entertaining social preferences. As I argue in a forthcoming paper (let me know if you want a copy), these findings will have a normative impact, especially for game-theoretic situations: if a rational agent anticipates other agents' strategies, she had better anticipate that they have social preferences. For instance, one might argue that in the Ultimatum Game, it is rational to make a fair offer.


Reference:
  • Fehr, E. and Camerer, C.F., Social neuroeconomics: the neural circuitry of social preferences, Trends Cogn. Sci. (2007), doi:10.1016/j.tics.2007.09.002


Studies of social reward cited in Fig. 1:

  • [26] J. Rilling et al., A neural basis for social cooperation, Neuron 35 (2002), pp. 395–405.
  • [27] J.K. Rilling et al., Opposing BOLD responses to reciprocated and unreciprocated altruism in putative reward pathways, Neuroreport 15 (2004), pp. 2539–2543.
  • [28] D.J. de Quervain et al., The neural basis of altruistic punishment, Science 305 (2004), pp. 1254–1258.
  • [29] T. Singer et al., Empathic neural responses are modulated by the perceived fairness of others, Nature 439 (2006), pp. 466–469
  • [30] J. Moll et al., Human fronto-mesolimbic networks guide decisions about charitable donation, Proc. Natl. Acad. Sci. U. S. A. 103 (2006), pp. 15623–15628.
  • [31] W.T. Harbaugh et al., Neural responses to taxation and voluntary giving reveal motives for charitable donations, Science 316 (2007), pp. 1622–1625.
  • [32] Tabibnia, G. et al. The sunny side of fairness – preference for fairness activates reward circuitry. Psychol. Sci. (in press).
  • [55] T. Singer et al., Brain responses to the acquired moral status of faces, Neuron 41 (2004), pp. 653–662.
  • [56] B. King-Casas et al., Getting to know you: reputation and trust in a two-person economic exchange, Science 308 (2005), pp. 78–83.

Studies of learning and anticipated own monetary reward cited in Fig. 1:

  • [33] S.M. Tom et al., The neural basis of loss aversion in decision-making under risk, Science 315 (2007), pp. 515–518.
  • [61] M. Bhatt and C.F. Camerer, Self-referential thinking and equilibrium as states of mind in games: fMRI evidence, Games Econ. Behav. 52 (2005), pp. 424–459.
  • [73] P.K. Preuschoff et al., Neural differentiation of expected reward and risk in human subcortical structures, Neuron 51 (2006), pp. 381–390.
  • [74] J. O’Doherty et al., Dissociable roles of ventral and dorsal striatum in instrumental conditioning, Science 304 (2004), pp. 452–454.
  • [75] E.M. Tricomi et al., Modulation of caudate activity by action contingency, Neuron 41 (2004), pp. 281–292.



10/3/07

What is Wrong with the Psychology of Decision-Making?

In psychology textbooks, decision-making usually appears in the final chapters. These chapters typically acknowledge the failure of the Homo economicus model and propose to understand human irrationality as the product of heuristics and biases that may be rational under certain environmental conditions. In a recent article, Herbert Gintis documents this neglect:

(…) a widely used text of graduate- level readings in cognitive psychology, (Sternberg & Wagner, 1999) devotes the ninth of eleven chapters to "Reasoning, Judgment, and Decision Making," offering two papers, the first of which shows that human subjects generally fail simple logical inference tasks, and the second shows that human subjects are irrationally swayed by the way a problem is verbally "framed" by the experimenter. A leading undergraduate cognitive psychology text (Goldstein, 2005) placed "Reasoning and Decision Making" the last of twelve chapters. This includes one paragraph describing the rational actor model, followed by many pages purporting to explain why it is wrong. (…) in a leading behavioral psychology text (Mazur, 2002), choice is covered in the last of fourteen chapters, and is limited to a review of the literature on choice between concurrent reinforcement schedules and the capacity to defer gratification (Gintis, 2007, pp. 1-2)
Why? The standard conception of decision-making in psychology can be summarized by two claims, one conceptual, one empirical. Conceptually, the standard conception holds that decision-making is a separate topic: it is one of the subjects that psychologists may study, together with categorization, inference, perception, emotion, personality, etc. As Gintis showed, decision-making gets its own chapters (usually the last ones) in psychology textbooks. On the empirical side, the standard conception construes decision-making as an explicit deliberative process, such as reasoning. For instance, in a special issue of Cognition on decision-making (volume 49, issues 1-2, pages 1-187), one finds the following claims:

Reasoning and decision making are high-level cognitive skills […]
(Johnson-Laird & Shafir, 1993, p. 1)

Decisions . . . are often reached by focusing on reasons that justify the selection of one option over another

(Shafir et al., 1993, p. 34)

Hence decision-making is studied mostly through multiple-choice tests using the traditional paper-and-pen method, which clearly suggests that deciding is considered an explicit process. Psychological research thus assumes that subjects' competence in probabilistic reasoning, as revealed by these tests, is a good description of their decision-making capacities.

These two claims, however, are not unrelated: since decision-making is construed as a central, high-level faculty that stands between perception and action, it can be studied in isolation. Together they form a coherent whole, something philosophers of science would call a paradigm. This paradigm is built around a particular view of decision-making (and more generally, of cognition) that could be called "cogitative":

Perception is commonly cast as a process by which we receive information from the world. Cognition then comprises intelligent processes defined over some inner rendition of such information. Intentional action is glossed as the carrying out of commands that constitute the output of a cogitative, central system. (Clark, 1997, p. 51)


In another post, I'll present an alternative to the Cogitative conception, based on research in neuroeconomics, robotics and biology.

You can find Gintis's article on his page, together with other great papers.

References

  • Clark, A. (1997). Being There : Putting Brain, Body, and World Together Again. Cambridge, Mass.: MIT Press.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Goldstein, E. B. (2005). Cognitive Psychology : Connecting Mind, Research, and Everyday Experience. Australia Belmont, CA: Thomson/Wadsworth.
  • Johnson-Laird, P. N., & Shafir, E. (1993). The Interaction between Reasoning and Decision Making: An Introduction. Cognition, 49(1-2), 1-9.
  • Mazur, J. E. (2002). Learning and Behavior (5th ed.). Upper Saddle River, N.J.: Prentice Hall.
  • Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-Based Choice. Cognition, 49(1-2), 11-36.
  • Sternberg, R. J., & Wagner, R. K. (1999). Readings in Cognitive Psychology. Fort Worth, TX: Harcourt Brace College Publishers.



10/1/07

The Rationality of Soccer Goalkeepers


(source: Flickr)


A study in the Journal of Economic Psychology analyzes soccer goalkeepers' decision-making on penalty kicks: jump left, jump right, or stay in the center. Bar-Eli et al.'s study of

(... ) 286 penalty kicks in top leagues and championships worldwide shows that given the probability distribution of kick direction, the optimal strategy for goalkeepers is to stay in the goal's center.
The probabilities of stopping a penalty kick are the following:


Why do goalkeepers jump left or right, when the optimal Nash-equilibrium strategy is to stay in the goal's center? Because jumping is the norm, and thus

(...) a goal scored yields worse feelings for the goalkeeper following inaction (staying in the center) than following action (jumping), leading to a bias for action.
This study illustrates the tension between internal (subjective) and external (objective) rationality discussed in my last post: statistically speaking, as a rule for winning games, jumping is (externally) suboptimal; but given the social norm and the associated emotional feelings, jumping is (internally) rational. Note also how modeling the game matters for normative issues: two other studies (Palacios-Huerta, 2003; Chiappori et al., 2002) concluded that goalkeepers play a rational strategy, but they assumed that shooter and goalkeeper had only two options: (kick/jump) left or right. Bar-Eli et al. added (kick/stay in) the center.
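The expected-value logic behind "stay in the center" can be sketched as follows. The numbers below are made up for illustration; they are not Bar-Eli et al.'s figures, only chosen so that a center-heavy save matrix makes staying optimal:

```python
# Hypothetical numbers for illustration only (not the paper's data):
# the distribution of kick directions, and the chance of a save
# conditional on (jump direction, kick direction).

kick_dist = {"left": 0.4, "center": 0.3, "right": 0.3}

# save_prob[jump][kick]: probability of stopping the ball
save_prob = {
    "left":   {"left": 0.3, "center": 0.1, "right": 0.0},
    "center": {"left": 0.1, "center": 0.6, "right": 0.1},
    "right":  {"left": 0.0, "center": 0.1, "right": 0.3},
}

def expected_save(jump):
    """Expected save probability of a jump strategy over the kick distribution."""
    return sum(kick_dist[k] * save_prob[jump][k] for k in kick_dist)

best = max(save_prob, key=expected_save)
for jump in save_prob:
    print(f"jump {jump}: expected save prob = {expected_save(jump):.3f}")
print("optimal strategy:", best)
```

With these toy numbers, staying in the center yields an expected save probability of 0.25, against 0.15 for jumping left and 0.12 for jumping right, which is the shape of the argument in the paper: given how kicks are actually distributed, the center dominates.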





9/24/07

Natural Rationality for Newbies



Decision-making, as I routinely argue in this blog, must be understood as embedded in a richer theoretical framework: Darwin's economy of nature. According to this principle, animals can be modeled as economic agents, and their control systems as economic devices. All living beings are thus deciders, strategists or traders in the economy of reproduction and survival.

When he suggested that nature is an economy, Darwin paved the way for a stronger interaction between biology and economics. One consequence of a bio-economic approach is that decision-making becomes an increasingly important topic. The usual, commonsense construal of decision-making suggests that it is inherently tied to human characteristics, language in particular. If that were the case, then talk of animal decisions would be merely metaphorical. However, behavioral ecology showed that animal and human behavior is constrained by economic parameters and coherent with the economy-of-nature principle. Neuroeconomics suggests that neural processing follows the same logic. Dopaminergic systems drive animals to achieve certain goals, while affective mechanisms place goals and actions in value spaces. These systems, although extensively studied in humans, are not peculiar to them: humans display a unique complexity of goals and values, but this complexity relies partly on neural systems shared with many other animals; the nucleus accumbens and the amygdala, for instance, are common to mammals. Brainy animals evolved an economic decision-making organ that allows them to cope with complex situations. As Gintis remarks, the complexity and the metabolic cost of central nervous systems co-evolved in vertebrates, which suggests that despite their cost, brains are designed to make adaptive decisions[i].

Hence decision-making should be analyzed in the same way as, and occupies an intellectual niche analogous to, the concept of cooperation. Nowadays, the evolutionary foundations, neural substrates, psychological mechanisms, formal modeling and philosophical analyses of cooperation constitute a coherent, although not unified, field of inquiry[ii]. The nature of prosocial behavior, from kin selection to animal cooperation to human morality, is best understood by adopting a naturalistic stance that highlights both the continuity of the phenomenon and human specificity. Biological decision-making deserves the same eclecticism.

Talking about biological decision-making comes at a certain conceptual price. As many philosophers have pointed out, whenever one describes actions and decisions, one also presupposes the rationality of the agent[iii]. When we say that agent A chose X, we suppose that A had reasons, preferences, and so on. The default assumption is that preferences and actions are coherent: the former cause the latter, and the latter are justified by the former. The rationality these philosophers refer to, however, is a complex cognitive faculty that requires language and propositional attitudes such as beliefs and desires. When animals forage in their environment and select prey, patches, or mates, no one presupposes that they entertain beliefs or desires. There is nonetheless a presupposition that "much of the structure of the internal mental operations that inform decisions can be viewed as the product of evolution and natural selection".[iv] Thus, to a certain degree, the neuronal processes concerned with the use of information are effective and efficient; otherwise natural selection would have discarded them. I shall label these presuppositions, and the mechanisms they might reveal, "natural rationality". Natural rationality is a condition of possibility for the concept of biological decision-making and for the economy-of-nature principle. One needs to presuppose that there is a natural excellence in the biosphere before studying decisions and constraints.

More than a logical prerequisite, natural rationality concerns the descriptive and normative properties of the mechanisms by which humans and other animals make decisions. Most concepts of rationality take only the descriptive or the normative side, and hence tend either to describe cognitive/neuronal processes without concern for their optimality, or to state ideal conditions for rational behavior. For instance, while classical economics considers rational-choice theory either a normative theory or a useful fiction, proponents of bounded rationality or ecological rationality refuse to characterize decision-making as optimization.[v] Others advocate a strong division of labor between the normative and descriptive projects: Tversky and Kahneman, for instance, concluded from their studies of human bounded rationality that the normative and descriptive accounts of decision-making are two separate projects that "cannot be reconciled"[vi].

The perspective I suggest here is that we should expect an overlap between normative and descriptive theories, and that the existence of this overlap is warranted by natural selection. On the normative side, we should ask what procedures and mechanisms biological agents should follow in order to make effective and efficient decisions given all their constraints in the economy of nature. On the descriptive side, we must assess whether a procedure succeeds in achieving goals or, conversely, what goals a procedure could aim at achieving. If there is no overlap between norms and facts, then either the norms should be reconceptualized or the facts should be scrutinized: it might be that the norms are unrealistic, or that we did not identify the right goal or value.

This account contrasts with philosophers (e.g. Dennett or Davidson) who construe rationality as an idealization, and with researchers who preach the elimination of this concept because of its idealized status (evolutionary psychologists, for instance[vii]). Thus, rationality can be conceived not as an a priori postulate in economics and philosophy, but as an empirical and multidisciplinary research program. Quine once said that "creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die out before reproducing their kind"[viii]. Whether this is true for inductions is still open to debate, but I suggest that it clearly applies to decisions.

Notes and references
  • [i] (Gintis, 2007, p. 3)
  • [ii] See for instance how neuroscience, game theory, economic, philosophy, psychology and evolutionary theory interact in (E. Fehr & Fischbacher, 2002; Ernst Fehr & Fischbacher, 2003; Hauser, 2006; Penner et al., 2005).
  • [iii] (Davidson, 1980; Dennett, 1987; Popper, 1994).
  • [iv] (Real, 1994, p. 4)
  • [v] (Chase et al., 1998; Gigerenzer, 2004; Selten, 2001)
  • [vi] (Tversky & Kahneman, 1986, p. s272)
  • [vii] (Cosmides & Tooby, 1994)
  • [viii] (Quine, 1969, p. 126)

References

  • Chase, V. M., Hertwig, R., & Gigerenzer, G. (1998). Visions of Rationality. Trends in Cognitive Science, 2(6), 206-214.
  • Cosmides, L., & Tooby, J. (1994). Better Than Rational: Evolutionary Psychology and the Invisible Hand. The American Economic Review, 84(2), 327-332.
  • Davidson, D. (1980). Essays on Actions and Events. Oxford: Oxford University Press.
  • Dennett, D. C. (1987). The Intentional Stance. Cambridge, Mass.: MIT Press.
  • Fehr, E., & Fischbacher, U. (2002). Why Social Preferences Matter: The Impact of Non-Selfish Motives on Competition, Cooperation and Incentives. Economic Journal, 112, C1-C33.
  • Fehr, E., & Fischbacher, U. (2003). The Nature of Human Altruism. Nature, 425(6960), 785-791.
  • Gigerenzer, G. (2004). Fast and Frugal Heuristics: The Tools of Bounded Rationality. In D. Koehler & N. Harvey (Eds.), Blackwell Handbook of Judgment and Decision Making (pp. 62–88). Oxford: Blackwell.
  • Gintis, H. (2007). A Framework for the Unification of the Behavioral Sciences. Behavioral and Brain Sciences, 30(01), 1-16.
  • Hauser, M. D. (2006). Moral Minds : How Nature Designed Our Universal Sense of Right and Wrong. New York: Ecco.
  • Penner, L. A., Dovidio, J. F., Piliavin, J. A., & Schroeder, D. A. (2005). Prosocial Behavior: Multilevel Perspectives. Annual Review of Psychology, 56(1), 365-392.
  • Popper, K. R. (1994). Models, Instruments, and Truth: The Status of the Rationality Principle in the Social Sciences. In The Myth of the Framework. In Defence of Science and Rationality
  • Quine, W. V. O. (1969). Ontological Relativity, and Other Essays. New York,: Columbia University Press.
  • Real, L. A. (1994). Behavioral Mechanisms in Evolutionary Ecology: University of Chicago Press.
  • Selten, R. (2001). What Is Bounded Rationality ? . In G. Gigerenzer & R. Selten (Eds.), Bounded Rationality: The Adaptive Toolbox (pp. 13-36). MIT Press: Cambridge, MA.
  • Tversky, A., & Kahneman, D. (1986). Rational Choice and the Framing of Decisions. The Journal of Business, 59(4), S251-S278.



9/21/07

Neuroeconomics, folk-psychology, and eliminativism



conventional wisdom has long modeled our internal cognitive processes, quite wrongly, as just an inner version of the public arguments and justifications that we learn, as children, to construct and evaluate in the social space of the dinner table and the marketplace. Those social activities are of vital importance to our collective commerce, both social and intellectual, but they are an evolutionary novelty, unreflected in the brain’s basic modes of decision-making
(Churchland, 2006, p. 31).


The folk-psychological model of rationality construes rational decision-making as the product of practical reasoning by which an agent infers, from her beliefs and desires, the right action to perform. True, when we are asked to explain or predict actions, our intuitions lead us to describe them as the product of intentional states. In a series of studies, Malle and Knobe (1997, 2001) showed that folkpsychology is a language game in which beliefs, desires and intentions are the main players. But using the intentional idiom does not mean that it picks out the real causes of action. This is where realist, instrumentalist and eliminativist accounts conflict. A realist account of beliefs and desires takes them to be real causal entities; an instrumentalist account treats them as useful fictions; and an eliminativist account suggests that they are embedded in a faulty theory of mental functioning that should be eliminated (see Paul M. Churchland & Churchland, 1998; Dennett, 1987; Fodor, 1981). Can neuroeconomics shed light on this traditional debate in philosophy and cognitive science?

Neuroeconomics, I suggest, supports an eliminativist approach to cognition. Just as contemporary chemistry does not explain combustion by the release of phlogiston (a substance supposed to exist in combustible bodies), cognitive science should stop explaining actions as the product of beliefs and desires. Behavioral regularities and neural mechanisms are sufficient to explain decisions. When subjects evaluate whether or not to buy a product, and whether or not its price seems justified, how informative is it to cite propositional attitudes as causes? The real entities involved in decision-making are neural mechanisms involved in hedonic feelings, cognitive control, emotional modulation, conflict monitoring, planning, etc. Preferences, utility functions or practical reasoning can explain purchasing, but they do not posit entities that can enter the "causal nexus" (Salmon, 1984). Neuroeconomics explains purchasing behavior not as an inference from beliefs and desires to action, but as a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of paying (elicited in the insula). Prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase (Knutson et al., 2007). Hence the explanation of purchasing cites causes (brain areas) that account for the behavior as the product of a stronger prefrontal activation: the agent had a stronger incentive to buy. A fully mechanistic account would, of course, detail the algorithmic processes performed by each area.
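The tradeoff just described can be caricatured as a simple logistic model. This is a toy illustration, not Knutson et al.'s actual analysis; the function name and the weights are invented for the sketch:

```python
import math

# Toy illustration (not Knutson et al.'s model): purchase probability rises
# with reward-related (nucleus accumbens) activation and falls with insula
# activation. The weights below are made up.

def purchase_probability(nacc, insula, w_nacc=1.5, w_insula=1.2, bias=0.0):
    """Logistic tradeoff between anticipated pleasure and pain of paying."""
    score = w_nacc * nacc - w_insula * insula + bias
    return 1 / (1 + math.exp(-score))

# Strong reward signal, weak insula response: purchase is likely
print(purchase_probability(1.0, 0.2))   # > 0.5
# Weak reward signal, strong insula response: purchase is unlikely
print(purchase_probability(0.2, 1.0))   # < 0.5
```

The point of the sketch is only that the explanation bottoms out in a quantitative competition between two signals, with no beliefs or desires anywhere in the model.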
The belief-desire framework implicitly supposes that the causes of an action are those an agent would verbally express when asked to justify it. But on what grounds can this be justified?

Psychological and neural studies suggest, rather, a dissociation between the mechanisms that lead to actions and the mechanisms by which we explain them. Since Nisbett and Wilson's (1977) seminal studies, research in psychology has shown that the very act of explaining the intentional causes of our actions is a re-constructive process that might be faulty. Subjects give numerous reasons why they prefer one pair of socks (or other objects) to another, but they all prefer the last one on the right. The real explanation of their preferences is a position effect, or right-hand bias. For some reason, subjects pick the right-hand pair and, post hoc, generate an explanation for this preference, a phenomenon widely observed. For instance, when subjects tasted samples of Pepsi and Coke with and without the brands' labels, they reported different preferences (McClure et al., 2004). Without labels, subjects evaluated both drinks similarly. When the drinks were labeled, subjects reported a stronger preference for Coke, and neuroimaging mirrored this branding effect. Sensory information (taste) and cultural information (brand) are associated with different areas that interact so as to bias preferences. Without the label, the drink evaluation relies solely on sensory information. Subjects may motivate their preference for one beverage over another with many diverse arguments, but the real influence on their preference is the brand's label. The conscious narratives we produce when rationalizing our actions are not "direct pipeline[s] to nonconscious mental processes" (Wilson & Dunn, 2004, p. 507) but approximate reconstructions. When our thoughts occur before an action, are consistent with it, and appear as its only cause, we infer that these thoughts caused the action, and rule out other internal or external causes (Wegner, 2002).
But the fact that we rely on the belief-desire framework to explain our own and others' actions as the product of intentional states does not constitute an argument for considering these states satisfying causal explanations of action.

The belief-desire framework might be a useful conceptual scheme for fast and frugal explanations, but that does not make folkpsychological constructs suitable for scientific explanation. In the same vein, if folkbiology were the sole foundation of biology, whales would still be categorized as fish. The nature of the biological world is not explained by our (faulty and biased) folkbiology, but by making explicit the mechanisms of natural selection, reproduction, cellular growth, etc. There is no reason to believe that our folkpsychology is a better description of mental mechanisms. Beliefs, desires and intentions are folk-psychological constructs that have no counterpart in neuroscience. Motor control and action planning, for instance, are explained by different kinds of representations, such as forward and inverse models, not by propositional attitudes (Kawato & Wolpert, 1998; Wolpert & Kawato, 1998). Consequently, the fact that we rely on folkpsychology to explain actions does not constitute an argument for considering that this naive theory provides reliable explanations of actions. Saying that the sun rises every morning yields good predictions, and it could even explain why there is more heat and light at noon, but the effectiveness of the sun-rising framework does not justify its use as a scientific theory.

As many philosophers of science have suggested, a genuine explanation is mechanistic: it consists in breaking a system into parts and processes, and explaining how those parts and processes cause the system to behave the way it does (Bechtel & Abrahamsen, 2005; Craver, 2001; Machamer et al., 2000). Folkpsychology may save the phenomena, but it still does not propose causal parts and processes. More generally, the problem with the belief-desire framework is that it is a description of our attitude toward the things we call "agents", not a description of what constitutes the true nature of agents. It thus conflates the map and the territory. Moreover, conceptual advances are made when objects are described and classified according to their objective properties. A chemical theory that classified elements according to their propensity to quench thirst would be nonsense (although it could be useful in other contexts). At best, the belief-desire framework can be considered an Everyday Handbook of Intentional Language.

References

  • Bechtel, W., & Abrahamsen, A. (2005). Explanation: A Mechanist Alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421-441.
  • Churchland, P. M. (2006). Into the Brain: Where Philosophy Should Go from Here. Topoi, 25(1), 29-32.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary : Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Craver, C. F. (2001). Role Functions, Mechanisms, and Hierarchy. Philosophy of Science, 68, 53-74.
  • Dennett, D. C. (1987). The Intentional Stance. Cambridge, Mass.: MIT Press.
  • Fodor, J. A. (1981). Representations : Philosophical Essays on the Foundations of Cognitive Science (1st MIT Press ed.). Cambridge, Mass.: MIT Press.
  • Kawato, M., & Wolpert, D. M. (1998). Internal Models for Motor Control. Novartis Found Symp, 218, 291-304; discussion 304-297.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
  • Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking About Mechanisms. Philosophy of Science, 67, 1-24.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Malle, B. F., & Knobe, J. (2001). The Distinction between Desire and Intention: A Folk-Conceptual Analysis. In B. F. M. L. J. Moses & D. A. Baldwin (Eds.), Intentions and Intentionality: Foundations of Social Cognition (pp. 45-67). Cambridge, MA: MIT Press.
  • McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural Correlates of Behavioral Preference for Culturally Familiar Drinks. Neuron, 44(2), 379-387.
  • Nisbett, R. E., & Wilson, T. D. (1977). Telling More Than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84, 231-259.
  • Salmon, W. C. (1984). Scientific Explanation and the Causal Structure of the World. Princeton, N.J.: Princeton University Press.
  • Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, Mass.: MIT Press.
  • Wilson, T. D., & Dunn, E. W. (2004). Self-Knowledge: Its Limits, Value, and Potential for Improvement. Annual Review of Psychology, 55(1), 493-518.
  • Wolpert, D. M., & Kawato, M. (1998). Multiple Paired Forward and Inverse Models for Motor Control. Neural Networks, 11(7-8), 1317.



9/17/07

Rational performance and behavioral ecology

The oeconomy of nature is in this respect exactly of a piece with what it is upon many other occasions. With regard to all those ends which, upon account of their peculiar importance, may be regarded, if such an expression is allowable, as the favourite ends of nature, she has constantly in this manner not only endowed mankind with an appetite for the end which she proposes, but likewise with an appetite for the means by which alone this end can be brought about, for their own sakes, and independent of their tendency to produce it. Thus self-preservation, and the propagation of the species, are the great ends which Nature seems to have proposed in the formation of all animals. Mankind are endowed with a desire of those ends, and an aversion to the contrary; with a love of life, and a dread of dissolution; with a desire of the continuance and perpetuity of the species, and with an aversion to the thoughts of its intire extinction. But though we are in this manner endowed with a very strong desire of those ends, it has not been intrusted to the slow and uncertain determinations of our reason, to find out the proper means of bringing them about.
- Adam Smith (1759)


One of the main goals of this blog (and the research that feeds it) is the development of a coherent and rigorous naturalized theory of rationality--or more precisely, a theory of natural rationality (I'll discuss the difference in another post). This theory-in-progress construes rationality as a natural feature of the biological world (and yes, fellow philosophers, I do deal with the question of normativity, but another day). The big picture is the following: as in linguistics, there is a distinction between rational competence and rational performance: the competence is the set of mechanisms that makes rational performance (= rational action) possible. Neuroeconomics, as I see it, is the most promising research program attempting to decipher rational competence. An interesting possibility raised by this research is that the neural mechanisms involved in rational competence may not be uniquely human. In my PhD thesis (in French, pdf here), I proposed that the whole vertebrate clade should be considered as the natural kind that implements the category "rational agents". I have blogged a lot about neuroeconomics (competence), so today I'll talk about performance. How do animals behave rationally? In a basic, utility-maximizing sense: they optimize a utility function. While research in behavioral ecology has shown that this hypothesis is justified, neuroeconomics shows that it is more than an 'as-if' hypothesis or a useful fiction. So let's talk about behavioral ecology and rational performance.

Behavioral ecology models animals as economic agents that achieve ultimate goals (survival and reproduction) through instrumental ones (partner selection, food acquisition and consumption, etc.)[1]. Optimal foraging theory, for instance, represents foraging as the maximization of net caloric intake. With general principles derived from microeconomics, optimization theory and control theory, coupled with information about the physical constitution and ecological niche of the predator, it is possible to predict what kind of prey and patch an animal will favor, given certain costs such as search, identification, procurement, and handling costs. Optimal foraging theory (OFT), as its founders put it, tries to determine "in which patches a species would feed and which items would form its diet if the species acted in the most economical fashion"[2]. OFT primarily models animals as efficient goal-seekers and goal-achievers.
OFT thus incorporates agents, their choices, the currency to be maximized (most of the time caloric gain) and a set of constraints. Most studies examine where to forage (patch choice), what to forage (prey choice) and for how long (optimal time allocation). The individual animal is supposed to make a series of decisions that solve a problem of sequential optimization. An animal looking for nutrients must maximize its caloric intake while taking into account the calories spent in seeking and capturing its prey; to this problem one must also add, among other parameters, the frequency of prey encounters, the time devoted to searching and the calories each prey type affords. All these parameters can be represented by a set of equations from which numerical methods such as dynamic programming allow biologists to derive the algorithms that an optimal forager would implement in order to maximize caloric intake. These algorithms are then used to predict behavior. Mathematically speaking, OFT is the translation of decision-theoretic axioms—together with many auxiliary hypotheses—into tractable calorie-maximization algorithms.
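The prey-choice logic can be sketched with the classic "zero-one" diet-breadth rule from MacArthur and Pianka's framework: rank prey types by profitability (energy per unit handling time) and add a type to the diet only if its profitability beats the long-term intake rate of the diet assembled so far. A minimal sketch, with purely illustrative numbers not drawn from any real foraging study:

```python
def optimal_diet(prey):
    """prey: list of (energy, handling_time, encounter_rate) tuples.
    Returns the subset of prey types an optimal forager should attack."""
    # Rank prey types by profitability e/h (energy per unit handling time).
    ranked = sorted(prey, key=lambda p: p[0] / p[1], reverse=True)
    diet, energy, time = [], 0.0, 0.0
    for e, h, lam in ranked:
        # Long-term intake rate of the diet built so far.
        rate = energy / (1.0 + time)
        if e / h > rate:          # this type pays its way: include it
            diet.append((e, h, lam))
            energy += lam * e     # encounter-weighted energy gained
            time += lam * h       # encounter-weighted handling time
        else:
            break                 # remaining types are even less profitable
    return diet

# A rich prey (10 cal, 2 s handling) and a poor one (4 cal, 4 s):
# the poor type is excluded because its profitability (1 cal/s)
# falls below the rate the rich type alone already yields.
best = optimal_diet([(10, 2, 0.5), (4, 4, 1.0)])
```

A notable prediction of this rule: whether a poor prey type belongs in the diet depends on the abundance of the better types, not on its own abundance.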
Economic models of animal behavior have succeeded in both explanation and prediction. They predict, for example, how birds split their time between defending a territory and foraging[3], or between singing and foraging[4]. In their meta-analysis, Sih and Christensen[5] re-examined 134 foraging studies, in laboratory and natural contexts, experimental and observational, and concluded that, although predictive success is not perfect, the predictive power of the theory is relatively high when prey are motionless (the prey can be a plant, seeds, honey, etc.).
Interactive contexts are aptly modeled by game theory, mainly social foraging, fighting and predator-prey relations[6]. For example, a model by Vickery et al.[7] predicted that the co-occurrence of three social foraging strategies, producer (gathering nutrients), scrounger (stealing nutrients) and opportunist (switching between producer and scrounger), occurs only in the—very improbable—case where the losses opportunists incur while foraging are exactly equivalent to the profit of stealing. The model does, however, predict certain distributions of pairs of strategies that constitute evolutionarily stable strategies (ESS), that is, strategies that cannot be invaded by any competing alternative strategy. The proportion of the food patch shared with scroungers, the size of the group and the degree of compatibility between the scrounger and producer strategies (i.e., how easy it is for the animal to perform both activities) determine the distribution of the strategies in a population, which was confirmed, inter alia, in birds (Lonchura punctulata)[8]. As predicted by the model, the producer strategy becomes less common as the cost of individual foraging increases.
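The producer-scrounger equilibrium can be illustrated with a toy version of the Vickery et al. setup: each producer that discovers a patch of F food items eats a "finder's share" a before scroungers arrive, and the remainder is split among the finder and all scroungers; the stable scrounger fraction is where the two payoffs are equal. All parameter values below are hypothetical:

```python
def payoffs(q, G=10, F=20.0, a=5.0):
    """Payoffs to one producer and one scrounger when a fraction q
    of the G-member group scrounges; F = items per patch,
    a = finder's advantage."""
    S = q * G                        # number of scroungers
    share = (F - a) / (S + 1)        # each bird's cut of the remainder
    producer = a + share             # finder's advantage plus its cut
    scrounger = (1 - q) * G * share  # a cut from every producer's find
    return producer, scrounger

def ess_scrounger_fraction(G=10, F=20.0, a=5.0, tol=1e-9):
    """Bisect for the q at which the two payoffs are equal (the ESS)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        p, s = payoffs(mid, G, F, a)
        if s > p:      # scrounging still pays: equilibrium lies higher
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

In this toy model the equilibrium has the closed form 1 - a/F - 1/G (0.65 with the defaults), so a larger finder's advantage shrinks the stable scrounger fraction, in line with the qualitative prediction cited above.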
Recently, behavioral ecologists have found that animals can also be modeled as traders in biological markets. Obviously, biological markets have no symbolic or conventional currency systems, but many interactions between animals institute trading structures. As soon as agents are able to provide commodities for mutual profit, competition for obtaining those commodities drives up the bidding. Animals seek and select partners according to the principle of supply and demand in interspecific mutualism, mate selection and intraspecific cooperation. An example of the last type is the cleaning market instituted by the fish Hipposcarus harid and the cleaner fish Labroides dimidiatus. The "customers" (Hipposcarus) use the services of the cleaner to have their parasites removed, whereas the cleaners occasionally cheat and eat the healthy tissue of their customers. Since the cleaners offer a service that cannot be found elsewhere, they enjoy a certain economic advantage. A customer cannot choose whether or not to be exploited, whereas the cleaner chooses whether or not to cooperate (thus the payoffs are asymmetric). The customer—a predator fish that could eat the cleaner—abstains from consuming the cleaner in the majority of cases, given the reciprocal advantage. Bshary and Schaffer[9] observed that cleaners spend more time with occasional customers than with regular ones, and fight over them, since occasional customers are easier to exploit. All this makes perfect economic sense.
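The cleaner's choice can be given a minimal repeated-game reading (my illustration, not Bshary and Schaffer's own model, and every payoff number is hypothetical): cheating yields a one-off bonus but ends the relationship, so honesty pays with a regular customer while cheating pays with an occasional one.

```python
def cleaner_value(cheat, c=1.0, b=1.5, w=0.9):
    """Long-run value to the cleaner of one client:
    c = gain per honest cleaning, b = extra gain from biting
    healthy tissue, w = probability the client returns."""
    if cheat:
        return c + b       # one exploitative visit; the client leaves
    return c / (1 - w)     # value of the stream of honest cleanings

# Regular client (w = 0.9): honesty is worth 10.0 vs 2.5 for cheating.
# Occasional client (w = 0.2): honesty is worth only 1.25, so cheating pays.
```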
One could of course reformulate each of these results, putting the words decision or exchange between quotation marks to imply that they are mere façons de parler: not really decisions and exchanges, but instinctive behaviors preserved by natural selection. If that is the case, we should also use quotation marks when we talk about human beings: human behavioral ecology applies the same bio-economic logic, with the same success, to humans. Agents are modeled as optimal foragers subject to a multitude of constraints. Given the resources available in the environment of a community, one can generate a model that predicts the optimal allocation of resources. These models are of course more complex than animal ones, since they integrate social parameters like local habits, technology or economic structures. Models of human foraging were able, for instance, to explain differences in foraging style between tribes in Amazonia, given the distance to be traversed and the technology used[10]. Food sharing, the division of labor between men and women, agricultural practices and even Internet browsing (where the commodity is information) can all be modeled by human behavioral ecology[11]. Hence even if foraging or trading behaviors are merely the execution of adaptations, the fact remains that their performance is best described as a decision-making process.


[1] (Krebs & Davies, 1997; Pianka, 2000)
[2] (MacArthur & Pianka, 1966, p. 603).
[3] (Kacelnik, Houston, & Krebs, 1981),
[4] (Thomas, 1999)
[5] (Sih & Christensen, 2001)
[6] (Dugatkin & Reeve, 1998; Hansen, 1986; Lima, 2002)
[7] (Vickery, Giraldeau, Templeton, Kramer, & Chapman, 1991)
[8] (Mottley & Giraldeau, 2000)
[9] (Bshary & Schaffer, 2002)
[10] (Hames & Vickers, 1982)
[11] (Jochim, 1988; Kaplan, Hill, Hawkes, & Hurtado, 1984; Pirolli & Card, 1999)


Bshary, R., & Schaffer, D. (2002). Choosy reef fish select cleaner fish that provide high-quality service. Animal Behaviour, 63(3), 557.
Dugatkin, L. A., & Reeve, H. K. (1998). Game theory & animal behavior. New York ; Oxford: Oxford University Press.
Hames, R. B., & Vickers, W. T. (1982). Optimal Diet Breadth Theory as a Model to Explain Variability in Amazonian Hunting. American Ethnologist, 9(2, Economic and Ecological Processes in Society and Culture), 358-378.
Hansen, A. J. (1986). Fighting Behavior in Bald Eagles: A Test of Game Theory. Ecology, 67(3), 787-797.
Jochim, M. A. (1988). Optimal Foraging and the Division of Labor. American Anthropologist, 90(1), 130-136.
Kacelnik, A., Houston, A. I., & Krebs, J. R. (1981). Optimal foraging and territorial defence in the Great Tit (Parus major). Behavioral Ecology and Sociobiology, 8(1), 35.
Kaplan, H., Hill, K., Hawkes, K., & Hurtado, A. (1984). Food Sharing Among Ache Hunter-Gatherers of Eastern Paraguay. Current Anthropology, 25(1), 113-115.
Krebs, J. R., & Davies, N. B. (1997). Behavioural ecology : an evolutionary approach (4th ed.). Oxford, England ; Malden, MA: Blackwell Science.
Lima, S. L. (2002). Putting predators back into behavioral predator-prey interactions. Trends in Ecology & Evolution, 17(2), 70.
MacArthur, R. H., & Pianka, E. R. (1966). On optimal use of a patchy environment. American Naturalist, 100, 603-609.
Mottley, K., & Giraldeau, L. A. (2000). Experimental evidence that group foragers can converge on predicted producer-scrounger equilibria. Anim Behav, 60(3), 341-350.
Pianka, E. R. (2000). Evolutionary ecology (6th ed.). San Francisco, Calif.: Benjamin Cummings.
Pirolli, P., & Card, S. (1999). Information Foraging. Psychological Review, 106(4), 643.
Sih, A., & Christensen, B. (2001). Optimal diet theory: when does it work, and when and why does it fail? Animal Behaviour, 61(2), 379.
Smith, A. ([1759] 2002). The theory of moral sentiments. Cambridge, U.K. ; New York: Cambridge University Press.
Thomas, R. J. (1999). Two tests of a stochastic dynamic programming model of daily singing routines in birds. Anim Behav, 57(2), 277-284.
Vickery, W. L., Giraldeau, L.-A., Templeton, J. J., Kramer, D. L., & Chapman, C. A. (1991). Producers, Scroungers, and Group Foraging. American Naturalist, 137(6), 847-863.



9/16/07

Natural Irrationality. How judgement and decision-making can go wrong

Lifehack has an interesting post about "7 Stupid Thinking Errors You Probably Make". Readers of this blog may already be familiar with these (confirmation bias, recency effects, etc.), so here is the full list from Wikipedia:

  • Bandwagon effect — the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink, herd behaviour, and manias.
  • Base rate fallacy
  • Bias blind spot — the tendency not to compensate for one's own cognitive biases.
  • Choice-supportive bias — the tendency to remember one's choices as better than they actually were.
  • Confirmation bias — the tendency to search for or interpret information in a way that confirms one's preconceptions.
  • Congruence bias — the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
  • Contrast effect — the enhancement or diminishment of a weight or other measurement when compared with a recently observed contrasting object.
  • Déformation professionnelle — the tendency to look at things according to the conventions of one's own profession, forgetting any broader point of view.
  • Endowment effect — "the fact that people often demand much more to give up an object than they would be willing to pay to acquire it".
  • Extreme aversion — the tendency to avoid extremes, being more likely to choose an option if it is the intermediate choice.
  • Focusing effect — prediction bias occurring when people place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome.
  • Framing — by using a too narrow approach or description of the situation or issue.
  • Hyperbolic discounting — the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, the closer to the present both payoffs are.
  • Illusion of control — the tendency for human beings to believe they can control or at least influence outcomes that they clearly cannot.
  • Impact bias — the tendency for people to overestimate the length or the intensity of the impact of future feeling states.
  • Information bias — the tendency to seek information even when it cannot affect action.
  • Irrational escalation — the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
  • Loss aversion — "the disutility of giving up an object is greater than the utility associated with acquiring it".(see also sunk cost effects and Endowment effect).
  • Mere exposure effect — the tendency for people to express undue liking for things merely because they are familiar with them.
  • Need for closure — the need to reach a verdict in important matters; to have an answer and to escape the feeling of doubt and uncertainty. The personal context (time or social pressure) might increase this bias.
  • Neglect of probability — the tendency to completely disregard probability when making a decision under uncertainty.
  • Omission bias — The tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).
  • Outcome bias — the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
  • Planning fallacy — the tendency to underestimate task-completion times.
  • Post-purchase rationalization — the tendency to persuade oneself through rational argument that a purchase was a good value.
  • Pseudocertainty effect — the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.
  • Reactance - the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice.
  • Selective perception — the tendency for expectations to affect perception.
  • Status quo bias — the tendency for people to like things to stay relatively the same (see also Loss aversion and Endowment effect).
  • Unit bias — the tendency to want to finish a given unit of a task or an item, with strong effects on the consumption of food in particular.
  • Von Restorff effect — the tendency for an item that "stands out like a sore thumb" to be more likely to be remembered than other items.
  • Zero-risk bias — preference for reducing a small risk to zero over a greater reduction in a larger risk.
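Some of these biases have a simple formal core. Hyperbolic discounting, for instance, is standardly modeled as V = A / (1 + kD), where A is the amount, D the delay and k a fitted impulsivity parameter (the value 0.1 per day below is purely illustrative). Unlike exponential discounting, it predicts preference reversals as rewards draw near:

```python
def hyperbolic_value(amount, delay, k=0.1):
    """Subjective value of `amount` received after `delay` days."""
    return amount / (1 + k * delay)

# $50 now beats $100 in 30 days (50.0 vs 25.0)...
near_small = hyperbolic_value(50, 0)
near_large = hyperbolic_value(100, 30)

# ...but add 300 days to both options and the preference reverses
# (about 1.61 vs 2.94), even though the 30-day gap is unchanged.
far_small = hyperbolic_value(50, 300)
far_large = hyperbolic_value(100, 330)
```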