Natural Rationality | decision-making in the economy of nature

10/29/07

Sciencehack

Another way to sort the wheat from the chaff: Sciencehack, a "scientific YouTube" where science videos are selected and tagged.

See for instance:



BPR3 and a Special Issue of Science on Decision-Making

BPR3 refers to Bloggers for Peer-Reviewed Research Reporting; bloggers who write "serious academic blog posts about peer-reviewed research" are invited to identify those posts by displaying an icon. The project "came about because several academic bloggers in different fields saw the need to distinguish their "serious" writing from news, politics, family, bagpipes, and so on." See instructions at http://bpr3.org. The idea is to have a means to sort the wheat from the chaff, and also to aggregate those posts.

Also, a great special issue of Science on Decision-making:

  • Anterior Prefrontal Function and the Limits of Human Decision-Making
    Etienne Koechlin, Alexandre Hyafil
  • Social Decision-Making: Insights from Game Theory and Neuroscience
    Alan G. Sanfey
  • Decision-Making Dysfunctions in Psychiatry—Altered Homeostatic Processing?
    Martin P. Paulus
  • Decision Theory: What “Should” the Nervous System Do?
    Konrad Körding



Vilayanur Ramachandran: A journey to the center of your mind

A new TED talk by Vilayanur Ramachandran:

http://www.ted.com/talks/view/id/184




In a wide-ranging talk, Vilayanur Ramachandran explores how brain damage can reveal the connection between the internal structures of the brain and the corresponding functions of the mind. He talks about phantom limb pain, synesthesia (when people hear color or smell sounds), and the Capgras delusion, when brain-damaged people believe their closest friends and family have been replaced with imposters.



10/24/07

Stich on Morality and Cognition

Stephen Stich, one of the most experimentally oriented philosophers of our day, recently gave a series of talks in Paris, entitled "Moral Theory Meets Cognitive Science: How the Cognitive Science Can Transform Traditional Debates". You can watch the videos of the 4 talks online:



10/23/07

Kinds of Philosophical Moral Psychologies

Usually, moral philosophy oscillates between Hume and Kant: emotional utilitarianism and rational deontologism. Hauser, in Moral Minds, adds another perspective, a "Rawlsian" one. I found a nice graphical depiction of these models:




"event perception triggers an analysis of the causal and intentional properties underlying the relevant actions and their consequences. This analysis triggers, in turn, a moral judgment that will, most likely, trigger the systems of emotion and conscious reasoning. The single most important difference between the Rawlsian model and the other two is that emotions and conscious reasoning follow from the moral judgment as opposed to being causally responsible for them."

- Hauser, M. D. (2006). The liver and the moral organ. Soc Cogn Affect Neurosci, 1(3), 214-220.


Also, in "The Case for Nietzschean Moral Psychology", Knobe and Leiter contrast Aristotle, Kant and Nietzsche on moral psychology. It turns out that humans are more Nietzschean than we thought!

Does anyone know another great philosopher who gave his or her name to a moral psychology?



Exploration, Exploitation and Rationality

A little introduction to what I consider to be the Mother of All Problems: the exploration-exploitation trade-off.

Let's first draw a distinction between first- and second-order uncertainty. Knowing that a source of reward (or money, or food, etc.) will pay off on 70% of occasions is uncertain knowledge because one does not know for sure what the next outcome will be (one only knows that there is a 70% probability that it is a reward). In some situations, however, uncertainty can be radical, or second-order: even the probabilities are unknown. Under radical uncertainty, cognitive agents must learn reward probabilities. Learners must, at the same time, explore their environment in order to gather information about its payoff structure and exploit this information to obtain reward. They face a deep problem—known as the exploration/exploitation tradeoff—because they cannot do both at the same time: you cannot explore all the time, you cannot exploit all the time, and you must reduce exploration but cannot eliminate it. This tradeoff is usually modeled with the K-armed bandit problem.
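To make the distinction concrete, here is a tiny simulation (my own illustrative sketch, not drawn from any of the works cited; the 0.7 payoff rate and trial count are arbitrary). An agent samples an arm whose true reward probability is hidden from it; as samples accumulate, a Beta posterior over that probability narrows, turning second-order uncertainty into ordinary first-order risk:

```python
import random

random.seed(0)

TRUE_P = 0.7               # the arm's real reward probability, hidden from the agent
successes = failures = 0

for trial in range(1000):  # each pull reveals one outcome
    if random.random() < TRUE_P:
        successes += 1
    else:
        failures += 1

# Beta(successes + 1, failures + 1) posterior over the unknown probability
a, b = successes + 1, failures + 1
mean = a / (a + b)
sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
print(f"estimated p = {mean:.3f} +/- {sd:.3f}")  # the estimate sharpens as trials accumulate
```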

Suppose an agent has n coins to spend in a slot machine with K arms (here K=2, and we will suppose that one arm is high-paying and the other low-paying, although the agent does not know that). The only way the agent can access the arms' rates of payment – and obtain reward – is by pulling them. Hence she must find an optimal tradeoff when spending her coins: trying the other arm just to see how it pays, or staying with the one that has already paid? The goal is not only to maximize reward, but to maximize reward while obtaining information about the arms' rates. The process can go wrong in two different ways: the player can be the victim of a false negative (a low-paying sequence from the high-paying arm) or of a false positive (a high-paying sequence from the low-paying arm).
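This setup can be sketched in a few lines. The simulation below (an illustration of mine; the 0.8/0.2 payoff rates and the epsilon value are arbitrary assumptions) uses the simple epsilon-greedy heuristic: mostly pull the arm with the best observed rate, but keep exploring a fraction of the time:

```python
import random

random.seed(1)

ARM_PROBS = [0.8, 0.2]   # high- and low-paying arms, unknown to the agent
EPSILON = 0.1            # fraction of coins spent exploring
N_COINS = 10_000

pulls = [0, 0]
wins = [0, 0]

for _ in range(N_COINS):
    if random.random() < EPSILON or 0 in pulls:
        arm = random.randrange(2)  # explore: pick an arm at random
    else:
        # exploit: pick the arm with the best observed payoff rate
        arm = max((0, 1), key=lambda a: wins[a] / pulls[a])
    pulls[arm] += 1
    if random.random() < ARM_PROBS[arm]:
        wins[arm] += 1

print("pulls per arm:", pulls)  # most coins end up on the high-paying arm
```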

To solve this problem, the optimal solution is to compute an index for every arm, updating this index according to the arm's payoff and choosing the arm with the greatest index (Gittins, 1989). In the long run, this strategy amounts to following decision theory after a learning phase. But as soon as switching from one arm to another has a cost, as Banks & Sundaram (1994) showed, index strategies cannot converge towards an optimal solution. A huge literature in optimization theory, economics, management and machine learning addresses this problem (Kaelbling et al., 1996; Sundaram, 2003; Tackseung, 2004). Studies of humans and animals explicitly submitted to bandit problems, however, show that subjects tend to rely on the matching strategy (Estes, 1954): they match the probability of action with the probability of reward. In one study, for instance (Meyer & Shi, 1995), subjects were required to select between two icons displayed on a computer screen; after each selection, a slider bar indicated the actual amount of reward obtained. The matching strategy predicted the subjects' behavior, and the same results hold for monkeys in a similar task (Bayer & Glimcher, 2005; Morris et al., 2006).
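A quick simulation shows why matching is suboptimal on a stationary problem (my own back-of-the-envelope sketch, not a model of the cited experiments; the 0.7/0.3 rates are assumed for illustration). Matching allocates choices in proportion to reward rates, so it earns about 0.7² + 0.3² ≈ 0.58 per trial, whereas always choosing the better arm earns 0.7:

```python
import random

random.seed(2)

P = [0.7, 0.3]      # the two arms' reward probabilities (taken as known here)
N = 100_000

# Maximizing: always choose the better arm.
max_reward = sum(random.random() < P[0] for _ in range(N))

# Matching: choose each arm with probability equal to its relative payoff rate.
match_reward = 0
for _ in range(N):
    arm = 0 if random.random() < P[0] / (P[0] + P[1]) else 1
    if random.random() < P[arm]:
        match_reward += 1

print(f"maximizing: {max_reward / N:.3f} per trial, matching: {match_reward / N:.3f}")
```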

The important thing about this trade-off is its lack of a priori solutions. Decision theory works well when we know the probabilities and the utilities, but what can we do when we don't have them? We learn. This is the heart of natural rationality: crafting solutions—under radical uncertainty and in non-stationary environments—for problems that may not have an optimal solution. Going from second- to first-order uncertainty.



See also:


References

  • Banks, J. S., & Sundaram, R. K. (1994). Switching Costs and the Gittins Index. Econometrica: Journal of the Econometric Society, 62(3), 687-694.
  • Bayer, H. M., & Glimcher, P. W. (2005). Midbrain Dopamine Neurons Encode a Quantitative Reward Prediction Error Signal. Neuron, 47(1), 129.
  • Estes, W. K. (1954). Individual Behavior in Uncertain Situations: An Interpretation in Terms of Statistical Association Theory. In R. M. Thrall, C. H. Coombs & R. L. Davies (Eds.), Decision Processes (pp. 127-137). New York: Wiley.
  • Gittins, J. C. (1989). Multi-Armed Bandit Allocation Indices. New York: Wiley.
  • Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4, 237-285.
  • Meyer, R. J., & Shi, Y. (1995). Sequential Choice under Ambiguity: Intuitive Solutions to the Armed-Bandit Problem. Management Science, 41(5), 817-834.
  • Morris, G., Nevet, A., Arkadir, D., Vaadia, E., & Bergman, H. (2006). Midbrain Dopamine Neurons Encode Decisions for Future Action. Nat Neurosci, 9(8), 1057-1063.
  • Sundaram, R. K. (2003). Generalized Bandit Problems: Working Paper, Stern School of Business.
  • Tackseung, J. (2004). A Survey on the Bandit Problem with Switching Costs. De Economist, V152(4), 513-541.
  • Yen, G., Yang, F., & Hickey, T. (2002). Coordination of Exploration and Exploitation in a Dynamic Environment. International Journal of Smart Engineering System Design, 4(3), 177-182.



10/21/07

Why Evolutionary Psychology?

An undergrad recently contacted me and asked me how I got interested in evolutionary psychology. Here is my answer, if that can be of any use to anybody. The ideas may apply to academic research more generally.

As an undergrad philosophy student, my interest in evolutionary psychology was triggered by one of my teachers, a great scholar who was able to integrate biology, psychology, philosophy, neuroscience, anthropology, etc., in his thinking. That is really what drew me into the field: someone who showed me how any question about cognition and rationality can be approached from a Darwinian perspective. Reading Dennett's "Darwin's Dangerous Idea" had the same effect on me. So to be honest, at first, it was because it was just so cool to see things this way, and exciting to know that research might involve knowledge from many different fields. But if that was the cause, it was not the reason. The reason, I think, was a deep commitment to materialism (the world is made of matter, period), naturalism (all facts are natural facts; there is no other realm of facts) and experimentalism in general (we should back any claim with scientific, experimental data). All that naturally leads to an evolutionary approach to everything. At your age (god I sound old when I say that!), I was determined to do philosophy; to do my BA, MA, PhD, postdoc, and everything that was necessary to, one day, have a job in a university (I am not quite there, as I am still a postdoc, but I hope one day I'll get one). So evolutionary psychology and other related fields appeared to me as a great opportunity to develop my "academic niche", to have my own specialty, and to do something interdisciplinary. So if I can give you one piece of advice, it will be a very simple one: do something you really like. If just thinking about evolutionary psychology evokes a lot of ideas and questions, if you are thrilled by every paper or book you read about it, go! It's easy to get through a thesis and all the other academic stuff when you like it.
My second piece of advice: if you really like it, then read everything about it, from the classics to ongoing research to popular books; explore the connections between evolutionary psychology and other fields (how is it related to economics? neuroscience? sociology?). Browse, download, print everything you can. Use RSS to syndicate important journals. Find those journals (Evolution and Human Behavior, Human Nature, etc.). Don't forget the holy trinity: Nature, Science and PNAS. Be up-to-date and aware of the field's common knowledge.

If you like it, it will be easy for you to master the field. Try to find a supervisor who knows evolutionary psychology and has already published in the field. Make contact with other people, or students, interested in these topics. Try organizing reading groups, attend conferences, become a member of scientific societies, etc. Don't miss encyclopedia entries.

What do I like best about this field? Everything. What do I like least? Nothing. Except maybe people who will try to show you that evolution is "just a theory", that evolutionary psychology is an evil attempt to eliminate "meaning" from our lives, blah blah blah; all that stuff is sometimes annoying. Don't take it too seriously, but you may sometimes consider arguing with them: it is always useful to test the foundations of your scientific convictions.

hope this will help,
B.



10/20/07

Gains and Losses in the Brain

A new lesion study suggests a dissociation between the neural processing of gains and losses:

We found that individuals with lesions to the amygdala, an area responsible for processing emotional responses, displayed impaired decision making when considering potential gains, but not when considering potential losses. In contrast, patients with damage to the ventromedial prefrontal cortex, an area responsible for integrating cognitive and emotional information, showed deficits in both domains. We argue that this dissociation provides evidence that adaptive decision making for risks involving potential losses may be more difficult to disrupt than adaptive decision making for risks involving potential gains


Weller, J. A., Levin, I. P., Shiv, B., & Bechara, A. (2007). Neural Correlates of Adaptive Decision Making for Risky Gains and Losses. Psychological Science, 18(11), 958-964.



Rachlin On Rationality and Addiction

Blogging on Peer-Reviewed Research
Psychologist H. Rachlin (author of The Science of Self-Control) proposes an excellent analysis of the relationship between rationality and addiction. On the one hand, addicts are rational utility-maximizers: they use substances they like. On the other hand, destroying your life with drugs is clearly irrational. He first suggests considering rationality as "overt behavioral patterns rather than as a smoothly operating logic mechanism in the head" or "as consistency in choice" (the economic notion of rationality). He conceives rationality as "a pattern of predicting your own future behavior and acting upon those predictions to maximize reinforcement in the long run." Addicts are irrational "to the extent that they fail to make such predictions and to take such actions."

References
  • Rachlin, H. (2007). In What Sense Are Addicts Irrational? Drug and Alcohol Dependence, 90(Suppl. 1), S92-S99.








10/19/07

Mindreading and Folk-psychology: A Conceptual Clarification

[An analysis of the concepts of mindreading and folk-psychology; comments welcome!]

Blogging on Peer-Reviewed Research
Human abilities in attributing intentions, predicting actions and spontaneously making sense of each other are the most complex in the animal world (Byrne & Whiten, 1988; Whiten & Byrne, 1997). Some of these abilities, however, are not unique to linguistic humans. Take for instance what Stueber (2006) calls basic empathy, the “quasi-perceptual mechanism that allow us to directly recognize what another person is doing or feeling” (Stueber, 2006, p. 147). Basic empathy is involved, for instance, in face-based emotional recognition: from a particular pattern of facial expression, we intuitively attribute an emotional state to the agent. Basic empathy can only infer feelings and actions (‘she is angry’), not cognitive mental states such as the reasons for being angry. This mechanism is thought to be implemented, in large part, by the so-called “mirror neurons”, i.e., structures whose activity is elicited both by action production and by action understanding. It is suggested that these two processes, at least for simple actions, share the same neural machinery, and thus that basic interpretation requires a capacity to entertain sensorimotor simulations of actions (Gallese, 2007).
Mindreading (as in basic empathy) is thus more basic than folk-psychology, since it does not involve a conceptual framework. Fifteen-month-old infants have an intuitive grasp, for instance, of false beliefs: they predict that experimenters will look for a toy where they (the experimenters) should, given their knowledge of the situation, believe it is (Onishi & Baillargeon, 2005). Nonhuman primates (cotton-top tamarins, rhesus macaques, and chimpanzees; hereafter, ‘primates’) can also distinguish between goal-directed and accidental actions: they inspect containers that experimenters intentionally reach for and grasp, but not containers merely flopped (palm facing upwards) by the experimenter (Wood et al., 2007). Hence, even without linguistic resources, infants and some primates may be able to superficially read minds, and this is what allows them to perform basic helping, such as helping an experimenter unable to reach a stick or a pencil by picking up the object and handing it to the experimenter (Warneken et al., 2007).
Contrary to other primates, our mindreading abilities are supplemented by the rich cognitive resources afforded by a public language, such as a vocabulary for mental states and an ecological niche populated with other linguistic mind-readers that provides cultural knowledge, feedback and templates for imitation (Sterelny, 2003, chapter 11). These resources allow humans to go beyond basic empathy: we can know that Anna is angry, but we can also know why. We are also able to infer mental states from meager or abstract stimuli. After a stroke that left him with locked-in syndrome (a condition in which a patient is aware and awake yet almost completely paralyzed), French journalist Jean-Dominique Bauby (1997) was still able to communicate with others by blinking his left eyelid. Although we cannot infer his mental states from his facial expression, body language or the tone of his voice, it is relatively easy to interpret him when he wrote—thanks to a special device connected to his eyelid—that he “would have been pleased to trade [his] yellow nylon hospital gown for a plaid shirt, old pants and shapeless sweater” (1997, p. 8).
Social interaction is thus made possible both by fast, intuitive sensorimotor capacities and by a rich theoretical background. We may infer an emotional state from facial expressions, but we need a structured network of mentalistic concepts—desires, beliefs, intentions, motivations, reasons—to infer that “Anna is angry because Bob wants to make fun of Charles”. In this sense, linguistic humans are folk-psychologists while nonhuman primates and babies are not: the former, but not the latter, apply intuitive theories to predict and explain actions. Hence a distinction is drawn here between mindreading, viz. the cognitive mechanisms that process social information, and folk-psychology, viz. the network of mentalistic concepts and their inferential relationships. Mindreading is possible without, but largely augmented by, folk-psychology, just as nonhuman animals can have a grasp of folk-physical notions like ‘object’ without having the linguistic resources to describe objects. Although many cognitive processes are possible without language, nothing close to a theory—a structured set of propositions—is possible without it.
Folk-psychology, one might say, is the ‘language game’ of social cognition. More precisely, I would like to add, folk-psychology—as a commonsense theory—makes mindreading inferences explicit. If we ran the experiments by Wood et al. discussed above, but this time with humans instead of primates, we could easily ask subjects to say whether the experimenter’s movement is intentional or not. Experiments with questionnaires show that humans draw a clear distinction between intentional and unintentional behavior and show a high level of agreement about whether an action is intentional or not (Malle, 2007; Malle & Knobe, 1997). Assessing the intentionality of a movement involves the ability to draw contextual implications, i.e. conclusions drawn from an input and a context, where neither the input nor the context alone is sufficient for drawing the conclusion (Wilson & Sperber, 2004). Turning a lamp on and seeing that no light comes from the lamp, I may infer that the light-bulb is burnt out. The light-bulb problem is inferred from the input (looking at the lamp) and the context (my attempt to turn the light on). Likewise, looking at the experimenter’s arm in the context of the experiment allows one to infer the intentionality of the movement, thanks to sensorimotor and simulation capacities. Humans, but not primates, can also be asked why they think the action was intentional. They can justify these contextual implications. They could say about the experimenter that “she wanted to reach the first container” or that “she believed something was in the first container”. In doing so they make explicit the structure of their implicit social-cognitive contextual implications, and this, I suggest, is one of the epistemic functions of folk-psychology. In other words, the folk-psychological theory expresses our endorsements of contextual implications in social interpretation.
Babies and primates have basic interpretative skills and may draw certain contextual inferences but they cannot justify them. By playing the game of giving and asking for reasons, we—in the linguistic evolution of our species and in our individual cognitive development—made explicit the inferential norms of social cognition, i.e., the folk concept of intentionality.
The methodology of Malle and other social psychologists who investigate the folk concept of intentionality endorses a quasi-expressivist account of conceptual content. Their theory supposes three ‘layers’: a conceptual framework, a set of psychological processes and linguistic forms. It suggests that we all share a conceptual framework, akin to a “deep grammar” for social explanations. This framework is then expressed in linguistic forms by a set of psychological processes that govern the construction of explanations. Their studies suggest that the conceptual framework, its psychological processing and its linguistic expression are relatively similar from one individual to another (Knobe, 2006; Malle, 2001, 2007). First, as discussed above, almost everybody agrees on whether an action is intentional or not, and people rely preferentially on reasons to explain the former and causes to explain the latter. About 70% of intentional actions are explained by primary reasons: beliefs, desires, but also valuings (e.g. “she got home late because she liked the show”). When primary reasons are not evoked, subjects use either a causal-history-of-reasons explanation or an enabling-factor explanation. The first explains why a person decided to do X not by citing her beliefs/desires, but by citing factors that bring about reasons to act: for instance, “she comes from a respectful culture”. Enabling-factor explanations cite—after the action is performed—the conditions that made its performance possible, without referring to the agent’s intentions or motivations (e.g. “she had two weeks to prepare the talk”).
In sum, the linguistic framework of folk-psychology as we use it in everyday contexts makes explicit a system of interpretative inferences based on reasons, causal reason histories and enabling factors, organized as in the following graphic (from Malle 2007):



References


  • Bauby, J.-D. (1997). The Diving Bell and the Butterfly (1st U.S. ed.). New York: A.A. Knopf : Distributed by Random House.
  • Byrne, R. W., & Whiten, A. (1988). Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford: Clarendon Press; New York: Oxford University Press.
  • Gallese, V. (2007). Before and Below ‘Theory of Mind’: Embodied Simulation and the Neural Correlates of Social Cognition. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 659-669.
  • Knobe, J. (2006). The Concept of Intentional Action: A Case Study in the Uses of Folk Psychology. Philosophical Studies, 130(2), 203-231.
  • Malle, B. F. (2001). Folk Explanations of Intentional Action. In B. F. Malle, L. J. Moses & D. A. Baldwin (Eds.), Intentions and Intentionality : Foundations of Social Cognition (pp. 265-286). Cambridge, Mass.: MIT Press.
  • Malle, B. F. (2007). Attributions as Behavior Explanations: Toward a New Theory. In D. Chadee & J. Hunter (Eds.), Current Themes and Perspectives in Social Psychology (pp. 3-26). St. Augustine, Trinidad: SOCS, The University of the West Indies.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Onishi, K. H., & Baillargeon, R. (2005). Do 15-Month-Old Infants Understand False Beliefs? Science, 308(5719), 255-258.
  • Sterelny, K. (2003). Thought in a Hostile World : The Evolution of Human Cognition. Malden, MA ; Oxford: Blackwell Pub.
  • Stueber, K. R. (2006). Rediscovering Empathy : Agency, Folk Psychology, and the Human Sciences. Cambridge, Mass.: MIT Press.
  • Warneken, F., Hare, B., Melis, A. P., Hanus, D., & Tomasello, M. (2007). Spontaneous Altruism by Chimpanzees and Young Children. PLoS Biology, 5(7), e184.
  • Whiten, A., & Byrne, R. W. (1997). Machiavellian Intelligence II: Extensions and Evaluations. Cambridge; New York: Cambridge University Press.
  • Wilson, D., & Sperber, D. (2004). Relevance Theory. In G. Ward & L. Horn (Eds.), Handbook of Pragmatics (pp. 607-632). Oxford: Blackwell.
  • Wood, J. N., Glynn, D. D., Phillips, B. C., & Hauser, M. D. (2007). The Perception of Rational, Goal-Directed Action in Nonhuman Primates. Science, 317(5843), 1402-1405.





10/18/07

What economists think about psychologists, what psychologists think about economists

;-)

What economists think about psychologists:
1. Psychologists only study rats, pigeons, college freshman, and crazy people.
2. (Perhaps due to the above,) psychologists are not very rational.

What psychologists think about economists:
1. Economists stubbornly hold to a rational model of man(kind) that (they must know) is obviously wrong.
2. Economists can never agree about what will happen to our economy.

[From http://www.decisionsciencenews.com/?p=268]



10/16/07

[PNAS] Why Sex is Good, and The Evolutionary Psychology of Animate Perception

In this week's PNAS:

A glimpse at the evolutionary psychology of animate perception, by New, Cosmides and Tooby (famous evolutionary psychologists):
Visual attention mechanisms are known to select information to process based on current goals, personal relevance, and lower-level features. Here we present evidence that human visual attention also includes a high-level category-specialized system that monitors animals in an ongoing manner. Exposed to alternations between complex natural scenes and duplicates with a single change (a change-detection paradigm), subjects are substantially faster and more accurate at detecting changes in animals relative to changes in all tested categories of inanimate objects, even vehicles, which they have been trained for years to monitor for sudden life-or-death changes in trajectory. This animate monitoring bias could not be accounted for by differences in lower-level visual characteristics, how interesting the target objects were, experience, or expertise, implicating mechanisms that evolved to direct attention differentially to objects by virtue of their membership in ancestrally important categories, regardless of their current utility.

And the reason why sex makes people feel good: it's all oxytocin! Waldherr and Neumann showed that "sexual activity and mating with a receptive female reduce the level of anxiety and increase risk-taking behavior in male rats for several hours" (!) because "oxytocin is released within the brain of male rats during mating with a receptive female".



The Evolution of Language(s)

Blogging on Peer-Reviewed Research

A lot of recent stuff on the evolution of language (the linguistic faculty) and of languages ("tongues").

There is a good article in Seed Magazine ("Science is Culture"; I like their slogan !) about recent research on the evolution of language.

Language is an innate faculty, rather than a learned behavior. This idea was the primary insight of the Chomskyan revolution that helped found the field of modern linguistics in the late 1950s, and its implications are both simple and profound. If innate, language must be genetic. It is hardwired within us from conception and evolved from structures and genes with analogues existing throughout the animal kingdom. In a sense, language is universal. Yet we humans are the only species with the ability for what may rightly be called language and, moreover, we have specific linguistic behaviors that seem to have appeared only within the past 200,000 years—an eye-blink of evolution.

Why are humans the only species to have suddenly hit upon the remarkable possibilities of language? If speech is a product of our DNA, then surely other species also have some of the same genes required for language because of our basic, shared biochemistry. One of our closest relatives should have developed something that is akin to language, or another species should have happened upon its attendant advantages through parallel evolution.

See also:
And Michael C. Corballis reviews two books on the motor origins of language here:
  • Talking Hands: What Sign Language Reveals about the Mind. Margalit Fox. Simon and Schuster, 2007.
  • The Gestural Origin of Language. David F. Armstrong and Sherman E. Wilcox. Oxford University Press, 2007
In an older article, he explains his gestural theory of language.



10/15/07

Blog Action Day: The Economies of Nature



[This post is my participation in Blog Action Day, a day when “bloggers around the web will unite to put a single important issue on everyone’s mind - the environment. Every blogger will post about the environment in their own way and relating to their own topic”. This is Natural Rationality’s perspective on environmental issues]

In previous posts, I discussed the Darwinian concept of an “Economy of Nature”. One of the purposes of this discussion is to promote a conception of economics as a general science of natural and political (human) economies. Agents in the two domains are different—money and stock markets are human institutions—but they share a common principle, namely the maximization of value: whether it’s fitness, utility or reward, agents in an economy strive to optimize. Behavioral ecology and decision neuroscience show, for instance, that decision-making is a natural, and common, feature of the living world. The difference between agents in natural and political economies should not be a problem: after all, zoology and botany study different forms of life, yet both belong to biology.

This conception has not only theoretical consequences. From a political point of view, if the human economy is one of the economies of nature, the fact that it is the most sophisticated does not entitle us to cause damage to animal economies, such as:
- half of the Earth's surface transformed by human action
- the concentration of carbon dioxide in the atmosphere increased by 30% since the beginning of the industrial revolution
- more nitrogen produced by humanity than by all natural sources combined
- half of the surface fresh water used by humanity
- almost one out of four bird species threatened with extinction (Vitousek et al., 1997).

Such damages are usually construed as what economists call "externalities", i.e. costs that affect a third party (one external to a contract or a transaction). The third party’s utility is increased or decreased without this change being reflected in market prices. Yet in a perspective where economies and ecologies are similar, our harmful effect on other forms of life should be understood not as a mere externality but as economic damage: we significantly reduce the ability of metazoan agents to carry out their economic activity. Our actions must then be judged both ethically and economically, and we cannot consider animal life only as a third party external to our markets. All the economies of nature are connected, and we behave as if ours were in a prisoner's dilemma, where it is more advantageous for each agent not to cooperate, even though we would collectively benefit from cooperation. Experiments have shown that individual agents tend to cooperate in prisoner's dilemmas, but the logic of organizations does not reflect that of individuals. Organizations, but not individuals, behave like Homo economicus (Cox & Hayne, 2006). In this prisoner's dilemma, our private and public organizations tend to prefer defection (the Nash equilibrium), at the expense of environmental concerns—what Hardin (1968) famously described as the Tragedy of the Commons.
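The logic of that dilemma is easy to verify with a toy payoff matrix (standard textbook values, not figures from the cited studies): defecting is each player's best reply whatever the other does, yet mutual defection leaves both worse off than mutual cooperation.

```python
# Row player's payoffs in a one-shot prisoner's dilemma (illustrative values).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection: the Nash equilibrium
}

def best_reply(opponent: str) -> str:
    """The move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda me: PAYOFF[(me, opponent)])

# Defection is the best reply to either move...
assert best_reply("C") == "D" and best_reply("D") == "D"
# ...yet mutual cooperation pays each player more than mutual defection.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
print("unilateral incentives point to defect/defect, though cooperate/cooperate pays both players more")
```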
What can be done? There will be no magic solution. Firms are firms, and will care about the environment only if they can profit from it. We won’t change the form of markets, but maybe we can change their content. We can’t expect everybody to pay a higher price for green products when “regular” products are less expensive. It is rational to pay less, especially when you’re not rich. But if we can market green products at a competitive price, people will buy them. If there is a financial reward in developing green products, firms will develop them. If green firms represent a capital gain, shareholders will be there. But this will require global political intervention: don’t expect the invisible hand to get greener.



References
  • Cox, J., & Hayne, S. (2006). Barking up the Right Tree: Are Small Groups Rational Agents? Experimental Economics, 9(3), 209-222.
  • Hardin, G. (1968). The Tragedy of the Commons. Science, 162(3859), 1243-1248.
  • Vitousek, P. M., Mooney, H. A., Lubchenco, J., & Melillo, J. M. (1997). Human Domination of Earth's Ecosystems. Science, 277(5325), 494-499.



10/12/07

Self-control is a Scarce Resource

We all have a limited quantity of energy to allot to self-control:

In a recent study, Michael Inzlicht of the University of Toronto Scarborough and colleague Jennifer N. Gutsell offer an account of what is happening in the brain when our vices get the better of us.

Inzlicht and Gutsell asked participants to suppress their emotions while watching an upsetting movie. The idea was to deplete their resources for self-control. The participants reported their ability to suppress their feelings on a scale from one to nine. Then, they completed a Stroop task, which involves naming the color of printed words (i.e. saying red when reading the word “green” in red font), yet another task that requires a significant amount of self-control.

The researchers found that those who suppressed their emotions performed worse on the Stroop task, indicating that they had used up their resources for self-control while holding back their tears during the film.

An EEG, performed during the Stroop task, confirmed these results. Normally, when a person deviates from their goals (in this case, wanting to read the word, not the color of the font), increased brain activity occurs in a part of the frontal lobe called the anterior cingulate cortex, which alerts the person that they are off-track. The researchers found weaker activity occurring in this brain region during the Stroop task in those who had suppressed their feelings. In other words, after engaging in one act of self-control this brain system seems to fail during the next act.
http://www.psychologicalscience.org/media/releases/2007/inzlicht.cfm
(via Cognews)
  • Inzlicht, M., & Gutsell, J. N. (in press). Running on empty: Neural signals for self-control failure. Psychological Science. (preprint)




The Dictator Game and Radiohead.


As you may know, Radiohead recently announced that they would let fans decide what to pay for their new album, In Rainbows. The situation is thus similar (though not identical) to a Dictator Game: player A splits a "pie" between herself and player B, and B accepts whatever A offers. Thus, contrary to the Ultimatum Game, B's decisions or reactions have no influence on A's choice behavior. Radiohead fans were in a position similar to A's. If we assume that they framed the situation as a purchase in which they choose how much of the CD price to split between themselves and the band, and given that a CD is typically priced at £10 (roughly 20 US$), then the fans were choosing how to split £10 between themselves and Radiohead. Experimental studies of the Dictator Game typically show that about 70% of subjects (in A's role) transfer some amount to player B, with an average transfer of 24% of the initial endowment (Forsythe et al., 1994). Hence, if these results generalize to the "buy Radiohead's album" game, they would suggest that about 70% of those who download the album would pay an average of £2.40, while 30% would pay nothing. An online survey (by The Times) showed that this prediction is not too far from the truth: a third of the fans paid nothing, and most paid an average of £4.
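The back-of-the-envelope prediction above can be reproduced in a few lines (the 70% / 24% figures are the Forsythe et al. averages; the £10 reference price is the assumption made in this post):

```python
# Dictator-game prediction for the In Rainbows "pay what you want" release.
# 70% paying / 24% mean transfer: Forsythe et al. (1994) averages.
# The £10 reference price is this post's assumption, not a measured figure.
endowment = 10.0        # assumed album price, in pounds
share_paying = 0.70     # fraction of "dictators" who transfer anything
mean_transfer = 0.24    # mean transfer as a fraction of the endowment

predicted_price = endowment * mean_transfer            # £2.40 per paying fan
predicted_revenue_per_fan = share_paying * predicted_price  # ~£1.68 overall

assert abs(predicted_price - 2.40) < 1e-9
```

The survey figure of £4 among payers is well above this £2.40 prediction, which fits the idea that fans framed the choice as more than an anonymous one-shot split.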

An internet survey of 3,000 people who downloaded the album found that most paid an average of £4, although there was a hardcore of 67 fans who thought that the record was worth more than £10 and a further 12 who claimed to have paid more than £40.

Radiohead could have earned more money with a simple trick: displaying a pair of eyes somewhere on the website. Bateson et al. discovered that people contribute three times more to an honesty box for coffee when a pair of eyes is displayed than when a picture of flowers is (Bateson et al., 2006).

Also, when a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley & Fessler, 2005).

The New York Times has a good piece on fans' motivations to pay, with an interview of George Loewenstein: Radiohead Fans, Guided by Conscience (and Budget).


References
  • Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of Being Watched Enhance Cooperation in a Real-World Setting. Biology Letters, 2(3), 412-414.
  • Forsythe, R., Horowitz, J. L., Savin, N. E., & Sefton, M. (1994). Fairness in Simple Bargaining Experiments. Games and Economic Behavior, 6(3), 347–369.
  • Haley, K., & Fessler, D. (2005). Nobody’s Watching? Subtle Cues Affect Generosity in an Anonymous Economic Game. Evolution and Human Behavior, 26(3), 245-256.
  • Hoffman, E., Mc Cabe, K., Shachat, K., & Smith, V. (1994). Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior, 7, 346–380.
  • Leeds, J. (2007). Radiohead to Let Fans Decide What to Pay for Its New Album. The New York Times.
  • How much is Radiohead’s online album worth? Nothing at all, say a third of fans. The Times.
  • http://www.whatpricedidyouchose.com



A roundup of the most popular posts

According to the stats, the 5 most popular posts on Natural Rationality are:

  1. Strong reciprocity, altruism and egoism
  2. What is Wrong with the Psychology of Decision-Making?
  3. My brain has a politics of its own: neuropolitic musing on values and signal detection
  4. Rational performance and behavioral ecology
  5. Natural Rationality for Newbies

Enjoy!



10/11/07

Resources on law, neuroscience, and "neurolaw"

From http://lawandneuroscienceproject.org/resources via Neuroethics and Law Blog.

Readings on Law and Neuroscience

Bibliography on Law and Biology

Blog on Neuroethics and Law



10/10/07

Fairness and Schizophrenia in the Ultimatum

For the first time, a study looks at schizophrenic patients' behavior in the Ultimatum Game. Other studies of schizophrenic choice behavior revealed that they have difficulty with decisions under ambiguity and uncertainty (Lee et al., 2007), have a slight preference for immediate over long-term rewards (Heerey et al., 2007), exhibit "strategic stiffness" (sticking to a strategy in sequential decision-making without integrating the outcomes of past choices; Kim et al., 2007), and perform worse in the Iowa Gambling Task (Sevy et al., 2007).

A research team from Israel ran an Ultimatum experiment with schizophrenic subjects (plus two control groups, one depressive, one non-clinical). Subjects had to split 20 New Israeli Shekels (NIS) (about 5 US$). Although the schizophrenic patients' Responder behavior was not different from the control groups', their Proposer behavior was: they tended to be less strategic.

With respect to offer level, offers fell into three categories: fair (10 NIS), unfair (less than 10 NIS), and hyper-fair (more than 10 NIS). Schizophrenic patients tended to make fewer 'unfair' offers and more 'hyper-fair' offers. Men were more generous than women.

According to the authors,

for schizophrenic Proposers, the possibility of dividing the money evenly was as reasonable as for healthy Proposers, whereas the option of being hyper-fair appears to be as reasonable as being unfair, in contrast to the pattern for healthy Proposers.
Agay et al. also studied the distribution of Proposer types according to their pattern of sequential decisions (how their second offer compared to their first). They identified three types:
  1. "‘Strong-strategic’ Proposers are those who adjusted their 2nd offer according to the response to their 1st offer, that is, raised their 2nd offer after their 1st one was rejected, or lowered their 2nd offer after their 1st offer was accepted.
  2. Weak-strategic’ Proposers are those who perseverated, that is, their 2nd offer was the same as their 1st offer.
  3. Finally, ‘non-strategic’ Proposers are those who unreasonably reduced their offer after a rejection, or raised their offer after an acceptance."
20% of the schizophrenic group are non-strategic, while none of the healthy subjects are.


the highest proportion of non-strategic Proposers is in the schizophrenic group
The authors do not offer much explanation for these results:

In the present framework, schizophrenic patients seemed to deal with the cognition-emotion conflict described in the fMRI study of Sanfey et al. (2003) [NOTE: the authors of the first neuroeconomics Ultimatum study] in a manner similar to that of healthy controls. However, it is important to note that the low proportion of rejections throughout the whole experiment makes this conclusion questionable.
Another study, however, shows that "siblings of patients with schizophrenia rejected unfair offers more often compared to control participants" (van ’t Wout et al., 2006, chap. 12), suggesting that Responder behavior might be, after all, different in patients with a genetic liability to schizophrenia. Yet another unresolved issue!


Reference
  • Agay, N., Kron, S., Carmel, Z., Mendlovic, S., & Levkovitz, Y. Ultimatum bargaining behavior of people affected by schizophrenia. Psychiatry Research, In Press, Corrected Proof.
  • Hamann, J., Cohen, R., Leucht, S., Busch, R., & Kissling, W. (2007). Shared decision making and long-term outcome in schizophrenia treatment. The Journal of clinical psychiatry, 68(7), 992-7.
  • Heerey, E. A., Robinson, B. M., McMahon, R. P., & Gold, J. M. (2007). Delay discounting in schizophrenia. Cognitive neuropsychiatry, 12(3), 213-21.
  • Kim, H., Lee, D., Shin, Y., & Chey, J. (2007). Impaired strategic decision making in schizophrenia. Brain Research.
  • Lee, Y., Kim, Y., Seo, E., Park, O., Jeong, S., Kim, S. H., et al. (2007). Dissociation of emotional decision-making from cognitive decision-making in chronic schizophrenia. Psychiatry research, 152(2-3), 113-20.
  • Mascha van ’t Wout, Ahmet Akdeniz, Rene S. Kahn, Andre Aleman. Vulnerability for schizophrenia and goal-directed behavior: the Ultimatum Game in relatives of patients with schizophrenia. (manuscript), from The nature of emotional abnormalities in schizophrenia: Evidence from patients and high-risk individuals / Mascha van 't Wout, 2006, Proefschrift Universiteit Utrecht.
  • McKay, R., Langdon, R., & Coltheart, M. (2007). Jumping to delusions? Paranoia, probabilistic reasoning, and need for closure. Cognitive neuropsychiatry, 12(4), 362-76.
  • Sevy, S., Burdick, K. E., Visweswaraiah, H., Abdelmessih, S., Lukin, M., Yechiam, E., et al. (2007). Iowa Gambling Task in schizophrenia: A review and new data in patients with schizophrenia and co-occurring cannabis use disorders. Schizophrenia Research, 92(1-3), 74-84.



10/9/07

Recent neuroeconomics/neuroethics studies

First, in Cerebral Cortex, a lesion study suggesting that the ventromedial prefrontal cortex (VMF) is involved both in decisions under uncertainty and in those that are not; moreover, "Subjects with VMF damage were significantly more inconsistent in their preferences than controls, whereas those with frontal damage that spared the VMF performed normally". In:

The Role of Ventromedial Prefrontal Cortex in Decision Making: Judgment under Uncertainty or Judgment Per Se? Fellows, L. K., & Farah, M. J. (2007). Cerebral Cortex, 17(11), 2669-2674.

Second, a study on norm compliance: subjects are fairer in Dictator games when third-party punishment is possible; Spitzer et al. identified the brain areas involved in this norm-compliant behavior (the lateral orbitofrontal cortex--correlated with Machiavellian personality characteristics--and right dorsolateral prefrontal cortex). In:

The Neural Signature of Social Norm Compliance. Manfred Spitzer, Urs Fischbacher, Bärbel Herrnberger, Georg Grön and Ernst Fehr. Neuron, Volume 56, Issue 1, 4 October 2007, Pages 185-196.

See a review in ScienceNOW, and a mini-review on norm violation with a neuro-computational twist:

To Detect and Correct: Norm Violations and Their Enforcement, P. Read Montague and Terry Lohrenz, Neuron, Volume 56, Issue 1, 4 October 2007, Pages 14-18.


Related to that, a suggested reading:

Sripada & Stich analyze the structure of norms in social contexts and how it relates to cognitive processing.



Minsky's quote on folk-psychology and mechanism:

Here is a clear statement of why folk psychology is problematic:


. . what is a goal, and how can we have one? If you try to answer such questions in everyday words like “a goal is a thing that one wants to achieve,” you will find yourself in circles because, then, you must ask what wanting is–and then you find that you’re trying to describe this in terms of other words like motive, desire, purpose, aim, hope, aspire, yearn, and crave. More generally, you get caught in this trap whenever you try to describe a state of mind in terms of other psychology words because these never lead to talking about the underlying machinery.
- Marvin Minsky, 2006, The Emotion Machine, New York: Simon & Schuster, p. 187



10/5/07

Review Paper on the Ultimatum, Chimps and related stuff

Thanks to Gene Expression, I found a good review article in The Economist on the Ultimatum, chimps and related stuff:

Evolution: Patience, fairness and the human condition. The Economist. Retrieved October 5, 2007, from



Ape-onomics: Chimps in the Ultimatum Game and Rationality in the Wild

I recently discussed the experimental study of the Ultimatum Game, and showed that it has been studied in economics, psychology, anthropology, psychophysics and genetics. Now primatologists/evolutionary anthropologists Keith Jensen, Josep Call and Michael Tomasello (the same team that showed that chimpanzees are vengeful but not spiteful; see 2007a) had chimpanzees play the Ultimatum, or more precisely a mini-ultimatum, where proposers can make only two offers, for instance a fair vs. an unfair one, or a fair vs. a hyper-fair one, etc. Chimps had to split grapes. The possibilities were (in x/y pairs, where x is the proposer's share and y the responder's):
  • 8/2 versus 5/5
  • 8/2 versus 2/8
  • 8/2 versus 8/2 (no choice)
  • 8/2 versus 10/0

The experimenters used the following device:



Fig. 1. (from Jensen et al, 2007b) Illustration of the testing environment. The proposer, who makes the first choice, sits to the responder's left. The apparatus, which has two sliding trays connected by a single rope, is outside of the cages. (A) By first sliding a Plexiglas panel (not shown) to access one rope end and by then pulling it, the proposer draws one of the baited trays halfway toward the two subjects. (B) The responder can then pull the attached rod, now within reach, to bring the proposed food tray to the cage mesh so that (C) both subjects can eat from their respective food dishes (clearly separated by a translucent divider)

Results indicate that the chimps behave like Homo Economicus:
responders did not reject unfair offers when the proposer had the option of making a fair offer; they accepted almost all nonzero offers; and they reliably rejected only offers of zero (Jensen et al.)


As the authors conclude, "one of humans' closest living relatives behaves according to traditional economic models of self-interest, unlike humans, and (...) does not share the human sensitivity to fairness."
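The responder pattern Jensen et al. report is exactly what a payoff-maximizing agent would do, since rejecting leaves both players with nothing. A minimal sketch of that decision rule (the accept/reject function is my illustration, not the authors' analysis):

```python
# The four mini-ultimatum conditions from Jensen et al. (2007b), as
# (proposer_share, responder_share) pairs out of 10 grapes.
CONDITIONS = [
    ((8, 2), (5, 5)),   # unfair vs. fair
    ((8, 2), (2, 8)),   # unfair vs. hyper-fair
    ((8, 2), (8, 2)),   # no real choice
    ((8, 2), (10, 0)),  # unfair vs. everything-for-proposer
]

def selfish_responder_accepts(offer):
    """Accept whenever accepting beats the rejection payoff of zero."""
    _, responder_share = offer
    return responder_share > 0

# A purely self-interested responder rejects only the 10/0 offer,
# which matches the chimps' observed behavior.
for pair in CONDITIONS:
    for offer in pair:
        assert selfish_responder_accepts(offer) == (offer[1] > 0)
```

Human responders, by contrast, typically reject 8/2 offers at substantial rates, which is why the chimps' acceptance pattern counts as Homo Economicus behavior.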

So would Homo Economicus be a better picture of nature, red in tooth and claw? Yes and no. In another recent paper, Brosnan et al. studied the endowment effect in chimpanzees. The endowment effect is a bias that makes us place a higher value on objects we own than on objects we do not. Well, chimps do that too. While they are usually indifferent between peanut butter and juice, once they "were given or ‘endowed’ with the peanut butter, almost 80 percent of them chose to keep the peanut butter, rather than exchange it for a juice bar" (from Vanderbilt news). They do not, however, show loss aversion for non-food goods (a rubber-bone dog chew toy and a knotted-rope dog toy). Another related study (Chen et al., 2006) also indicates that capuchin monkeys exhibit loss aversion.

So there seems to be an inconsistency here: chimps are both economically and non-economically rational. But this is only, as the positivists used to say, a pseudo-problem: chimps tend to comply with standard or 'selfish' economics in social contexts, but not in individual contexts. The difference between us and them is truly that we are, by nature, political animals. Our social rationality requires reciprocity, negotiation, exchange, communication, fairness, cooperation, morality, etc., not plain selfishness. Chimps do cooperate and exhibit a slight taste for fairness (see section 1 of this post), but not in human proportions.





10/4/07

A distributed conception of decision-making

In a previous post, I suggested that there is something wrong with the standard (“cogitative”) conception of decision-making in psychology. In this post, I would like to outline an alternative conception, what we might call the “distributed conception”.

A close look at robotics suggests that decision-making should not be construed as a deliberative process. Deliberative control (Mataric, 1997), or sense-model-plan-act (SMPA) architectures, have been unsuccessful in controlling autonomous robots (Brooks, 1999; Pfeifer & Scheier, 1999). In these architectures (e.g. Nilsson, 1984), “what to do?” was represented as a logical problem. Sensors or cameras represented the perceptible environment while internal processors converted sensory inputs into first-order predicate calculus. From this explicit model of its environment, the robot’s central planner transformed a symbolic description of the world into a sequence of actions (see Hu & Brady, 1996, for a survey). Decision-making was handled by an expert system or a similar device. Thus the flow of information is one-way only: sensors → model → planner → effectors.
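The one-way flow can be caricatured in a few lines (a toy sketch of the SMPA pattern, with made-up predicates and actions, not any particular robot's code):

```python
# Minimal caricature of a sense-model-plan-act (SMPA) loop: information
# flows one way, and all action comes from a central planner operating
# on an explicit symbolic world model. All names are illustrative.
def sense():
    return [("block", 1.0), ("wedge", 2.5)]          # raw detections

def model(percepts):
    # Explicit symbolic world model, e.g. "at(block,1.0)" predicates.
    return {f"at({obj},{dist})" for obj, dist in percepts}

def plan(world_model, goal="at(block,0.0)"):
    # The central planner turns model + goal into an action sequence.
    return [] if goal in world_model else ["approach(block)", "grasp(block)"]

def act(actions):
    return [f"executed {a}" for a in actions]

# One rigid pass: sensors -> model -> planner -> effectors.
assert act(plan(model(sense()))) == ["executed approach(block)",
                                     "executed grasp(block)"]
```

Everything downstream depends on the symbolic model being complete and current, which is precisely what carefully engineered lab environments had to guarantee.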

SMPA architectures could be effective, but only in environments carefully designed for the robot. The colors, lighting and disposition of objects were optimally configured to simplify perception and movement. Brooks describes how the rooms where autonomous robots operated were optimally configured:

The walls were of a uniform color and carefully lighted, with dark rubber baseboards, making clear boundaries with the lighter colored floor. (…) The blocks and wedges were painted different colors on different planar surfaces. (….) Blocks and wedges were relatively rare in the environment, eliminating problems due to partial obscurations (Brooks, 1999, p. 62)

Thus the cogitative conception of decision-making, and its SMPA implementations, had to be abandoned. If it did not work for mobile robots, it is justified to argue that the cogitative conception also has to be abandoned for cognitive agents in general. Agents do not make decisions simply by central planning and the manipulation of explicit models, but by coordinating multiple sensorimotor mechanisms. In order to design robots able to imitate people, for instance, roboticists build systems that control their behavior through multiple partial models. Mataric's (2002) robots, for instance, learn to imitate by coordinating the following modules:

  1. a selective attentional mechanism that extracts salient visual information (another agent's face, for instance)
  2. a sensorimotor mapping system that transforms visual input into motor programs
  3. a repertoire of motor primitives
  4. a classification-based learning mechanism that learns from visuo-motor mappings
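The coordination of these modules can be sketched as a simple pipeline (all names and data structures below are hypothetical placeholders, not Mataric's actual implementation; the learning module is omitted):

```python
# Toy sketch of a behavior-based imitation pipeline: selective attention
# -> sensorimotor mapping -> motor primitives. All names are illustrative.
MOTOR_PRIMITIVES = {"reach": [0.1, 0.9], "withdraw": [-0.5, 0.0]}

def attend(visual_scene):
    # 1. Selective attention: keep only the salient features.
    return [f for f in visual_scene if f.get("salient")]

def map_to_primitive(feature):
    # 2. Sensorimotor mapping: visual feature -> name of a motor primitive.
    return "reach" if feature["kind"] == "target" else "withdraw"

def imitate(visual_scene):
    # 3. Execute the primitives suggested by the attended features; a
    #    classification-based learner (omitted) would tune this mapping.
    return [MOTOR_PRIMITIVES[map_to_primitive(f)] for f in attend(visual_scene)]

scene = [{"kind": "target", "salient": True},
         {"kind": "distractor", "salient": False}]
assert imitate(scene) == [[0.1, 0.9]]  # only the salient target drives action
```

The point of the sketch is architectural: no module holds a complete world model or makes "the" decision; behavior emerges from partial models passing partial results along.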

Neuroeconomics also suggests another, similar avenue: there is no brain area, circuit or mechanism specialized in decision-making, but rather a collection of neural modules. Certain areas specialize in visual-saccadic decision-making (Platt & Glimcher, 1999). Social neuroeconomics indicates that decisions in experimental games are mainly affective computations: choice behavior in these games is reliably correlated with neural activations of social emotions such as the ‘warm glow’ of cooperation (Rilling et al., 2002), the ‘sweet taste’ of revenge (de Quervain et al., 2004) or the ‘moral disgust’ of unfairness (Sanfey et al., 2003). Subjects without affective experiences or affective anticipations are unable to make rational decisions, as Damasio and his colleagues discovered. Damasio found that subjects with lesions in the ventromedial prefrontal cortex (vmPFC, a brain area above the eye sockets) had huge problems coping with everyday tasks (Damasio, 1994). They were unable to plan meetings; they lost their money, family or social status. They were, however, completely functional in reasoning or problem-solving tasks. Moreover, Damasio and his collaborators found that these subjects had lower affective reactions. They did not feel sad about their situation, even if they perfectly understood what “sad” means, and seemed unable to learn from bad experiences. The researchers concluded that these subjects were unable to use emotions to aid decision-making, a hypothesis that also implies that in normal subjects, emotions do aid decision-making.

Consequently, the “Distributed Conception of Decision-Making” suggests that decision-making is:

Sensorimotor: the mechanisms for decision-making are not only and not necessarily intellectual, high-level and explicit. Decision-making is the whole organism’s sensorimotor control.
Situated: a decision is not a step-by-step internal computation, but a continuous and dynamic adjustment between the agent and its environment that develops over the whole lifespan. Decision-making is always physically and (most of the time) socially situated: ecological situatedness is both a constraint on decision-making and a set of informational resources that helps agents cope with it.
Psychology should do more than document our inability to follow Bayesian reasoning in paper-and-pen experiments; it should study our situated sensorimotor control capacities. Decision-making should not be a secondary topic for psychology but, following Gintis, “the central organizing principle of psychology” (Gintis, 2007, p. 1). Decision-making is more than an activity we consciously engage in occasionally: it is rather the very condition of existence (as Herrnstein said, “all behaviour is choice”; Herrnstein, 1961).

Therefore, deciding should not be studied as a separate topic (e.g. perception), an occasional activity (e.g. chess-playing) or a high-level competence (e.g. logical inference), but like robotic control. A complete, explicit model of the environment, manipulated by a central planner, is not useful for robots. New Robotics (Brooks, 1999) revealed that effective and efficient decision-making is achieved through multiple partial models updated in real time. There is no need to integrate models into a unified representation or a common code: distributed architectures, where many processes run in parallel, achieve better results. As Barsalou et al. (2007) argue, cognition is coordinated non-cognition; similarly, decision-making is coordinated non-decision-making.

If decision-making is the central organizing principle of psychology, all the branches of psychology could be understood as research fields that investigate different aspects of decision-making. Abnormal psychology explains how deficient mechanisms impair decision-making. Behavioral psychology focuses on choice behavior and behavioral regularities. Cognitive psychology describes the mechanisms of valuation, goal representation and preferences, and how they contribute to decision-making. Comparative psychology analyzes the variations in neural, behavioral and cognitive processes among different clades. Developmental psychology establishes the evolution of decision-making mechanisms over the lifespan. Neuropsychology identifies the neural substrates of these mechanisms. Personality psychology explains interindividual variations in decision-making, our various decision-making “profiles”. Social psychology can shed light on social decision-making, that is, either collective decision-making (when groups or institutions make decisions) or individual decision-making in social contexts. Finally, we could also add environmental psychology (how agents use their environment to simplify their decisions) and evolutionary psychology (how decision-making mechanisms are, or are not, adaptations).



References
  • Barsalou, Breazeal, & Smith. (2007). Cognition as coordinated non-cognition. Cognitive Processing, 8(2), 79-91.
  • Brooks, R. A. (1999). Cambrian Intelligence : The Early History of the New Ai. Cambridge, Mass.: MIT Press.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary : Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Damasio, A. R. (1994). Descartes' Error : Emotion, Reason, and the Human Brain. New York: Putnam.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., & Fehr, E. (2004). The Neural Basis of Altruistic Punishment. Science, 305(5688), 1254-1258.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Herrnstein, R. J. (1961). Relative and Absolute Strength of Response as a Function of Frequency of Reinforcement. J Exp Anal Behav., 4(4), 267–272.
  • Hu, H., & Brady, M. (1996). A Parallel Processing Architecture for Sensor-Based Control of Intelligent Mobile Robots. Robotics and Autonomous Systems, 17(4), 235-257.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Mataric, M. J. (1997). Behaviour-Based Control: Examples from Navigation, Learning, and Group Behaviour. Journal of Experimental & Theoretical Artificial Intelligence, 9(2 - 3), 323-336.
  • Mataric, M. J. (2002). Sensory-Motor Primitives as a Basis for Imitation: Linking Perception to Action and Biology to Robotics. Imitation in Animals and Artifacts, 391–422.
  • Nilsson, N. J. (1984). Shakey the Robot: SRI International.
  • Pfeifer, R., & Scheier, C. (1999). Understanding Intelligence. Cambridge, Mass.: MIT Press.
  • Platt, M. L., & Glimcher, P. W. (1999). Neural Correlates of Decision Variables in Parietal Cortex. Nature, 400(6741), 238.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300(5626), 1755-1758.



Social Neuroeconomics: A Review by Fehr and Camerer

Ernst Fehr and Colin Camerer, two prominent experimental/behavioral/neuro-economists, published a new paper in Trends in Cognitive Sciences on social neuroeconomics. Discussing many studies (the paper is a state-of-the-art review), they conclude that

social reward activates circuitry that overlaps, to a surprising degree, with circuitry that anticipates and represents other types of rewards. These studies reinforce the idea that social preferences for donating money, rejecting unfair offers, trusting others and punishing those who violate norms, are genuine expressions of preference

The authors illustrate this overlap with the following figure: social and non-social rewards elicit similar neural activations (see references for all cited studies at the end of this post):



Figure 1. (from Fehr and Camerer, forthcoming). Parallelism of rewards for oneself and for others: Brain areas commonly activated in (a) nine studies of social reward (..), and (b) a sample of six studies of learning and anticipated own monetary reward (..).

So basically, we have enough evidence to justify a model of rational agents as entertaining social preferences. As I argue in a forthcoming paper (let me know if you want a copy), these findings will have normative impact, especially for game-theoretic situations: if a rational agent anticipates other agents' strategies, she had better anticipate that they have social preferences. For instance, one might argue that in the Ultimatum Game, it is rational to make a fair offer.



Reference:
  • Fehr, E. and Camerer, C.F., Social neuroeconomics: the neural circuitry of social preferences, Trends Cogn. Sci. (2007), doi:10.1016/j.tics.2007.09.002


Studies of social reward cited in Fig. 1:

  • [26] J. Rilling et al., A neural basis for social cooperation, Neuron 35 (2002), pp. 395–405.
  • [27] J.K. Rilling et al., Opposing BOLD responses to reciprocated and unreciprocated altruism in putative reward pathways, Neuroreport 15 (2004), pp. 2539–2543.
  • [28] D.J. de Quervain et al., The neural basis of altruistic punishment, Science 305 (2004), pp. 1254–1258.
  • [29] T. Singer et al., Empathic neural responses are modulated by the perceived fairness of others, Nature 439 (2006), pp. 466–469
  • [30] J. Moll et al., Human fronto-mesolimbic networks guide decisions about charitable donation, Proc. Natl. Acad. Sci. U. S. A. 103 (2006), pp. 15623–15628.
  • [31] W.T. Harbaugh et al., Neural responses to taxation and voluntary giving reveal motives for charitable donations, Science 316 (2007), pp. 1622–1625.
  • [32] Tabibnia, G. et al. The sunny side of fairness – preference for fairness activates reward circuitry. Psychol. Sci. (in press).
  • [55] T. Singer et al., Brain responses to the acquired moral status of faces, Neuron 41 (2004), pp. 653–662.
  • [56] B. King-Casas et al., Getting to know you: reputation and trust in a two-person economic exchange, Science 308 (2005), pp. 78–83.

Studies of learning and anticipated own monetary reward cited in Fig. 1:

  • [33] S.M. Tom et al., The neural basis of loss aversion in decision-making under risk, Science 315 (2007), pp. 515–518.
  • [61] M. Bhatt and C.F. Camerer, Self-referential thinking and equilibrium as states of mind in games: fMRI evidence, Games Econ. Behav. 52 (2005), pp. 424–459.
  • [73] P.K. Preuschoff et al., Neural differentiation of expected reward and risk in human subcortical structures, Neuron 51 (2006), pp. 381–390.
  • [74] J. O’Doherty et al., Dissociable roles of ventral and dorsal striatum in instrumental conditioning, Science 304 (2004), pp. 452–454.
  • [75] E.M. Tricomi et al., Modulation of caudate activity by action contingency, Neuron 41 (2004), pp. 281–292.



10/3/07

What is Wrong with the Psychology of Decision-Making?

In psychology textbooks, decision-making figures most of the time in the last chapters. These chapters usually acknowledge the failure of the Homo economicus model and propose to understand human irrationality as the product of heuristics and biases that may be rational under certain environmental conditions. In a recent article, H. A. Gintis documents this neglect:

(…) a widely used text of graduate- level readings in cognitive psychology, (Sternberg & Wagner, 1999) devotes the ninth of eleven chapters to "Reasoning, Judgment, and Decision Making," offering two papers, the first of which shows that human subjects generally fail simple logical inference tasks, and the second shows that human subjects are irrationally swayed by the way a problem is verbally "framed" by the experimenter. A leading undergraduate cognitive psychology text (Goldstein, 2005) placed "Reasoning and Decision Making" the last of twelve chapters. This includes one paragraph describing the rational actor model, followed by many pages purporting to explain why it is wrong. (…) in a leading behavioral psychology text (Mazur, 2002), choice is covered in the last of fourteen chapters, and is limited to a review of the literature on choice between concurrent reinforcement schedules and the capacity to defer gratification (Gintis, 2007, pp. 1-2)
Why? The standard conception of decision-making in psychology can be summarized by two claims, one conceptual and one empirical. Conceptually, the standard conception holds that decision-making is a separate topic: one of the subjects psychologists may study, alongside categorization, inference, perception, emotion, personality, and so on. As Gintis showed, decision-making has its own chapters (usually the last ones) in psychology textbooks. On the empirical side, the standard conception construes decision-making as an explicit, deliberative process, akin to reasoning. For instance, in a special issue of Cognition on decision-making (volume 49, issues 1-2, pages 1-187), one finds the following claims:

Reasoning and decision making are high-level cognitive skills […]
(Johnson-Laird & Shafir, 1993, p. 1)

Decisions . . . are often reached by focusing on reasons that justify the selection of one option over another

(Shafir et al., 1993, p. 34)

Hence decision-making is studied mostly with multiple-choice tests administered by the traditional paper-and-pencil method, which clearly suggests that deciding is treated as an explicit process. Psychological research thus assumes that subjects' competence in probabilistic reasoning, as revealed by these tests, is a good description of their decision-making capacities.

These two claims, however, are not unrelated: if decision-making is a central, high-level faculty that stands between perception and action, then it can be studied in isolation. Together they constitute a coherent whole, something philosophers of science would call a paradigm. This paradigm is built around a particular view of decision-making (and, more generally, of cognition) that could be called "cogitative":

Perception is commonly cast as a process by which we receive information from the world. Cognition then comprises intelligent processes defined over some inner rendition of such information. Intentional action is glossed as the carrying out of commands that constitute the output of a cogitative, central system. (Clark, 1997, p. 51)


In another post, I'll present an alternative to the Cogitative conception, based on research in neuroeconomics, robotics and biology.

You can find Gintis's article on his page, together with other great papers.

References

  • Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Goldstein, E. B. (2005). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. Belmont, CA: Thomson/Wadsworth.
  • Johnson-Laird, P. N., & Shafir, E. (1993). The Interaction between Reasoning and Decision Making: An Introduction. Cognition, 49(1-2), 1-9.
  • Mazur, J. E. (2002). Learning and Behavior (5th ed.). Upper Saddle River, N.J.: Prentice Hall.
  • Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-Based Choice. Cognition, 49(1-2), 11-36.
  • Sternberg, R. J., & Wagner, R. K. (1999). Readings in Cognitive Psychology. Fort Worth, TX: Harcourt Brace College Publishers.



10/2/07

de Waal on Altruism and Empathy

Another great paper in the 2008 Annual Review of Psychology:

Putting the Altruism Back into Altruism: The Evolution of Empathy

Evolutionary theory postulates that altruistic behavior evolved for the return-benefits it bears the performer. For return-benefits to play a motivational role, however, they need to be experienced by the organism. Motivational analyses should restrict themselves, therefore, to the altruistic impulse and its knowable consequences. Empathy is an ideal candidate mechanism to underlie so-called directed altruism, i.e., altruism in response to another's pain, need, or distress. Evidence is accumulating that this mechanism is phylogenetically ancient, probably as old as mammals and birds. Perception of the emotional state of another automatically activates shared representations causing a matching emotional state in the observer. With increasing cognition, state-matching evolved into more complex forms, including concern for the other and perspective-taking. Empathy-induced altruism derives its strength from the emotional stake it offers the self in the other's welfare. The dynamics of the empathy mechanism agree with predictions from kin selection and reciprocal altruism theory.

See also, in In-Mind, a new online magazine about social cognition: