Natural Rationality | decision-making in the economy of nature

7/31/07

Understanding two models of fairness: outcome-based inequity aversion vs. intention-based reciprocity

Why are people fair? Theoretical economics provides two generic models that fit the data. According to the first, inequity aversion, people are inequity-averse: they dislike situations where one agent is disadvantaged relative to another. This model is based on consequences. The other model is based on intentions: although the consequences of an action matter, what counts here is the intention that motivates the action. I won't discuss which approach is better (it is an ongoing debate in economics); I just want to share an extremely clear presentation of the two positions, found in van Winden, F. (2007). Affect and fairness in economics. Social Justice Research, 20(1), 35-52, on pages 38-39:

In inequity aversion models (Bolton and Ockenfels, 2000; Fehr and Schmidt, 1999), which focus on the outcomes or payoffs of social interactions, any deviation between an individual's payoff and the equitable payoff (e.g., the mean payoff or the opponent's payoff) is supposed to be negatively valued by that individual. More formally, the crucial difference between an outcome-based inequity aversion model and the homo economicus model is that, in addition to the argument representing the individual's own payoff, a new argument is inserted in the utility function showing the individual's inequity aversion (social preferences), as in the social utility model (see, e.g., Handgraaf et al., 2003; Loewenstein et al., 1989; Messick and Sentis, 1985). The individual is then assumed to maximize this adapted utility function.
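For concreteness, the best-known outcome-based model, Fehr and Schmidt's (1999), inserts exactly such an extra argument into the utility function. Here is a minimal Python sketch of the two-player case (the default parameter values below are my own illustrative choices, not calibrations from the paper):

```python
def fehr_schmidt_utility(own, other, alpha=2.0, beta=0.6):
    """Fehr-Schmidt (1999) inequity-averse utility, two-player case.

    own, other: monetary payoffs of the two players.
    alpha: aversion to disadvantageous inequity (being behind, "envy").
    beta:  aversion to advantageous inequity (being ahead, "guilt").
    The model assumes alpha >= beta and 0 <= beta < 1; the values
    used here are illustrative.
    """
    envy = alpha * max(other - own, 0.0)    # penalty for being behind
    guilt = beta * max(own - other, 0.0)    # penalty for being ahead
    return own - envy - guilt

# With alpha = 2, beta = 0.6, an $8/$2 split of a $10 pie yields
# 8 - 0.6 * 6 = 4.4, so the even $5/$5 split (utility 5) is preferred;
# a homo economicus (alpha = beta = 0) prefers keeping $8.
```

The individual then maximizes this adapted function instead of raw monetary payoff, which is all it takes to make fair Ultimatum offers rational.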

In intention-based reciprocity models it is not the outcomes of the interaction as such that matter, but the intentions of the players (Rabin, 1993; see also Falk and Fischbacher, 2006). The idea is that people want to reciprocate perceived (un)kindness with (un)kindness, because this increases their utility. Obviously, beliefs play a crucial role here. More formally, in this case, in addition to an individual's own payoff a new argument is inserted in the utility function incorporating the assumed reciprocity motive. As a consequence, if someone is perceived as being kind it increases the individual's utility to reciprocate with being kind to this other person. Similarly, if the other is believed to be unkind, the individual is better off by being unkind as well, because this adds to her or his utility. Again, this adapted utility function is assumed to be maximized by the individual.
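The reciprocity idea can be sketched the same way, although the actual Rabin (1993) model defines kindness via payoff ranges and equilibrium beliefs. In this drastically simplified toy version (the kindness terms are hypothetical placeholders, not Rabin's functions), utility rises when my (un)kindness matches the (un)kindness I believe the other is showing:

```python
def reciprocity_utility(own_payoff, my_kindness, believed_kindness, rho=1.0):
    """Toy intention-based utility: material payoff plus a reciprocity
    term rewarding matched (un)kindness.

    my_kindness, believed_kindness: in [-1, 1]; negative means unkind.
    rho: weight of the reciprocity motive (rho = 0 recovers the
    homo economicus model).
    """
    return own_payoff + rho * believed_kindness * my_kindness

# Kindness believed and returned (+1, +1), or unkindness answered with
# unkindness (-1, -1), both add to utility; mismatches subtract from it.
```

Note how beliefs enter the function directly: two interactions with identical payoffs can yield different utilities depending on the perceived intention behind them.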





7/30/07

A glimpse at the evolution of the fearing and trusting brain

Together with other mechanisms, the amygdala is involved in a complex neural circuitry that transforms photons hitting your eyes into the feeling that "Mom is mad at me because I broke her favorite vase". Often referred to as the fear center, the amygdala is more like an online supervisory system that sets levels of alert. Many of its activities are of a social nature. Explicit and implicit distrust of faces elicits amygdala activation (Winston et al., 2002), while trust is increased with amygdala impairment (Adolphs et al., 1998). Moreover, the trust-enhancing effect of oxytocin is mediated by amygdalar modulation: oxytocin reduces fear and hence allows trusting. In a nutshell, emotional memorization, learning and modulation performed by the amygdala obey the following flowchart:


(from Schumann 1998)

A subpart of the amygdala, the lateral nucleus, processes information about social stimuli (such as facial expressions). Autistic individuals tend to have an impaired lateral nucleus, which makes sense if this nucleus is an important social-cognitive device (autistic subjects perform poorly in tasks that involve mental-state attribution or other social inferences). According to Emery and Amaral (2000), inputs from the visual neocortex enter the amygdala through the lateral nucleus, where their "emotional meaning" is attributed (I know, it is a simplification...); the basal nucleus adds information about the social context. Hence this nucleus acts as a sensory integrator (LeDoux, 2000).

In a new paper in the American Journal of Physical Anthropology, Barger et al. studied the relative size of different nuclei of the amygdala in various primates (humans, chimpanzees, bonobos, gorillas, etc.). The study revealed that the human lateral nucleus represents a larger fraction of the amygdala:






The authors conclude:

The large size of the human L [lateral nuclei] may reflect the proliferation of the temporal lobe over the course of hominid evolution, while the inverse may be true of the gorilla. The smaller size of the orangutan AC [amygdaloid complex] and BLD [Baso-lateral division] may be related to the diminished importance of interconnected limbic structures in this species. Further, there is some evidence that the orangutan, which exhibits one of the smallest group sizes on the continuum of primate sociality, may also be distinguished neuroanatomically from the other great apes, suggesting that social pressures may play a role in the development of the AC in association with other limbic regions.
Living in large groups thus may have shaped the evolution of the emotional processing capacities of our brains. In the economy of nature, negotiating our way through a complex social world requires acute and specialized cognitive capacities in order to cooperate, trust, reciprocate, etc. This research shows the potential of evolutionary cognitive neuroscience (see this post).


References


  • Adolphs, R., Tranel, D., & Damasio, A. R. (1998). The human amygdala in social judgment. Nature, 393, 470-474.
  • Barger, N., Stefanacci, L., & Semendeferi, K. (2007). A comparative volumetric analysis of the amygdaloid complex and basolateral division in the human and ape brain. American Journal of Physical Anthropology.
  • Emery, N. J., & Amaral, D. G. (2000). The role of the amygdala in primate social cognition. In R. D. Lane & L. Nadel (Eds.), Cognitive Neuroscience of Emotion (pp. 156-191). New York: Oxford University Press.
  • LeDoux, J. E. (2000). Emotion circuits in the brain. Annual Review of Neuroscience, 23, 155-184.
  • Schumann, J. A. (1998). Language Learning, 48(s1), ix-326.
  • Winslow, J. T., & Insel, T. R. (2004). Neuroendocrine basis of social recognition. Current Opinion in Neurobiology, 14, 248-253.



7/27/07

The moral stance: a brief introduction to the Knobe effect and similar phenomena

An important discovery in the new field of Experimental Philosophy (or "x-phi", i.e., "using experimental methods to figure out what people really think about particular hypothetical cases" -Online dictionary of philosophy of mind) is the importance of moral beliefs in intentional action attribution. Contrast these two cases:

[A] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’ The chairman of the board answered, ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was harmed.
[B] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.’ The chairman of the board answered, ‘I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was helped.
A and B are identical, except that in one case the program harms the environment and in the other it helps it. Subjects were asked whether the chairman of the board intentionally harmed (A) or helped (B) the environment. Since the two cases have the same belief-desire structure, both actions should be seen as equally intentional, whether right or wrong. It turns out that in the "harm" version, most people (82%) say that the chairman intentionally harmed the environment; in the "help" version, only 23% say that the chairman intentionally helped the environment. This effect is called the "Knobe effect", because it was discovered by philosopher Joshua Knobe. In a nutshell, it means that

people’s beliefs about the moral status of a behavior have some influence on their intuitions about whether or not the behavior was performed intentionally (Knobe, 2006, p. 207)
Knobe replicated the experiment with different populations and methodologies, and the result is robust (see his 2006 paper for a review). It seems that humans are more prone to think that someone is responsible for an action if the outcome is morally wrong. Contrary to common wisdom, the folk-psychological concept of intentional action does not aim--or not primarily--at explaining and predicting action, but at attributing praise and blame. There is something morally normative in saying that "A does X".

A related post on the x-phi blog by Sven Nyholm describes a similar experiment. The focus was not intention, but happiness. The two versions were

[A] Richard is a doctor working in a Red Cross field hospital, overseeing and carrying out medical treatment of victims of an ongoing war. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or wounded in the war don’t deserve to die, and that their well-being is of great importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
[B] Richard is a doctor working in a Nazi death camp, overseeing and carrying out executions and nonconsensual, painful medical experiments on human beings. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing that he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or experimented on don’t deserve to live, and that their well-being is of no importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
Subjects were asked whether they agreed or disagreed with the sentence "Richard is happy" (on a scale from 1 = disagree to 7 = agree). Subjects slightly agreed (4.6/7) in the morally good condition (A), whereas they slightly disagreed (3.5/7) in the morally bad condition (B), and the difference is statistically significant. Again, the concept of "being happy" is partly moral-normative.

A related phenomenon has been observed in a recently published experimental study of generosity: generous behavior is also influenced by moral-normative beliefs (Fong, 2007). In this experiment, donors had to decide how much of a $10 "pie" they wanted to transfer to a real-life welfare recipient (and keep the rest: thus it is a Dictator game). They read information about the recipients (who had filled out a questionnaire beforehand). The recipients were asked about their age, race, gender, etc. The three recipients had similar profiles, except for their motivation to work. To the last three questions:

  • If you don't work full-time, are you looking for more work? ______Yes, I am looking for more work. ______No, I am not looking for more work.
  • If it were up to you, would you like to work full-time? ______Yes, I would like to work full-time. ______No, I would not like to work full-time.
  • During the last five years, have you held one job for more than a one-year period? Yes_____ No_____
one replied Yes to all ("industrious"), one replied No ("lazy"), and the other did not reply ("low-information"). Donors made their decisions and the money was transferred for real (by the way, that's one thing I like about experimental economics: there is no deception, no as-if: real people receive real money). Results:

The lazy, low-information, and industrious recipients received an average of $1.84, $3.21, and $2.79, respectively. The ant and the grasshopper! ("You sang! I'm at ease / For it's plain at a glance / Now, ma'am, you must dance."). As the author says:

Attitudinal measures of beliefs that the recipient in the experiment was poor because of bad luck rather than laziness – instrumented by both prior beliefs that bad luck rather than laziness causes poverty and randomly provided direct information about the recipient's attachment to the labour force – have large and very robust positive effects on offers

[In another research paper Pr. Fong also found different biases in giving to Katrina victims.]

An interesting--and surprising--finding of this study is that this "ant effect" ("you should deserve help") was stronger in people who scored higher on humanitarianism beliefs. They don't give more than others when recipients are deemed to be poor because of laziness (another reason not to trust what people say about themselves, and to look at their behavior instead). Again, a strong moral-normative effect on beliefs and behavior. Since oxytocin increases generosity (see this post) and this effect is due to greater empathy induced by oxytocin, I am curious to see whether people in the lazy vs. industrious experiment would, after oxytocin inhalation, become more sensitive to the origin of poverty (bad luck or laziness). If bad luck inspires more empathy, then I guess yes.

Man the moral animal?

Morality seems to be deeply entrenched in our social-cognitive mechanisms. One way to understand all these results is to posit that we routinely and usually interpret each other from the "moral stance", not the intentional one. The "intentional stance", as every Philosophy of Mind 101 course teaches us, is the perspective we adopt when we deal with intentional agents (agents who entertain beliefs and desires). We explain and predict action based on agents' rationality and the mental representations they should have, given the circumstances. In other words, it's the basic toolkit for being a game-theoretic agent. Philosophers (Dennett in particular) contrast this stance with the physical and the design stances (which we use when we talk about an apple that falls or the function of the "Ctrl" key on a computer, for instance). I think we should introduce a related stance, the moral stance. Maybe--but research will tell us--this stance is more basic. We switch to the purely intentional stance when, for instance, we interact with computers in experimental games. Remember how subjects don't care about being cheated by a computer in the Ultimatum Game: they have no aversive feeling (i.e., insular activation) when the computer makes an unfair offer (see a discussion in this paper by Sardjevéladzé and Machery). Hence they don't use the "moral stance", but they still use the intentional stance. Another possibility is that the moral stance might explain why people deviate from standard game-theoretic predictions: all these predictions are based on intentional-stance functionalism. This stance applies more to animals, psychopaths or machines than to normal human beings.
And also to groups: in many games, such as the Ultimatum or the Centipede, groups behave more "rationally" than individuals (see Bornstein et al., 2004; Bornstein & Yaniv, 1998; Cox & Hayne, 2006); that is, they are closer to game-theoretic behavior (a point radically developed in the movie The Corporation: firms lack moral qualities). Hence the moral stance may have particular requirements (individuality, emotions, empathy, etc.).





7/26/07

Special issues of NYAS on biological decision-making

The May issue of the Annals of the New York Academy of Sciences is devoted to Reward and Decision Making in Corticobasal Ganglia Networks. Many big names in decision neuroscience (Berns, Knutson, Delgado, etc.) contributed.


Introduction. Current Trends in Decision Making
Bernard W Balleine, Kenji Doya, John O'Doherty, Masamichi Sakagami

Learning about Multiple Attributes of Reward in Pavlovian Conditioning
ANDREW R DELAMATER, STEPHEN OAKESHOTT

Should I Stay or Should I Go? Transformation of Time-Discounted Rewards in Orbitofrontal Cortex and Associated Brain Circuits
MATTHEW R ROESCH, DONNA J CALU, KATHRYN A BURKE, GEOFFREY SCHOENBAUM

Model-Based fMRI and Its Application to Reward Learning and Decision Making
JOHN P O'DOHERTY, ALAN HAMPTON, HACKJIN KIM

Splitting the Difference. How Does the Brain Code Reward Episodes?
BRIAN KNUTSON, G. ELLIOTT WIMMER

Reward-Related Responses in the Human Striatum
MAURICIO R DELGADO

Integration of Cognitive and Motivational Information in the Primate Lateral Prefrontal Cortex
MASAMICHI SAKAGAMI, MASATAKA WATANABE

Mechanisms of Reinforcement Learning and Decision Making in the Primate Dorsolateral Prefrontal Cortex
DAEYEOL LEE, HYOJUNG SEO


Resisting the Power of Temptations. The Right Prefrontal Cortex and Self-Control
DARIA KNOCH, ERNST FEHR

Adding Prediction Risk to the Theory of Reward Learning
KERSTIN PREUSCHOFF, PETER BOSSAERTS

Still at the Choice-Point. Action Selection and Initiation in Instrumental Conditioning
BERNARD W BALLEINE, SEAN B OSTLUND

Plastic Corticostriatal Circuits for Action Learning. What's Dopamine Got to Do with It?
RUI M COSTA

Striatal Contributions to Reward and Decision Making. Making Sense of Regional Variations in a Reiterated Processing Matrix
JEFFERY R WICKENS, CHRISTOPHER S BUDD, BRIAN I HYLAND, GORDON W ARBUTHNOTT

Multiple Representations of Belief States and Action Values in Corticobasal Ganglia Loops
KAZUYUKI SAMEJIMA, KENJI DOYA

Basal Ganglia Mechanisms of Reward-Oriented Eye Movement
OKIHIDE HIKOSAKA

Contextual Control of Choice Performance. Behavioral, Neurobiological, and Neurochemical Influences
JOSEPHINE E HADDON, SIMON KILLCROSS

A "Good Parent" Function of Dopamine. Transient Modulation of Learning and Performance during Early Stages of Training
JON C HORVITZ, WON YUNG CHOI, CECILE MORVAN, YANIV EYNY, PETER D BALSAM

Serotonin and the Evaluation of Future Rewards. Theory, Experiments, and Possible Neural Mechanisms
NICOLAS SCHWEIGHOFER, SAORI C TANAKA, KENJI DOYA

Receptor Theory and Biological Constraints on Value
GREGORY S BERNS, C. MONICA CAPRA, CHARLES NOUSSAIR

Reward Prediction Error Computation in the Pedunculopontine Tegmental Nucleus Neurons
YASUSHI KOBAYASHI, KEN-ICHI OKADA

A Computational Model of Craving and Obsession
A. DAVID REDISH, ADAM JOHNSON

Calculating the Cost of Acting in Frontal Cortex
MARK E WALTON, PETER H RUDEBECK, DAVID M BANNERMAN, MATTHEW F. S RUSHWORTH

Cost, Benefit, Tonic, Phasic. What Do Response Rates Tell Us about Dopamine and Motivation?
YAEL NIV



7/25/07

More than Trust: Oxytocin Increases Generosity

It has been known for a couple of years that oxytocin (OT) increases trust (Kosfeld et al., 2005): in the Trust game, players transferred more money after inhaling OT. Now recent research suggests that it also increases generosity. In a paper presented at the meeting of the ESA (Economic Science Association, an empirically oriented economics society), Stanton, Ahmadi, and Zak (from the Center for Neuroeconomics Studies) showed that Ultimatum players in the OT group offered more money (21% more) than those in the placebo group--$4.86 (OT) vs. $4.03 (placebo).
They defined generosity as "an offer that exceeds the average of the MinAccept" (p. 9), i.e., the minimum offer acceptable to the "responder" in the Ultimatum. In this case, offers over $2.97 were categorized as generous. Again, OT subjects displayed more generosity: the OT group offered $1.86 (80% more) over the minimum acceptable offer, while placebo subjects offered $1.03.


Interestingly, OT subjects did not turn into pure altruists: they made offers in the Dictator game (mean $3.77) similar to those of placebo subjects (mean $3.58, no significant difference). Thus the motive is neither direct nor indirect reciprocity (the Ultimatum games were blinded and one-shot, so there is no tit-for-tat or reputation involved here). It is not pure altruism either, according to Stanton et al. (or "strong reciprocity"--see this post on the distinction between types of reciprocity), because the threat of the MinAccept compels players to make fair offers. They conclude that generosity is enhanced because OT affects empathy. Subjects simulate the perspective of the other player in the Ultimatum, but not in the Dictator. Hence, generosity "runs" on empathy: in an empathizing context (Ultimatum) subjects are more generous, but in a non-empathizing context they are not--in the Dictator, it is not necessary to know the opponent's strategy in order to compute the optimal move, since the recipient's actions have no impact on the proposer's payoff. It would be interesting to see if there is a different OT effect in basic vs. reenactive empathy (sensorimotor vs. deliberative empathy; see this post).

Interested readers should also read Neural Substrates of Decision-Making in Economic Games, by one of the authors of the study (Stanton): in her PhD thesis, she describes many neuroeconomic experiments.

[Anecdote: I once asked people at the ESA why they call their society that: all the presented papers were experimental, so I thought the name should reflect the empirical nature of the conference. They judiciously replied: "Because we think that it's how economics should be done"...]




The Journal of Neuroeconomics

Discovery: the Society for Neuroeconomics will launch a journal in 2009, aptly named The Journal of Neuroeconomics:

The Journal of Neuroeconomics is the premier publication venue for presenting scholarly research in Neuroeconomics. An official publication of the Society for Neuroeconomics, it reaches an interdisciplinary target audience of Neurobiologists, Economists and Psychologists. The journal publishes commissioned review articles and original research articles which pass a rigorous peer-review process. The journal also publishes the abstracts of the Society's annual meeting.

Initially, the journal will be published quarterly beginning in 2009, and shortly before that time an Editor-In-Chief will be named by the President of the Society. The Journal will be distributed at reduced cost to members of the Society and available by subscription.

If you're interested in receiving subscription information, please fill out the following information: (cf. form)

The 2007 meeting of the Society will be held in Hull, Mass., this fall. Info here.



7/24/07

The development of loss-aversion

An agent is loss-averse if the absolute value of losing X (say, $100) is higher than the absolute value of gaining X: if losing $100 "hurts more" than receiving it feels good. This bias is a robust finding in psychology. A new paper in Developmental Science indicates that loss aversion unfolds over the lifespan in three stages. Children, adolescents and adults display, in the Iowa Gambling Task, different patterns that suggest a developmental continuum in loss aversion:

  • (a) guessing with a slight tendency to consider frequency of loss to
  • (b) focusing on frequency of loss, to
  • (c) considering both frequency and amount of probabilistic loss.
Hence we all start with a sensitivity to losses (but only to their frequency), and once we are equipped with more complex cognitive aptitudes we also pay attention to the value of losses. According to Huizenga et al., the development of proportional reasoning explains the increased complexity of loss aversion.
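The adult endpoint of this continuum is usually formalized with the prospect-theory value function. A minimal sketch (linear, ignoring the curvature of the full Kahneman-Tversky function; the loss-aversion coefficient lam = 2.25 is their classic estimate, not a parameter from the Huizenga et al. paper):

```python
def value(x, lam=2.25):
    """Piecewise prospect-theory-style value function: losses loom
    larger than equivalent gains by a factor lam (loss aversion)."""
    return x if x >= 0 else lam * x

# Losing $100 "hurts" more than gaining $100 "feels good":
# abs(value(-100)) = 225.0, while value(100) = 100.
```

A developmental account like the one above amounts to saying that children first track only how often `x < 0` occurs, and only later weight outcomes by something like `value(x)`.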



The linguistic basis of social preferences

A surprising (at least to me) finding, published in PNAS today: young infants display a strong preference for agents who speak their own language. More than smell, looks, or sound, social attachment seems to be mediated by linguistic (and accent) similarity. We like those who speak like us. From these findings, the researchers draw three conclusions:

First, language provides a cue to social preferences, even in infants who have not begun to produce or understand speech. Second, the tendency to favor otherwise unfamiliar members of one's own social group begins to emerge early in human life and well before children begin to learn about the nature and history of social-group conflicts. The passage from infants' social preferences to adults' social conflicts may be long and circuitous, but such a path may exist and may explain, in part, why conflicts among different language and social groups are pervasive and difficult to eradicate. Third, because human languages vary, and the native language must be learned, the tendency to make social distinctions is shaped by experience. Because language learning is especially adaptable early in development, social preferences also may be malleable at young ages. This early adaptability of preference formation for familiar characteristics of individuals may obtain for many potential indicators of social group membership.



7/23/07

Ten major ideas and findings in behavioral decision research in the last 50 years

  1. judgment can be modeled
  2. bounded rationality
  3. to understand decision making, understanding tasks is more important than understanding people
  4. levels of aspiration or reference points and loss aversion
  5. heuristic rules
  6. adding and the importance of simple models
  7. the search for confirmation
  8. the evasive nature of risk perception
  9. the construction of preference
  10. the roles of emotions, affect, and intuition.

from:



The selective impairment of prosocial sentiments and the moral brain


Philosophers often describe the history of philosophy as a dispute between Plato (read: idealism/rationalism) and Aristotle (read: materialism/empiricism). This is of course extremely reductionist, since many conceptual and empirical issues were not addressed in Ancient Greece, but there is a non-trivial interpretation of the history of thought according to which controversies often involve these two positions. In moral philosophy and moral psychology, however, the big figures are Hume and Kant. Is morality based on passions (Hume) or reason (Kant)? This is another simplification, but again it frames the debate. In the last issue of Trends in Cognitive Sciences (TICS), three papers discuss the reason/emotion debate but provide more acute models.

Recently (see this previous post), Koenigs and collaborators (2007b) explored the consequences of ventromedial prefrontal cortex (VMPC) lesions for moral reasoning and showed that these patients tend to rely a little more on a 'utilitarian' scheme (cost/benefit) and less on a deontological scheme (moral do's and don'ts), thus suggesting that emotions are involved in deontological moral judgment. These patients, however, were also more emotional in the Ultimatum game, and rejected more offers than normal subjects. So are they emotional or not? In the first TICS paper, Moll and de Oliveira-Souza review the Koenigs et al. (2007a) experiment and argue that neither somatic markers nor dual-process theory explains these findings. They propose that a selective impairment of prosocial sentiments explains why the same patients are both less emotional in moral dilemmas and more emotional in economic bargaining: these patients may feel less compassion but still feel anger. In a second paper, Greene (author of the research on trolley problems; see his homepage) challenges this interpretation and puts forward his dual-process view (reason-emotion interaction). Moll and de Oliveira-Souza reply in the third paper. As you can see, there is still a debate between Kant and Hume, but cognitive neuroscience provides new tools for both sides of the debate, and maybe even a blurring of these opposites.





7/19/07

Beautiful picture of brain areas involved in decision-making

Found yesterday, in a paper by Sanfey (nice review paper, by the way):




"Fig. 2. Map of brain areas commonly found to be activated in decision-making studies. The sagittal section (A) shows the location of the anterior cingulate cortex (ACC), medial prefrontal cortex (MPFC), orbitofrontal cortex (OFC), nucleus accumbens (NA), and substantia nigra (SN). The lateral view (B) shows the location of the dorsolateral prefrontal cortex (DLPFC) and lateral intraparietal area (LIP). The axial section (C; cut along the white line in A and B) shows the location of the insula (INS) and basal ganglia (BG)."
from:



The Top 10 Most Important Papers in Neuroeconomics

The choice wasn't easy, and I may be influenced by my research interests, but here is what I think are the most important papers in the field:

Written by famous behavioral economists, this extensive review paper suggests how economics can be theoretically and empirically informed by neuroeconomics.
An analysis of the theoretical relationship between biology, economics, neuroscience and psychology.
Famous paper showing that unfair offers elicit activity in the anterior insula, an area associated with disgust (but not when they interact with a computer).
  • Zak, P. J. (2004). Neuroeconomics. Philos Trans R Soc Lond B Biol Sci, 359(1451), 1737-1748.
A review paper that provides a complete introduction to neuroscience (methods, brain functions, etc.) and neuroeconomics.
Subjects who received oxytocin via nasal spray are more trusting.
Players who initiate and players who experience mutual cooperation display activation in the nucleus accumbens and other reward-related areas.
Punishing cheaters, in the trust game, activates the nucleus accumbens, a subcortical structure involved in pleasure.
One of the first application of utility theory to dopaminergic systems.
The first imaging study in game theory. Decision-makers are more likely to cooperate with real humans than with computers, and cooperators show significantly different brain activation in the two conditions.
The first genuine neuroeconomics paper. Lateral intraparietal area (LIP) activity predicts visual-saccadic decision-making, encoding the desirability of making particular movements.
My criteria are citations, influence, historical/theoretical importance, and relevance for understanding decision-making.



7/18/07

Altruism: a research program

Phoebe: I just found a selfless good deed; I went to the park and let a bee sting me.
Joey: How is that a good deed?
Phoebe: Because now the bee gets to look tough in front of his bee friends. The bee is happy and I am not.
Joey: Now you know the bee probably died when he stung you?
Phoebe: Dammit!
- [From Friends, episode 101]
Altruism is a lively research topic. The evolutionary foundations, neural substrates, psychological mechanisms, behavioral manifestations, formal modeling and philosophical analyses of cooperation constitute a coherent--although not unified--field of inquiry. See for instance how neuroscience, game theory, economics, philosophy, psychology and evolutionary theory interact in Penner et al., 2005; Hauser, 2006; Fehr and Fischbacher, 2002; Fehr and Fischbacher, 2003. The study of prosocial behavior, from kin selection to animal cooperation to human morality, can be considered a progressive Lakatosian research program. Altruism has great conceptual "sex appeal" because it is a mystery for two types of theoreticians: biologists and economists. Both wonder why an animal or an economic agent would help another: since these agents maximize fitness/utility, altruistic behavior is suboptimal. Altruism (help, trust, fairness, etc.) seems intuitively incompatible with economic rationality and biological adaptation, with markets and natural selection. Or is it?

In the 1960s and early 1970s, biologists challenged the idea that natural selection is incompatible with altruism. Hamilton (1964a, 1964b) and Trivers (1971) showed that biological altruism makes sense. An animal X might behave altruistically toward another Y because they are genetically related: in doing so, X maximizes the copying of its genes, since many of them will be hosted in Y. Thus the more X and Y are genetically related, the more X will be ready to help Y. This is kin altruism. Altruism can also be reciprocal: scratch my back and I'll scratch yours. Tit-for-tat, or reciprocal altruism, also makes sense because by being altruistic one may augment one's payoff: X helps Y, but next time Y will help X; thus it is better to help than not to help. In both cases, the idea is that altruism is a means, not an end. Others argue that more complex types of altruism exist. For instance, X can help Y because Y already helped Z (indirect reciprocity). In this case, the tit-for-tat logic is extended to agents that the helper did not meet in the past. Generalized reciprocity (see this previous post) is another type of altruism: helping someone because someone helped you in the past. This altruism does not require memory or personal identification: X helps someone because someone else helped X. Finally, strong reciprocity is the idea that humans display genuine altruism: strong reciprocators cooperate with cooperators, do not cooperate with cheaters, and are ready to punish cheaters even at a cost to themselves. Its proponents argue that it evolved through group selection.
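Hamilton's insight is usually stated as Hamilton's rule: an altruistic act is favored by selection when r·b > c, where r is the genetic relatedness between actor and recipient, b the benefit to the recipient, and c the cost to the actor. A toy check (the numbers are illustrative, not from any cited study):

```python
def hamilton_favors(r, benefit, cost):
    """Hamilton's rule: altruism is favored by selection when the
    relatedness-weighted benefit exceeds the actor's cost (r*b > c)."""
    return r * benefit > cost

# Helping a full sibling (r = 0.5) at cost 1 pays off if the sibling
# gains more than 2; for a first cousin (r = 0.125) the gain must
# exceed 8 before the same act is favored.
```

This is why X's willingness to help Y should scale with their degree of relatedness, exactly as the kin-altruism story above predicts.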

Experimental economics and neuroeconomics also challenged the idea of the rational, greedy, selfish actor (the Ayn Rand hero). Experimental game theory showed that, contrary to orthodox game theory, subjects cooperate massively in the prisoner's dilemma (Ledyard, 1995; Sally, 1995). Rilling et al. showed that players enjoy cooperating: players who initiate and players who experience mutual cooperation display activation in the nucleus accumbens and other reward-related areas such as the caudate nucleus, ventromedial frontal/orbitofrontal cortex, and rostral anterior cingulate cortex (Rilling et al., 2002). In another experiment, the presentation of faces of intentional cooperators caused increased activity in reward-related areas (Singer et al., 2004). In the ultimatum game, proposers make 'fair' offers of about 50% of the amount; responders tend to accept these offers and to reject most 'unfair' offers of less than 20% (Oosterbeek et al., 2004). Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder's brain, a kind of 'moral disgust': the anterior insula (associated with negative emotional states like disgust or anger) is more active when unfair offers are proposed (Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003). Subjects experience this affective reaction to unfairness only when the proposer is a human being: the activation is significantly lower when the proposer is a computer. Moreover, the anterior insula activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers (Sanfey et al., 2003: 1756). Fehr and Fischbacher (2002) suggested that economic agents are inequity-averse and have prosocial preferences; they thus modified the utility functions to account for behavioral (and now neural) data. In Moral Markets: The Critical Role of Values in the Economy, Paul Zak proposes a radically different conception of morality in economics:

The research reported in this book revealed that most economic exchange, whether with a stranger or a known individual, relies on character values such as honesty, trust, reliability, and fairness. Such values, we argue, arise in the normal course of human interactions, without overt enforcement—lawyers, judges or the
police are present in a paucity of economic transactions (...). Markets are moral in two senses. Moral behavior is necessary for exchange in moderately regulated markets, for example, to reduce cheating without exorbitant
transactions costs. In addition, market exchange itself can lead to an understanding of fair-play that can build social capital in nonmarket settings. (Zak, forthcoming)

See how similar this claim is to the following:

The two fundamental principles of evolution are mutation and natural selection. But evolution is constructive because of cooperation. New levels of organization evolve when the competing units on the lower level begin to cooperate. Cooperation allows specialization and thereby promotes biological diversity. Cooperation is the secret behind the open-endedness of the evolutionary process. Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add "natural cooperation" as a third fundamental principle of evolution beside mutation and natural selection.
(Nowak, 2006)

Hence, biological and economic theorizing followed a similar path: they started first with the assumption that agents value only their own payoff; evidence suggested then that agents behave altruistically and, finally, theoretical models were amended and now incorporate different kinds of reciprocity.
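On the economic side, the amendment takes forms like Fehr and Schmidt's (1999) inequity-aversion utility function: a player's utility is her own payoff minus an "envy" penalty for disadvantageous inequity (weighted by α) and a "guilt" penalty for advantageous inequity (weighted by β). A minimal two-player sketch; the parameter values are illustrative, not estimates from the literature:

```python
def fs_utility(own, other, alpha=2.0, beta=0.5):
    """Fehr-Schmidt (1999) two-player utility: own payoff, minus an
    envy term (alpha) when the other earns more, minus a guilt term
    (beta) when the other earns less. Alpha and beta are hypothetical."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# Ultimatum game over $10: the responder compares accepting with
# rejecting (both then get 0). An unfair $2 offer yields
# 2 - 2*(8-2) = -10 < 0, so an inequity-averse responder rejects it;
# a fair $5/$5 split yields 5 > 0 and is accepted.
assert fs_utility(2, 8) < 0    # unfair offer: rejection is better
assert fs_utility(5, 5) == 5   # fair split: accepted
```

This is how rejection of unfair offers, irrational for a pure payoff-maximizer, becomes utility-maximizing once social preferences enter the function.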

So is this good news? Are we genuinely altruistic? First, a clarification: there is a difference between biological and psychological altruism, and the former does not entail the latter; biological altruism is about fitness consequences (survival and reproduction), while psychological altruism is about motivations and intentions:

Where human behaviour is concerned, the distinction between biological altruism, defined in terms of fitness consequences, and ‘real’ altruism, defined in terms of the agent's conscious intentions to help others, does make sense. (Sometimes the label ‘psychological altruism’ is used instead of ‘real’ altruism.) What is the relationship between these two concepts? They appear to be independent in both directions (...). An action performed with the conscious intention of helping another human being may not affect their biological fitness at all, so would not count as altruistic in the biological sense. Conversely, an action undertaken for purely self-interested reasons, i.e. without the conscious intention of helping another, may boost their biological fitness tremendously (Biological Altruism, Stanford Encyclopedia of Philosophy; see also a forthcoming paper by Stephen Stich and the classic Sober & Wilson 1998).

The interesting question, for many researchers, is then: what is the link between biological and psychological altruism? A common view suggests that non-human animals are biological altruists, while humans are also psychological altruists. I would like to argue against this sharp divide and briefly suggest three things:
  1. Non-humans also display psychological altruism
  2. Human altruism is strongly influenced by biological motives
  3. Prosocial behavior in human and non-human animals should be understood as a single phenomenon: cooperation in the economy of nature

1. Non-humans also display psychological altruism


As discussed in a previous post, a recent research paper showed that rats exhibit generalized reciprocity: rats who had previously been helped were more likely (by about 20%) to help an unknown partner than rats who had not been helped. Although the authors of the paper take a more prudent stance, I consider generalized reciprocity a form of psychological altruism (remember, an act can be both): rats cooperate because they "feel good", and that feeling is induced by cooperation itself, not by a particular agent. Hence their brains value cooperation (probably thanks to hormonal mechanisms similar to ours) in itself, even if there is no direct tit-for-tat. In the same issue of PLoS Biology, primatologist Frans de Waal (2007) also argues that animals show signs of psychological altruism; it is particularly clear in an experiment (Warneken et al., again, in the same journal) showing that chimpanzees are ready to help unknown humans and conspecifics (hence ruling out kin and tit-for-tat altruism), even at a cost to themselves. Here is the description of the experiments:

In the first experiment, the chimpanzee saw a person unsuccessfully reach through the bars for a stick on the other side, too far away for the person, but within reach of the ape. The chimpanzees spontaneously helped the reaching person regardless of whether this yielded a reward, or not. A similar experiment with 18-month-old children gave exactly the same outcome. Obviously, both apes and young children are willing to help, especially when they see someone struggling to reach a goal. The second experiment increased the cost of helping. The chimpanzees were still willing to help, however, even though now they had to climb up a couple of meters, and the children still helped even after obstacles had been put in their way. Rewards had been eliminated altogether this time, but this hardly seemed to matter. One could, of course, argue that chimpanzees living in a sanctuary help humans because they depend on them for food and shelter. How familiar they are with the person in question may be secondary if they simply have learned to be nice to the bipedal species that takes care of them. The third and final experiment therefore tested the apes' willingness to help each other, which, from an evolutionary perspective, is also the only situation that matters. The set-up was slightly more complex. One chimpanzee, the Observer, would watch another, its Partner, try to enter a closed room with food. The only way for the Partner to enter this room would be if a chain blocking the door were removed. This chain was beyond the Partner's control—only the Observer could untie it. Admittedly, the outcome of this particular experiment surprised even me—and I am probably the biggest believer in primate empathy and altruism. I would not have been sure what to predict given that all of the food would go to the Partner, thus creating potential envy in the Observer. 
Yet, the results were unequivocal: Observers removed the peg holding the chain, thus yielding their Partner access to the room with food. (de Waal, 2007)
(image from Warneken et al video)

2. Human altruism is strongly influenced by biological motives

In many cases, human altruism appears as a complex version of biological altruism (see Burnham & Johnson, 2005, The Biological and Evolutionary Logic of Human Cooperation, for a review). For instance, Madsen et al. (2007) showed that humans behave more altruistically toward their own kin when there is a significant genuine cost (such as muscular pain), an attitude also mirrored in questionnaire studies (Stewart-Williams, 2007): when the cost of helping increases, subjects are more ready to help siblings than friends. Other studies showed that facial similarity enhances trust (DeBruine, 2002). In each case, there is a mechanism whose function is to negotiate personal investments in relationships in order to promote the copying of genes housed in people who are—or who seem to be—our kin.

Many of these so-called altruistic behaviors can be explained by the operation of hyperactive agency detectors and a bias toward fearing other people's judgment. When they are not being, or feeling, watched, people behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game where the responder has to accept the offer, subjects make lower offers than in the ultimatum game (Bolton, Katok, and Zwick, 1998). Offers are even lower in the dictator game when donation is fully anonymous (Hoffman et al., 1994). When subjects feel watched, or think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley and Fessler, 2005), and people contribute three times more to an 'honesty box' for coffee when a picture of a pair of eyes is posted than when it is a picture of flowers (Bateson, Nettle, and Roberts, 2006). Merely speaking of ghosts enhances honest behavior in a competitive task (Bering, McLeod, and Shackelford, 2005), and priming subjects with the concept of God increases giving in the anonymous dictator game (Shariff and Norenzayan, in press).

These reflections also apply to altruistic punishment. First, it is enhanced by an audience: Kurzban, DeScioli, and O'Brien (2007) showed that, with an audience of a dozen participants, punishment expenditure tripled. Again, apparent altruism is instrumental to personal satisfaction. Other research suggests that altruism is also an advantage in sexual selection: "people preferentially direct cooperative behavior towards more attractive members of the opposite sex. Furthermore, cooperative behavior increases the perceived attractiveness of the cooperator" (Farrelly et al., 2007).

An interesting framework for understanding altruism is Hardy (no relation to me) and Van Vugt's (2006) theory of competitive altruism: "individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable." We need, however, a more general perspective.


3. Prosocial behavior in human and non-human animals should be understood as a single phenomenon: cooperation in the economy of nature

All organic beings are striving to seize on each place in the economy of nature - (Darwin, [1859] 2003, p. 90)

With Darwin, natural economy began to be understood with the conceptual tools of political economy. The division of labor, competition (“struggle” in Darwin’s words), trading, cost, the accumulation of innovations, the emergence of complex order from unintentional individual actions, the scarcity of resources and the geometric growth of populations are ideas borrowed from Adam Smith, Thomas Malthus, David Hume and other founders of modern economics. Thus, the economy of nature ceased to be an abstract representation of the universe and became a depiction of the complex web of interactions between biological individuals, species and their environment—the subject matter of ecology. Consequently, Darwin’s main contributions are his transforming biology into a historical science—like geology—and into an economic science.

I take the economy-of-nature principle to be a refinement of the natural selection principle: while it describes general features of the biosphere, it puts emphasis on the intersection between individual biographies and natural selection, and especially on decision-making. On the one hand, the decisions biological individuals make increase or decrease their fitness, and thus good decision-makers are more likely to propagate their genes. On the other hand, natural selection is likely to favor good decision-makers and to get rid of bad decision-makers. Thus, if our best descriptive theories of animal and human economic behavior indicate that all these agents have prosocial preferences and make altruistic decisions, then these preferences and decisions are neither maladaptive nor irrational. They must have an evolutionary and an economic payoff. Markets and natural selection require cooperation, even if the deep motivations are partly selfish. Fairness, equity and honesty are social goods in the economy of nature, human and non-human.


  • Bateson, M., D. Nettle, and G. Roberts. 2006. Cues of being watched enhance cooperation in a real-world setting. Biology Letters 2:412-414.
  • Bering, J. M., K. McLeod, and T. K. Shackelford. 2005. Reasoning about Dead Agents Reveals Possible Adaptive Trends. Human Nature 16 (4):360-381.
  • Bolton, G. E., E. Katok, and R. Zwick. 1998. Dictator Game Giving: Rules of Fairness versus Acts of Kindness. International Journal of Game Theory 27:269-299.
  • Burnham, T. C., and D. D. P. Johnson. 2005. The Biological and Evolutionary Logic of Human Cooperation. Analyse & Kritik 27:113-135.
  • DeBruine, L. M. 2002. Facial resemblance enhances trust. Proceedings of the Royal Society B 269 (1498):1307-1312.
  • de Waal, F. B. M. 2007. With a Little Help from a Friend. PLoS Biology 5 (7):e190. doi:10.1371/journal.pbio.0050190
  • Farrelly, D., J. Lazarus, and G. Roberts. 2007. Altruists attract. Evolutionary Psychology 5 (2):313-329.
  • Fehr, E., and U. Fischbacher. 2002. Why social preferences matter: The impact of non-selfish motives on competition, cooperation and incentives. Economic Journal 112:C1-C33.
  • Fehr, E., and U. Fischbacher. 2003. The nature of human altruism. Nature 425 (6960):785-791.
  • Haley, K., and D. Fessler. 2005. Nobody's watching? Subtle cues affect generosity in an anonymous economic game. Evolution and Human Behavior 26 (3):245-256.
  • Hamilton, W. D. 1964a. The genetical evolution of social behaviour. I. Journal of Theoretical Biology 7 (1):1-16.
  • ———. 1964b. The genetical evolution of social behaviour. II. Journal of Theoretical Biology 7 (1):17-52.
  • Hauser, M. D. 2006. Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. New York: Ecco.
  • Hoffman, E., K. McCabe, K. Shachat, and V. Smith. 1994. Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior 7:346-380.
  • Kurzban, R., P. DeScioli, and E. O'Brien. 2007. Audience effects on moralistic punishment. Evolution and Human Behavior 28 (2):75-84.
  • Ledyard, J. O. 1995. Public goods: A survey of experimental research. In Handbook of Experimental Economics, edited by J. H. Kagel and A. E. Roth. Princeton: Princeton University Press.
  • Madsen, E. A., R. J. Tunney, G. Fieldman, H. C. Plotkin, R. I. M. Dunbar, J.-M. Richardson, and D. McFarland. 2007. Kinship and altruism: A cross-cultural experimental study. British Journal of Psychology 98:339-359.
  • Nowak, M. A. 2006. Five Rules for the Evolution of Cooperation. Science 314 (5805):1560-1563.
  • Okasha, S. 2005. Biological Altruism. In The Stanford Encyclopedia of Philosophy (Summer 2005 Edition), edited by E. N. Zalta. URL = http://plato.stanford.edu/archives/sum2005/entries/altruism-biological/.
  • Oosterbeek, H., R. Sloof, and G. van de Kuilen. 2004. Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics 7:171-188.
  • Penner, L. A., J. F. Dovidio, J. A. Piliavin, and D. A. Schroeder. 2005. Prosocial Behavior: Multilevel Perspectives. Annual Review of Psychology 56 (1):365-392.
  • Rilling, J., D. Gutman, T. Zeh, G. Pagnoni, G. Berns, and C. Kilts. 2002. A neural basis for social cooperation. Neuron 35 (2):395-405.
  • Rutte, C., and M. Taborsky. 2007. Generalized Reciprocity in Rats. PLoS Biology 5 (7):e196. doi:10.1371/journal.pbio.0050196
  • Sally, D. 1995. Conversations and cooperation in social dilemmas: A meta-analysis of experiments from 1958 to 1992. Rationality and Society 7:58-92.
  • Sanfey, A. G., J. K. Rilling, J. A. Aronson, L. E. Nystrom, and J. D. Cohen. 2003. The neural basis of economic decision-making in the Ultimatum Game. Science 300 (5626):1755-1758.
  • Shariff, A. F., and A. Norenzayan. in press. God is watching you: Supernatural agent concepts increase prosocial behavior in an anonymous economic game. Psychological Science.
  • Singer, T., S. J. Kiebel, J. S. Winston, R. J. Dolan, and C. D. Frith. 2004. Brain responses to the acquired moral status of faces. Neuron 41 (4):653-662.
  • Sober, E., and D. S. Wilson. 1998. Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, Mass.: Harvard University Press.
  • Stewart-Williams, S. 2007. Altruism among kin vs. nonkin: Effects of cost of help and reciprocal exchange. Evolution and Human Behavior 28 (3):193-198.
  • Stich, S. forthcoming. Evolution, Altruism and Cognitive Architecture: A Critique of Sober and Wilson's Argument for Psychological Altruism. Biology and Philosophy.
  • Trivers, R. L. 1971. The Evolution of Reciprocal Altruism. Quarterly Review of Biology 46 (1):35-57.
  • Warneken, F., B. Hare, A. P. Melis, D. Hanus, and M. Tomasello. 2007. Spontaneous Altruism by Chimpanzees and Young Children. PLoS Biology 5 (7):e184. doi:10.1371/journal.pbio.0050184
  • Zak, P. J., ed. forthcoming. Moral Markets: The Critical Role of Values in the Economy. Princeton, N.J.: Princeton University Press.



7/11/07

Decision-Making: A Neuroeconomic Perspective

I put a new paper on my homepage:

Decision-Making: A Neuroeconomic Perspective

Here is the abstract:

This article introduces and discusses from a philosophical point of view the nascent field of neuroeconomics, which is the study of neural mechanisms involved in decision-making and their economic significance. Following a survey of the ways in which decision-making is usually construed in philosophy, economics and psychology, I review many important findings in neuroeconomics to show that they suggest a revised picture of decision-making and ourselves as choosing agents. Finally, I outline a neuroeconomic account of irrationality.

Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass. [PDF]

This paper is the first in my philosophical exploration of neuroeconomics, and I would gladly welcome your comments and suggestions for subsequent research. Email me at benoithv@gmail.com.



7/10/07

Neuropsychopharmacology textbook: this is your brain on drugs

I discovered (thanks to Mind Hacks) that the American College of Neuropsychopharmacology has put online a textbook on, well, Neuropsychopharmacology: an enormous source of information on how drugs affect the brain. Here is (from p. 120) a schema of the dopaminergic systems:




Visit the link below to read the online textbook:



7/9/07

Testosterone and ultimatum behavior

In a recent research paper, Terry Burnham investigated whether testosterone modulates behavior in the ultimatum game. Proposers and responders had to split $40. Proposers could offer either $25 or $5 out of the $40, i.e., a fair or an unfair offer. It turns out that men with higher levels of testosterone are more prone to reject low offers than low-testosterone men:



Given that testosterone is associated with competition and dominance, Burnham interprets these results as indicating that low offers are construed as a challenge, and that high-testosterone men reply more aggressively.

Another--but closely related--interpretation would suggest not that high-testosterone men see low offers as a challenge (it is not clear, by the way, how an offer could be a challenge), but rather that they are more irritated by a lack of fairness: they are more prone to punish those who do not behave fairly toward them by rejecting their offer.

see also a report in The Economist.







7/4/07

Call for Papers: 2007 NeuroPsychoEconomics Conference

[from NeuroPsychoEconomics Newsletter]






Call for Papers: 2007 NeuroPsychoEconomics Conference

Please be invited to submit a paper to the 2007 NeuroPsychoEconomics Conference in Vienna, Austria. The conference will be held October 14-16, 2007, at the Austrian Academy of Sciences (Oesterreichische Akademie der Wissenschaften, Dr. Ignaz Seipel-Platz 2, 1010 Vienna, Austria).

Deadline for submissions is July 15, 2007.

The conference theme of 2007 is:

“Research in Neuroscience, Psychology, Business, and Economics –Towards a Discipline of Neuroeconomics”

Manuscripts passing the double-blind review process will be accepted for presentation at the conference.

Manuscripts should combine concepts from neuroscience and/or psychology with problems of business and economics. We want to emphasize that papers covering all three fields can be submitted, but papers combining economics with psychology or with neuroscience are also of interest.

Empirical as well as conceptual manuscripts are welcome. Manuscripts can be written in English or German. The conference language will be English. Manuscripts submitted for the conference must not currently be under review, accepted for publication, or published elsewhere.

Please find the detailed “Call for Papers” on our website:
www.neuropsychoeconomics.org/e_callforpapers

We are looking forward to your submission!




7/3/07

The reciprocal rat

Research on reciprocity classically identified three cooperation mechanisms: 
  • kin reciprocity (A helps B because A and B are genetically related)
  • direct reciprocity (A helps B because B has helped A before--"tit for tat")
  • indirect reciprocity (A helps B because B has helped C before)
Recently, many researchers suggested that we should also add a stronger type of reciprocity called, well, "strong reciprocity": strong reciprocators cooperate with cooperators, do not cooperate with cheaters, and are ready to punish cheaters even at a cost to themselves. Strong reciprocity is held to be a uniquely human phenomenon, and until now only kin and direct reciprocity had been observed in animals. In PLoS Biology, Rutte and Taborsky show that another reciprocity mechanism that could be shared by humans and other animals is present in rats: generalized reciprocity. Contrary to the other kinds of reciprocity, generalized reciprocity does not require individual identification: in kin, direct and indirect reciprocity, you first need to identify another agent as sharing genes with you, having helped you in the past, or having helped someone else in the past. Generalized reciprocity is more anonymous: if someone helped you in the past, you are more willing to help someone else in the future, regardless of either agent's identity. (People who find a coin in a phone booth, for instance, are more likely to help a stranger pick up dropped papers than control subjects who had not previously found money.) Since you don't need to track and identify others' behaviors, generalized reciprocity is less cognitively demanding, and hence probably more common in nature. In Rutte and Taborsky's experiment, a rat could pull a stick fixed to a baited tray, producing food (an oat flake) for its 'partner' (another rat); the partner was rewarded but not the 'giver'. It turns out that rats who had previously been helped were more likely (by about 20%) to help an unknown partner than rats who had not been helped. Rats followed an "anonymous generous tit-for-tat".
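The contrast between direct and generalized reciprocity can be sketched as two toy strategies: a direct reciprocator keys its helping on a specific partner's past behavior, while a generalized reciprocator only tracks whether it was recently helped by anyone at all. This is an illustrative sketch, not Rutte and Taborsky's actual model:

```python
class DirectReciprocator:
    """Tit for tat: helps partner X only if X helped it before."""
    def __init__(self):
        self.helped_by = set()
    def receive_help(self, partner):
        self.helped_by.add(partner)
    def will_help(self, partner):
        return partner in self.helped_by

class GeneralizedReciprocator:
    """Anonymous reciprocity: helps anyone if recently helped by anyone.
    No memory of individual identities is needed."""
    def __init__(self):
        self.recently_helped = False
    def receive_help(self, partner=None):
        self.recently_helped = True
    def will_help(self, partner=None):
        return self.recently_helped

direct, general = DirectReciprocator(), GeneralizedReciprocator()
direct.receive_help("A")
general.receive_help("A")
assert direct.will_help("A") and not direct.will_help("B")
assert general.will_help("B")  # helps a stranger who never helped it
```

Note how much lighter the generalized strategy's bookkeeping is (one bit versus a set of identities), which is why it is plausibly less cognitively demanding and more widespread in nature.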

Rutte C, Taborsky M (2007) Generalized Reciprocity in Rats. PLoS Biol 5(7): e196 doi:10.1371/journal.pbio.0050196

  • Fehr, E., & Rockenbach, B. (2004). Human altruism: economic, neural, and evolutionary perspectives. Curr Opin Neurobiol, 14(6), 784-790.
  • Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13(1), 1-25.
  • Gintis, H. (2000). Strong Reciprocity and Human Sociality. Journal of Theoretical Biology, 206(2), 169-179.



[Book review] Rediscovering Empathy. Agency, Folk Psychology, and the Human Sciences


Rediscovering Empathy

Agency, Folk Psychology, and the Human Sciences
by Karsten R. Stueber
MIT Press, 2006
Review by Benoit Hardy-Vallée, Ph.D. on Jul 3rd 2007
Volume: 11, Number: 27

Many philosophers and cognitive scientists are now familiar with a traditional debate between two accounts of folk psychology (the intuitive framework of beliefs, desires and intentions we use every day to understand each other). According to the first one--the theory-theory, or "information-rich", account--we apply a psychological theory to others' actions and infer, on that theoretical basis, the reasons that motivate their actions. On the contrary, the simulation, or "information-poor", account holds that folk psychology is essentially imitative and imaginative: we use ourselves as simulators of other agents' minds in order to gain information about their reasons to act. The debate was particularly vigorous in the 1990s (philosophers, psychologists, primatologists and cognitive scientists participated) but seemed to vanish in recent years. In his new book, Rediscovering Empathy: Agency, Folk Psychology, and the Human Sciences, Karsten Stueber not only puts forth a welcome revival of the contention, but also deepens it.

Besides the mechanistic questions of how we process information in order to explain and predict actions, the disagreement between the two parties is also epistemological. Simulationists not only argue that we use different skills, but also that there is something radically different in the way we understand other agents (vs. the way we understand the rest of the physical world). When we interpret other persons, we conceive of them as minded creatures that have a first-person perspective, just as we do. When we interact with, or think about, other physical objects, we don't use our imagination to simulate their subjective point of view, since we don't take them to have one. Thus, for the simulationists, agents and objects are structurally different: there are no common inferential mechanisms that apply to both.

In his book, Stueber makes a strong case in favor of simulation or, more generally, empathy. His thesis is that empathy has a central epistemic role: it is the default mode of interpersonal understanding. We first and foremost comprehend actions by putting ourselves in someone else's shoes, not by relying on a psychological theory of human cognition. Cognizing other agents is essentially an 'engaged' task, not a 'detached' one: we use ourselves--our emotions, sensations, and thoughts--as mindreading tools, not an external device such as a theory. Stueber compares mindreading to judging whether someone is the same height as yourself. You can either use an external, neutral standard--a measuring tape, for instance--or use yourself as a standard and see whether your head and hers are at the same level. In the latter case, we use a subjective, non-neutral and egocentric point of view.

Stueber draws a distinction between what he calls basic and reenactive empathy. The first is a quasi-perceptual mechanism (implemented in so-called 'mirror neurons'): when we see someone scared, we can easily sense her feeling. We see that someone is scared, but not why. Understanding why requires a more complex kind of empathy, or reenactment. This second type of empathy, realized through a deliberative process, allows us to understand the reasons for actions. Since thought is essentially contextual and indexical, understanding someone else's thoughts requires that we see others' thoughts as thoughts that, had they been ours in the same context, would give us reason to act. Thus it is by inner imitation that we really grasp others' intentions, not by theoretical deduction.

It is impossible, however, to do so without viewing each other as rational beings. Rational agency is a condition for reenactive empathy: when we take others to be normatively assessable, we can reconstruct the thought processes that govern their actions. By 'rational agency', Stueber does not imply that humans are good logicians or rational-choice theorists. A rational creature, he argues, is a creature whose assertions and actions are motivated by reasons, and whose reasons can be evaluated in the light of normative theories of rationality. Thus, the empirical studies showing that humans do not comply with logic and rational-choice theory do not undermine the role of rational agency in folk-psychological interpretation. Empathy is hence inherently based on a rationality assumption, what other philosophers such as Davidson called the principle of charity: interpreting others' beliefs as coherent.

Having established empathy as the central mindreading device and rejected theory-theory and other detached accounts, Stueber goes on to claim that empathy also has a normative role: it justifies our belief-desire attributions. Here again, the author uses a vivid analogy. We can justify a prediction that it will rain tomorrow using only an information-poor background: the barometer says so, and the barometer is a reliable tool. The prediction is thus inductively justified. Similarly, as long as we are in the domain of psychological interpretation, empathy is a reliable predictive tool that doesn't require a rich theoretical background. Of course, as many have objected, empathy might be fallible, since it can be influenced by cultural and social background. Yet empathetic reenactment is still, Stueber contends, the principal mindreading strategy. It is fallible, but it can be supplemented with auxiliary information.

Rediscovering Empathy is not just another book about folk-psychology. It is a systematic enquiry into the structure and function of mindreading that goes beyond the traditional exposition of recent cognitive theories. As Stueber shows, the debate between the engaged and detached conception of interpretation is not new and has roots in 19th century discussions of hermeneutic Verstehen (understanding) and aesthetic Einfühlung (empathy). The nature and function of empathy is relevant for a diverse array of empirical and theoretical inquiries: beyond cognitive science and philosophy of mind, the debate concerning the nature of folk-psychological understanding impacts upon foundational debates in hermeneutics, aesthetics, anthropology, neuroscience, philosophy of language and philosophy of social science (mainly philosophy of history). Interpreting other agents, artifacts, texts, historical events or different cultures requires some kind of mechanisms that reliably indicates why individuals do what they do.

The greatest strength of this book is its ability to guide the reader through many important philosophical and scientific debates: principally, the simulation vs. theory-theory debate, the rationality debate, the significance of mirror neurons, and the nature of historical explanation. In every case, the author uses an acute terminology and provides a clear presentation of competing theories, their empirical basis, their conceptual significance and their position in the history of thought. He exposes complex problems but never loses the reader (except in the last chapters, where the argument is less clear). One also notes the unity and coherence of the book. The only problem with the book is that the author takes certain claims (e.g., the contextuality of thought) to be purely 'conceptual': he accepts them without much justification, which seems more dogmatic than conceptual.

This book will be of interest to any scholar interested in interpretation generally speaking, but might be most accessible to philosophers of mind and social science. Cognitive scientists, social psychologists and social scientists will also find many discussions in the book relevant to their fields.

Note: The introduction of the book can be freely downloaded on the publisher's website:

http://mitpress.mit.edu/books/chapters/026219550Xintro1.pdf