Natural Rationality | decision-making in the economy of nature

3/11/08

Why Neuroeconomics Needs a Concept of (Natural) Rationality

Neuroeconomists (more than “decision neuroscientists”) often report their findings as strong evidence against the rationality of decision-makers. In the case of cooperation, it is often claimed that emotions motivate cooperation, since neural activity elicited by cooperation overlaps with neural activity elicited by hedonic rewards (Fehr & Camerer, 2007). Similarly, when subjects have to choose whether or not to purchase a product, desirable products cause activation in the nucleus accumbens (associated with anticipation of pleasure), whereas a price perceived as exaggerated elicits activity in the insula (involved in disgust and fear; Knutson et al., 2007).

The accumulation of evidence about the engagement of affective areas in decision-making is indisputable, and it seems to make a strong case against a once-pervasive “rationalist” vision of decision-making in cognitive science and economics. It is not, however, a definitive argument for emotivism (the view that we choose with our "gut feelings") and irrationalism. For at least three reasons (methodological, empirical and conceptual), these findings should not be read as supporting an emotivist account.

First, characterizing a brain area as “affective” or “emotional” is misleading. There is no clear distinction, in the brain, between affective and cognitive areas. For instance, the anterior insula is involved in disgust, but also in disbelief (Harris et al., 2007). A high-level task such as cognitive control (e.g., holding items in working memory in a goal-oriented task) requires both “affective” and “cognitive” areas (Pessoa, 2008). The affective/cognitive distinction is a folk-psychological one, not a reflection of brain anatomy and connectivity. There is a certain degree of specialization, but generally speaking any task recruits a wide array of areas, and each area is redeployed in many tasks. In complex beings like us, so-called “affective” areas are never purely affective: they always contribute to higher-level cognition, such as logical reasoning (Houde & Tzourio-Mazoyer, 2003). Similarly, while the amygdala has often been described as a “fear center”, its function is much more complex: it modulates emotional information, reacts to unexpected stimuli, and is heavily recruited in visual attention, a “cognitive” function. It is therefore wrong to consider “affective” areas as small emotional agents that are happy or sad and make us happy or sad. Instead of employing folk-psychological categories, their functional contribution should be understood in computational terms: how they process signals, how information is routed between areas, and how they affect behavior and thought.

Second, even if there are affective areas, they are always complemented or supplemented by “cognitive” ones: the dorsolateral prefrontal cortex (DLPFC), for instance, which is involved in cognitive control and goal maintenance, is recruited in almost all decision-making tasks and has been shown to be involved in norm-compliant behavior (Spitzer et al., 2007) and in purchasing decisions. In the ultimatum game, besides the anterior insula, two other areas are recruited: the DLPFC and the anterior cingulate cortex (ACC), involved in cognitive conflict and emotional modulation. Explanations of ultimatum decisions spell out neural information-processing mechanisms, not “emotions”.

Consider, for instance, the neural circuitry involved in cognitive control: you might think it involves only prefrontal areas, but as it turns out, both "cognitive" and "affective" areas are required for this competence:


[Legend: This extended control circuit contains traditional control areas, such as the anterior cingulate cortex (ACC) and the lateral prefrontal cortex (LPFC), in addition to other areas commonly linked to affect (amygdala) and motivation (nucleus accumbens). Diffuse, modulatory effects are shown in green and originate from dopamine-rich neurons of the ventral tegmental area (VTA). The circuit highlights the cognitive–affective nature of executive control, in contrast to more purely cognitive-control proposals. Several connections are not shown to simplify the diagram. Line thickness indicates approximate connection strength. OFC, orbitofrontal cortex. From Pessoa, 2008]

As Michael Anderson pointed out in a series of papers (2007a and 2007b, among others), there is a many-to-many mapping between brain areas and cognitive functions. So the concept of "emotional areas" should be banned from the neuroeconomics vocabulary before it is too late.

Third, a point that has been neglected by much research on decision-making: neural activation of a particular brain area is explanatory only with regard to its contribution to understanding personal-level properties. If we learn that the anterior insula reacts to unfair offers, we are not singling out the function of this area; we are explaining how the person's decision is responsive to a particular type of valuation. The basic unit of analysis of decisions is not the neuron, but the judgment. We may study sub-judgmental (e.g., neural) mechanisms and how they contribute to judgment formation; or we may study supra-judgmental mechanisms (e.g., reasoning) and how they articulate judgments. Emotions, as long as they are understood as affective reactions, are not judgments: they either contribute to judgments or are construed as judgments. In both cases, the category “emotions” seems superfluous for explaining the nature of the judgment itself. Thus, if judgments are the basic unit of analysis, brain areas are explanatory insofar as they make explicit how individuals arrive at a certain judgment, how it is implemented, and so on: what kind of neural computations are carried out? Take, for example, cooperation in the prisoner's dilemma. Imaging studies show that when high-psychopathy and low-psychopathy subjects choose to cooperate, different neural activity is observed: the former use more prefrontal areas than the latter, indicating that cooperation is more effortful for them (see this post). This is instructive: we learn something about the information processing, not about "emotions" or "reason".

In the end, we want to know how these mechanisms fix beliefs, desires and intentions: neuroeconomics can be informative as long as it aims at deciphering human natural rationality.


References
  • Anderson, M. L. (2007a). Evolution of Cognitive Function Via Redeployment of Brain Areas. Neuroscientist, 13(1), 13-21.
  • Anderson, M. L. (2007b). The Massive Redeployment Hypothesis and the Functional Topography of the Brain. Philosophical Psychology, 20(2), 143 - 174.
  • Fehr, E., & Camerer, C. F. (2007). Social Neuroeconomics: The Neural Circuitry of Social Preferences. Trends Cogn Sci.
  • Harris, S., Sheth, S. A., & Cohen, M. S. (2007). Functional Neuroimaging of Belief, Disbelief, and Uncertainty. Annals of Neurology.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural Foundations of Logical and Mathematical Cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
  • Spitzer, M., Fischbacher, U., Herrnberger, B., Gron, G., & Fehr, E. (2007). The Neural Signature of Social Norm Compliance. Neuron, 56(1), 185-196.




1/29/08

Moods and decision-making



In the latest issue of Judgment and Decision Making (freely available online), a sharp study by de Vries et al. illustrates how mood affects cognitive processing. After watching either a clip from The Muppet Show or a clip from Schindler's List, participants played the Iowa Gambling Task (see description here). People who watched the funny clip (the Muppets, as you might have guessed) scored better:
[at an early stage of the game] after experiencing the first losses in the bad decks, participants in a happy mood state outperformed participants in a sad mood state (de Vries et al., 2008 p. 48)
After a couple of trials, however, sad and happy subjects scored identically:


The authors do not argue that being in a good mood guarantees success, but suggest that certain moods may have adaptive value in certain situations (when the optimal choice requires analytical thinking, affective reactions seem more distracting). So next time you have a test, choose the right mood!
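For readers unfamiliar with the task, here is a minimal sketch of the deck structure that makes the notion of "bad decks" concrete. The payoffs are simplified to approximate the classic schedule (high immediate gains but negative expected value for decks A and B, the reverse for C and D); they are illustrative, not the exact values used by de Vries et al.:

```python
# Minimal, simplified sketch of the Iowa Gambling Task deck structure.
# Expected values roughly follow the classic schedule; not the exact
# payoffs used in the study discussed above.
import random

DECKS = {
    # name: (gain per card, expected loss per card)
    "A": (100, 125),  # "bad" deck: high gains, larger expected losses
    "B": (100, 125),  # "bad" deck
    "C": (50, 25),    # "good" deck: modest gains, small expected losses
    "D": (50, 25),    # "good" deck
}

def draw(deck: str) -> int:
    """Return the net payoff of one card drawn from `deck`."""
    gain, expected_loss = DECKS[deck]
    # Losses are probabilistic in the real task; here a loss occurs half
    # of the time at twice its expected value, preserving the expected
    # net payoff per card.
    loss = 2 * expected_loss if random.random() < 0.5 else 0
    return gain - loss

if __name__ == "__main__":
    random.seed(0)
    for name in DECKS:
        net = sum(draw(name) for _ in range(100))
        print(f"Deck {name}: net payoff over 100 draws = {net}")
```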





11/27/07

A Brief Comparison Between Accounts of Emotions

What is the difference between shame and anger?

  • Behaviorism: psychologically, a difference in how we tend to behave when angry and when ashamed; analytically, a difference in the descriptions of two different kinds of awkward situations.
  • Existentialism: two solutions to two problems.
  • Cognitivism: Difference in content: in shame one feels responsible, while in anger one expresses an aversive reaction.
  • Non-propositional cognitivism: anger and shame direct attention to different features (e.g. problems vs. social context)
  • Evaluative: shame, but not anger, “track” social values and norms
  • Empiricism: difference in our sensations, feelings;
  • Social-constructionism: two different transitory roles: the ‘angry agent’ and the ‘ashamed agent’
  • Social-intuitionism: different intuitions triggered by different situations
  • Evolutionary approaches: adaptations to individual vs. collective problem-solving; or affect programs (natural kind) vs. social emotions (normative kinds)
  • Neo-empiricism: different neural structures, different processing of bodily states, different integrations of affective and cognitive information.
[this is inspired from Solomon, R. (1998). Emotions. In E. Craig (Ed.), Routledge Encyclopedia of Philosophy. London: Routledge. http://www.rep.routledge.com/article/V012SECT1.]



11/7/07

Affect and metacognition in the Ultimatum


Andrade and Ho submitted their subjects to the following ultimatum game: proposers keep 50% or 75% of the "pie" to be divided; responders can change the size of the pie (from 0 to $1). When proposers were told that responders had just watched a funny sitcom, they tended to make 'unfair' offers (keep 75%): according to the authors, they expected responders to be in a good mood and thus to be more tolerant of unfairness. There was no such effect when proposers were told that responders had watched an anger-inducing movie clip. The effect also disappears when proposers know that responders know that proposers had this information. In other words, proposers in this setting are emotional strategists who rely on affective metacognition to devise the best move.





10/12/07

Self-control is a Scarce Resource

We all have a limited quantity of energy to allocate to self-control:

In a recent study, Michael Inzlicht of the University of Toronto Scarborough and colleague Jennifer N. Gutsell offer an account of what is happening in the brain when our vices get the better of us.

Inzlicht and Gutsell asked participants to suppress their emotions while watching an upsetting movie. The idea was to deplete their resources for self-control. The participants reported their ability to suppress their feelings on a scale from one to nine. Then, they completed a Stroop task, which involves naming the color of printed words (i.e. saying red when reading the word “green” in red font), yet another task that requires a significant amount of self-control.

The researchers found that those who suppressed their emotions performed worse on the Stroop task, indicating that they had used up their resources for self-control while holding back their tears during the film.

An EEG, performed during the Stroop task, confirmed these results. Normally, when a person deviates from their goals (in this case, wanting to read the word, not the color of the font), increased brain activity occurs in a part of the frontal lobe called the anterior cingulate cortex, which alerts the person that they are off-track. The researchers found weaker activity occurring in this brain region during the Stroop task in those who had suppressed their feelings. In other words, after engaging in one act of self-control this brain system seems to fail during the next act.
http://www.psychologicalscience.org/media/releases/2007/inzlicht.cfm
(via Cognews)
  • Inzlicht, M., & Gutsell, J. N. (in press). Running on empty: Neural signals for self-control failure. Psychological Science. (preprint)




10/10/07

Fairness and Schizophrenia in the Ultimatum

For the first time, a study looks at schizophrenic patients' behavior in the Ultimatum Game. Other studies of schizophrenic choice behavior have revealed that patients have difficulty with decisions under ambiguity and uncertainty (Lee et al., 2007), show a slight preference for immediate over long-term rewards (Heerey et al., 2007), exhibit "strategic stiffness" (sticking to a strategy in sequential decision-making without integrating the outcomes of past choices; Kim et al., 2007), and perform worse in the Iowa Gambling Task (Sevy et al., 2007).

A research team from Israel ran an Ultimatum experiment with schizophrenic subjects (plus two control groups, one depressive, one non-clinical). Subjects had to split 20 New Israeli Shekels (NIS) (about 5 US$). Although schizophrenic patients' Responder behavior was not different from that of the control groups, their Proposer behavior was: they tended to be less strategic.

With respect to offer level, offers fall into three categories: fair (10 NIS), unfair (less than 10 NIS), and hyper-fair (more than 10 NIS). Schizophrenic patients tended to make fewer 'unfair' offers and more 'hyper-fair' offers. Men were more generous than women.

According to the authors,

for schizophrenic Proposers, the possibility of dividing the money evenly was as reasonable as for healthy Proposers, whereas the option of being hyper-fair appears to be as reasonable as being unfair, in contrast to the pattern for healthy Proposers.
Agay et al. also studied the distribution of Proposers types according to their pattern of sequential decisions (how their second offer compared to their first). They identified three types:
  1. "‘Strong-strategic’ Proposers are those who adjusted their 2nd offer according to the response to their 1st offer, that is, raised their 2nd offer after their 1st one was rejected, or lowered their 2nd offer after their 1st offer was accepted.
  2. ‘Weak-strategic’ Proposers are those who perseverated, that is, their 2nd offer was the same as their 1st offer.
  3. Finally, ‘non-strategic’ Proposers are those who unreasonably reduced their offer after a rejection, or raised their offer after an acceptance."
Twenty percent of the schizophrenic group was non-strategic, while none of the healthy subjects were.
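The classification is easy to state algorithmically. Here is a minimal sketch; the function and variable names are mine, and the logic simply restates the three definitions above rather than reproducing anything from Agay et al.'s analysis:

```python
def classify_proposer(first_offer: float, first_accepted: bool,
                      second_offer: float) -> str:
    """Classify a Proposer by how the 2nd offer responds to the 1st outcome.

    Hypothetical sketch of the three types described above; names and
    structure are illustrative, not taken from the paper.
    """
    if second_offer == first_offer:
        return "weak-strategic"        # perseverated
    adjusted_up = second_offer > first_offer
    if (not first_accepted and adjusted_up) or (first_accepted and not adjusted_up):
        return "strong-strategic"      # raised after rejection, or lowered after acceptance
    return "non-strategic"             # lowered after rejection, or raised after acceptance

# Example: a Proposer whose 8 NIS offer was rejected and who then offers 10 NIS
print(classify_proposer(first_offer=8, first_accepted=False, second_offer=10))
# -> "strong-strategic"
```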


[Figure: the highest proportion of non-strategic Proposers is in the schizophrenic group]
The authors do not offer much explanation for these results:

In the present framework, schizophrenic patients seemed to deal with the cognition-emotion conflict described in the fMRI study of Sanfey et al. (2003) [NOTE: the authors of the first neuroeconomics Ultimatum study] in a manner similar to that of healthy controls. However, it is important to note that the low proportion of rejections throughout the whole experiment makes this conclusion questionable.
Another study, however, shows that "siblings of patients with schizophrenia rejected unfair offers more often compared to control participants" (van 't Wout et al., 2006, chap. 12), thus suggesting that Responder behavior might be, after all, different in people with a genetic liability to schizophrenia. Yet another unresolved issue!


References
  • Agay, N., Kron, S., Carmel, Z., Mendlovic, S., & Levkovitz, Y. Ultimatum bargaining behavior of people affected by schizophrenia. Psychiatry Research, In Press, Corrected Proof.
  • Hamann, J., Cohen, R., Leucht, S., Busch, R., & Kissling, W. (2007). Shared decision making and long-term outcome in schizophrenia treatment. The Journal of clinical psychiatry, 68(7), 992-7.
  • Heerey, E. A., Robinson, B. M., McMahon, R. P., & Gold, J. M. (2007). Delay discounting in schizophrenia. Cognitive neuropsychiatry, 12(3), 213-21.
  • Hyojin Kim, Daeyeol Lee, Shin, Y., & Jeanyung Chey. (2007). Impaired strategic decision making in schizophrenia. Brain Res.
  • Lee, Y., Kim, Y., Seo, E., Park, O., Jeong, S., Kim, S. H., et al. (2007). Dissociation of emotional decision-making from cognitive decision-making in chronic schizophrenia. Psychiatry research, 152(2-3), 113-20.
  • Mascha van ’t Wout, Ahmet Akdeniz, Rene S. Kahn, Andre Aleman. Vulnerability for schizophrenia and goal-directed behavior: the Ultimatum Game in relatives of patients with schizophrenia. (manuscript), from The nature of emotional abnormalities in schizophrenia: Evidence from patients and high-risk individuals / Mascha van 't Wout, 2006, Proefschrift Universiteit Utrecht.
  • McKay, R., Langdon, R., & Coltheart, M. (2007). Jumping to delusions? Paranoia, probabilistic reasoning, and need for closure. Cognitive neuropsychiatry, 12(4), 362-76.
  • Sevy, S., Burdick, K. E., Visweswaraiah, H., Abdelmessih, S., Lukin, M., Yechiam, E., et al. (2007). Iowa Gambling Task in schizophrenia: A review and new data in patients with schizophrenia and co-occurring cannabis use disorders. Schizophrenia Research, 92(1-3), 74-84.



9/7/07

A neuroeconomic picture of valuation



Values are everywhere: ethics, politics, economics and law, for instance, all deal with values. They all involve socially situated decision-makers compelled to make evaluative judgments and act upon them. These spheres of human activity are aspects of the same problem, namely assessing how 'good' something is. This means that value 'runs' on valuation mechanisms. I suggest here a framework for understanding valuation.

I define valuation as the process by which a system maps an object, property or event X onto a value space, and a valuation mechanism as the device implementing the mapping between X and the value space. I do not suggest that values need to be explicitly represented as a space: by value space, I mean an artifact that accounts for the similarity between values by plotting each of them as a point in a multidimensional coordinate system. Color spaces, for instance, are not conscious representations of colors, but spatial depictions of color similarity along several dimensions such as hue, saturation and brightness.

The simplest value space, and the most common across the phylogenetic spectrum, has two dimensions: valence (positive or negative) and magnitude. Valence distinguishes between things that are liked and things that are not. Thus if X has a negative valence, it does not imply that X will be avoided, but only that it is disliked. Magnitude encodes the degree of liking or disliking. Other dimensions might be added, such as temporality (whether X is located in the present, past or future), other- vs. self-regarding, excitatory vs. inhibitory, or basic vs. complex; but the core of any value system is valence and magnitude, because these two parameters are required to establish rankings. To prefer Heaven to Hell, Democrats to Republicans, salad to meat, or sweet to bitter involves valence and magnitude.
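To make the idea of a value space concrete, here is a minimal sketch of a two-dimensional (valence, magnitude) space and the ranking it induces. The representation and the sample entries are purely illustrative assumptions, not a claim about how brains encode value:

```python
from dataclasses import dataclass

@dataclass
class Value:
    valence: int       # +1 liked, -1 disliked
    magnitude: float   # strength of liking/disliking, e.g. 0..1

    def score(self) -> float:
        """Collapse the two dimensions into a single number for ranking."""
        return self.valence * self.magnitude

# Illustrative value space for one (hypothetical) agent
value_space = {
    "sweet":  Value(valence=+1, magnitude=0.8),
    "salad":  Value(valence=+1, magnitude=0.4),
    "bitter": Value(valence=-1, magnitude=0.6),
}

# Preferences follow from the ordering induced by the space
ranking = sorted(value_space, key=lambda x: value_space[x].score(), reverse=True)
print(ranking)  # ['sweet', 'salad', 'bitter']
```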

Nature endowed many animals (mostly vertebrates) with fast and intuitive valuation mechanisms: emotions.[1] Although it is a truism in psychology and philosophy of mind that there is no crisp definition of what emotions are,[2] I will consider here that an emotion is any kind of neural process whose function is to attribute a valence and a magnitude to something else and whose operative mode is somatic markers. Somatic markers[3] are bodily states that ‘mark’ options as advantageous or disadvantageous, such as skin conductance, cardiac rhythm, etc. Through learning, bodily states become linked to neural representations of the stimuli that brought about these states. These neural structures may later reactivate the bodily states, or a simulation of these states, and thereby indicate the valence and magnitude of stimuli. These states may or may not account for many legitimate uses of the word “emotions”, but they constitute meaningful categories that could identify natural kinds[4]. In order to avoid confusion between folk-psychological and scientific categories, I will rather talk of affects and affective states, not emotions.

Far from being irrational passions, affective states are phylogenetically ancient valuation mechanisms. Since Darwin,[5] many biologists, philosophers and psychologists[6] have argued that they have adaptive functions such as focusing attention and facilitating communication. As Antonio Damasio and his colleagues discovered, subjects impaired in affective processing are unable to cope with everyday tasks, such as planning meetings[7]. They lose money, family and social status. However, they were completely functional in reasoning or problem-solving tasks. Moreover, they did not feel sad about their situation, even if they perfectly understood what “sad” means, and they seemed unable to learn from bad experiences. They were unable to use affect to aid decision-making, which entails that in normal subjects affect does aid decision-making. These findings suggest that decision-making needs affect, not as a set of convenient heuristics, but as a central evaluative mechanism. Without affects, it is possible to think efficiently, but not to decide efficiently (affective areas, however, are solicited in subjects who learn to recognize logical errors[8]).

Affects, and especially the so-called ‘basic’ or ‘core’ ones such as anger, disgust, liking and fear,[9] are prominent explanatory concepts in neuroeconomics. The study of valuation mechanisms reveals how the brain values certain objects (e.g. money), situations (e.g. investment, bargaining) or parameters (risk, ambiguity) of an economic nature. Three kinds of mechanisms are typically invoked in neuroeconomic explanations:

  1. Core affect mechanisms, such as fear (amygdala), disgust (anterior insula) and pleasure (nucleus accumbens), encode the magnitude and valence of stimuli.
  2. Monitoring and integration mechanisms (ventromedial/mesial prefrontal cortex, orbitofrontal cortex, anterior cingulate cortex) combine different values and memories of values together.
  3. Modulation and control mechanisms (prefrontal areas, especially the dorsolateral prefrontal cortex), modulate or even override other affect mechanisms.

Of course, there is no simple one-to-one mapping between psychological functions and neural structures, but cognitive and affective neuroscience assumes a certain dominance and regularity of function. Disgust does not reduce to insular activation, but the anterior insula is significantly involved in the physiological, cognitive and behavioral expressions of disgust. There is a bit of simplification here, due to the current state of the science, but enough to do justice to our best theories of brain functioning. I will now review two cases of individual and strategic decision-making, and show how affective mechanisms are involved in valuation[10].

In a study by Knutson et al.,[11] subjects had to choose whether or not they would purchase a product (visually presented), and then whether or not they would buy it at a certain price. Desirable products caused activation in the nucleus accumbens, while activity was detected in the insula when the price was seen as exaggerated. If the price was perceived as acceptable, insular activation was lower and mesial prefrontal structures were more solicited. The activation in these areas was a reliable predictor of whether or not subjects would buy the product: prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase. Purchasing decisions thus involve a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of purchasing (elicited in the insula). A chocolate box (one of the stimuli presented to the subjects) is located in the high-magnitude, positive-valence region of the value space, while the same chocolate box priced at $80 is located in the high-magnitude, negative-valence region of the space.
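Read schematically, the study describes a trade-off that can be written as a toy decision rule. The weights and threshold below are invented for illustration; Knutson et al. actually fit a logistic regression to trial-by-trial activations rather than anything this simple:

```python
def predict_purchase(nacc: float, mpfc: float, insula: float,
                     w=(1.0, 1.0, 1.0), threshold=0.0) -> bool:
    """Schematic trade-off between anticipated pleasure and the pain of paying.

    nacc   : activation associated with product desirability
    mpfc   : activation associated with a favourable price
    insula : activation associated with an excessive price
    Weights and threshold are illustrative assumptions, not fitted values.
    """
    w_nacc, w_mpfc, w_insula = w
    net_value = w_nacc * nacc + w_mpfc * mpfc - w_insula * insula
    return net_value > threshold

# A desirable product at a reasonable price vs. the same product overpriced
print(predict_purchase(nacc=0.8, mpfc=0.6, insula=0.2))  # True  -> buy
print(predict_purchase(nacc=0.8, mpfc=0.1, insula=0.9))  # False -> don't buy
```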

In the ultimatum game, a ‘proposer’ makes an offer to a ‘responder’, who can either accept or reject it. The offer is a split of an amount of money. If the responder accepts, she keeps the offered amount while the proposer keeps the difference. If she rejects it, however, both players get nothing. Orthodox game theory recommends that proposers offer the smallest possible amount and that responders accept every proposition, but studies consistently show that subjects make fair offers (about 40% of the amount) and reject unfair ones (less than 20%)[12]. Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder’s brain, a kind of ‘moral disgust’: the anterior insula is more active when unfair offers are proposed,[13] and insular activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers[14]. Moreover, unfair offers are associated with greater skin conductance[15]. Visceral and insular responses occur only when the proposer is a human: a computer does not elicit these reactions. Besides the anterior insula, the dorsolateral prefrontal cortex (DLPFC) is also recruited in ultimatum decisions. When there is more activity in the anterior insula than in the DLPFC, unfair offers tend to be rejected; when DLPFC activation is greater than insular activation, they tend to be accepted.
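The relative-activation pattern reported here can likewise be summarized as a one-line decision sketch (the 'activation' numbers are placeholders, not measured values):

```python
def responder_decision(insula: float, dlpfc: float) -> str:
    """Reject when anterior insula activity dominates DLPFC activity,
    accept otherwise (a schematic reading of the pattern described above)."""
    return "reject" if insula > dlpfc else "accept"

print(responder_decision(insula=0.7, dlpfc=0.4))  # unfair offer -> "reject"
print(responder_decision(insula=0.3, dlpfc=0.6))  # fair offer   -> "accept"
```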

These two experiments illustrate how neuroeconomics is beginning to decipher the value space and how valuation relies on affective mechanisms. Although human valuation is more complex than the simple valence-magnitude space, this ‘neuro-utilitarian’ framework is useful for interpreting imaging and behavioral data: for instance, we need an explanation for insular activation in purchasing and ultimatum decisions, and the simplest and most informative one, as of today, is that it reflects a simulated disgust. More generally, it also reveals that the human value space is profoundly social: humans value fairness and reciprocity. Cooperation[16] and altruistic punishment[17] (punishing cheaters at a personal cost even when the probability of future interactions is nil), for instance, activate the nucleus accumbens and other pleasure-related areas. People like to cooperate and to make fair offers.

Neuroeconomic experiments also indicate how value spaces can be similar across species. It is known, for instance, that in humans losses[18] elicit activity in fear-related areas such as the amygdala. Since capuchin monkeys' behavior also exhibits loss aversion[19] (i.e., a greater sensitivity to losses than to equivalent gains), behavioral evidence and neural data suggest that the neural implementation of loss aversion in primates involves common valuation mechanisms and processing. The primate value space (and maybe the mammalian or even the vertebrate one) locates losses in a particular region.
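Loss aversion is usually captured by a value function that is steeper for losses than for gains. Here is a minimal prospect-theory-style sketch; the loss-aversion coefficient of 2 is a conventional illustration, not an estimate from the human or capuchin data cited above:

```python
def subjective_value(x: float, lam: float = 2.0) -> float:
    """Piecewise-linear value function: losses loom larger than gains.

    lam > 1 encodes loss aversion; lam = 2 is a conventional illustration,
    not an estimate from the studies cited in the text.
    """
    return x if x >= 0 else lam * x

print(subjective_value(10))   # +10
print(subjective_value(-10))  # -20: the same nominal loss hurts twice as much
```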

Notes

  • [1] (Bechara & Damasio, 2005; Bechara et al., 1997; Damasio, 1994, 2003; LeDoux, 1996; Naqvi et al., 2006; Panksepp, 1998)
  • [2] (Faucher & Tappolet, 2002; Griffiths, 2004; Russell, 2003)
  • [3] (Bechara & Damasio, 2005; Damasio, 1994; Damasio et al., 1996)
  • [4] (Griffiths, 1997)
  • [5] (Darwin, 1896)
  • [6] (Cosmides & Tooby, 2000; Paul Ekman, 1972; Griffiths, 1997)
  • [7] (Damasio, 1994)
  • [8] (Houde & Tzourio-Mazoyer, 2003)
  • [9] (Berridge, 2003; P. Ekman, 1999; Griffiths, 1997; Russell, 2003; Zajonc, 1980)
  • [10] The material for this part is partly drawn from (Hardy-Vallée, forthcoming)
  • [11] (Knutson et al., 2007).
  • [12] (Oosterbeek et al., 2004)
  • [13] (Sanfey et al., 2003)
  • [14] (Sanfey et al., 2003: 1756)
  • [15] (van 't Wout et al., 2006)
  • [16] (Rilling et al., 2002).
  • [17] (de Quervain et al., 2004).
  • [18] (Naqvi et al., 2006)
  • [19] (Chen et al., 2006)

References

  • Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural theory of economic decision. Games and Economic Behavior, 52(2), 336.
  • Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding Advantageously Before Knowing the Advantageous Strategy. Science, 275(5304), 1293-1295.
  • Berridge, K. C. (2003). Pleasures of the brain. Brain and Cognition, 52(1), 106.
  • Chen, M. K., Lakshminarayanan, V., & Santos, L. (2006). How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior. Journal of Political Economy, 114(3), 517-537.
  • Cosmides, L., & Tooby, J. (2000). Evolutionary psychology and the emotions. Handbook of Emotions, 2, 91-115.
  • Damasio, A. R. (1994). Descartes' error : emotion, reason, and the human brain. New York: Putnam.
  • Damasio, A. R. (2003). Looking for Spinoza : joy, sorrow, and the feeling brain (1st ed.). Orlando, Fla. ; London: Harcourt.
  • Damasio, A. R., Damasio, H., & Christen, Y. (1996). Neurobiology of decision-making. Berlin ; New York: Springer.
  • Darwin, C. (1896). The expression of the emotions in man and animals (Authorized ed.). New York: D. Appleton.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., et al. (2004). The neural basis of altruistic punishment. Science, 305(5688), 1254-1258.
  • Ekman, P. (1972). Emotion in the human face: guide-lines for research and an integration of findings. New York,: Pergamon Press.
  • Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion (pp. 45-60). Sussex John Wiley & Sons, Ltd.
  • Faucher, L., & Tappolet, C. (2002). Fear and the focus of attention. Consciousness & emotion, 3(2), 105-144.
  • Griffiths, P. E. (1997). What emotions really are : the problem of psychological categories. Chicago, Ill.: University of Chicago Press.
  • Griffiths, P. E. (2004). Emotions as Natural and Normative Kinds. Philosophy of Science, 71, 901–911.
  • Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural foundations of logical and mathematical cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural predictors of purchases. Neuron, 53(1), 147-156.
  • LeDoux, J. E. (1996). The emotional brain : the mysterious underpinnings of emotional life. New York: Simon & Schuster.
  • Naqvi, N., Shiv, B., & Bechara, A. (2006). The Role of Emotion in Decision Making: A Cognitive Neuroscience Perspective. Current Directions in Psychological Science, 15(5), 260-264.
  • Oosterbeek, H., Sloof, R., & van de Kuilen, G. (2004). Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics, 7, 171-188.
  • Panksepp, J. (1998). Affective neuroscience : the foundations of human and animal emotions. New York: Oxford University Press.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychol Rev, 110(1), 145-172.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755-1758.
  • van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective state and decision-making in the Ultimatum Game. Exp Brain Res, 169(4), 564-568.
  • Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2), 151-175.





8/5/07

Greed—for lack of a better word—is not necessarily good: the other Adam Smith and the economics of altruism

In one of the most quoted passages of the Wealth of Nations, Adam Smith argues that economic self-interest leads to collective optima:

It is not from the benevolence of the butcher, the brewer or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves not to their humanity but to their self-love, and never talk to them of our necessities but of their advantages. Nobody but a beggar chooses to depend chiefly upon the benevolence of their fellow-citizens.
Everybody will remember the famous Gordon Gekko's speech in Oliver Stone's Wall Street (1987):




The point is, ladies and gentlemen, that greed—for lack of a better word—is good. Greed is right. Greed works. Greed clarifies, cuts through, and captures the essence of the evolutionary spirit. Greed, in all of its forms—greed for life, for money, for love, knowledge—has marked the upward surge of mankind.

Things may not be that simple. While the standard Homo economicus model represents agents as exclusively motivated by their material self-interest, economic theories of fairness put forth the picture of Homo reciprocans, an agent whose utility function incorporates social parameters (Bowles & Gintis, 2002; see Fehr and Schmidt, 2003, for a review). Economic theories of fairness fall into two categories: outcome-based models and intention-based models. The former explain fairness as the product of players’ aversion to inequity (Bolton and Ockenfels, 2000; Fehr and Schmidt, 1999; see also this post). Players are sensitive to the distributive consequences of strategic interactions and prefer resource allocations that reduce inequity: they negatively value a discrepancy between their own payoff and an equitable payoff (whether it is the mean payoff or another player’s payoff). The latter explain fairness as the product of players’ reciprocation of perceived kindness or unkindness (Rabin, 1993; Dufwenberg & Kirchsteiger, 2004): more than the outcome of an interaction, fairness is motivated by the attributed intention. For instance, in an ultimatum game where the proposer’s options are restricted to a 50/50 and an 80/20 split, the 80/20 offer is frequently rejected; when the proposer’s options are 20/80 and 80/20, however, the 80/20 offer is rejected less often (Falk, Fehr & Fischbacher, 2003). Decision-makers value the same option differently depending on whether it is perceived as reflecting an intention to be fair (valued positively) or not (valued negatively). Since both parameters appear to be important, many models integrate both intentions and outcomes (Fehr & Schmidt, 2003; Falk & Fischbacher, 2006).
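For the two-player case, the Fehr-Schmidt (1999) model makes the outcome-based idea explicit: utility is one's own payoff minus a penalty for disadvantageous inequity (weighted by alpha) and a penalty for advantageous inequity (weighted by beta). The sketch below uses illustrative parameter values, not estimates:

```python
def fehr_schmidt_utility(own: float, other: float,
                         alpha: float = 0.8, beta: float = 0.3) -> float:
    """Two-player Fehr-Schmidt (1999) inequity-aversion utility.

    alpha: aversion to disadvantageous inequity (the other earns more)
    beta : aversion to advantageous inequity (I earn more)
    Parameter values here are illustrative assumptions.
    """
    envy = max(other - own, 0.0)
    guilt = max(own - other, 0.0)
    return own - alpha * envy - beta * guilt

# A responder facing an 80/20 split of $10 may prefer $0 for both:
print(fehr_schmidt_utility(own=2, other=8))  # 2 - 0.8*6 = -2.8 -> worse than rejecting
print(fehr_schmidt_utility(own=0, other=0))  # 0.0
```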
A common feature of these models is the preservation of the optimality assumption: although they all suggest that the standard utility function should incorporate additional parameters, they do not reject the idea that agents are internally rational: agents maximize a non-classical utility function.

Hence it is not surprising that contemporary research is increasingly interested in the "first" Adam Smith, who wrote in the Theory of Moral Sentiments:
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.
In "Adam Smith, Behavioral Economist", Ashraf et al. (2005, The Journal of Economic Perspectives, 19, 131-145) discusses the relevance of Smith for experimental economics. In "The Two Faces of Adam Smith" (Southern Economic Journal, 65, 1-19), another Smith (Vernon) analyses the dual nature of Smith's (Adam) writing.

Finally, (found thanks to Mind Hacks) there is an excellent paper in the last Scientific American on the economics of fairness and other moral sentiments:

Is Greed Good?
Economists are finding that social concerns often trump selfishness in financial decision making, a view that helps to explain why tens of millions of people send money to strangers they find on the Internet
By Christoph Uhlhaas
There will be a conference to commemorate the 250th anniversary of The Theory of Moral Sentiments in 2009 in Oxford (see the CFP on the PhilEcon website).

References
  • Ashraf, N., Camerer, C. F., & Loewenstein, G. (2005). Adam Smith, Behavioral Economist. The Journal of Economic Perspectives, 19, 131-145.
  • Bolton, G. E., & Ockenfels, A. (2000). ERC: A Theory of Equity, Reciprocity, and Competition. The American Economic Review, 90(1), 166-193.
  • Bowles, S., & Gintis, H. (2002). Behavioural science: Homo reciprocans. Nature, 415(6868), 125-128.
  • Bowles, S., & Gintis, H. (2004). The evolution of strong reciprocity: cooperation in heterogeneous populations. Theoretical Population Biology, 65(1), 17-28.
  • Dufwenberg, M., & Kirchsteiger, G. (2004). A theory of sequential reciprocity. Games and Economic Behavior, 47(2), 268-298.
  • Falk, A., Fehr, E., & Fischbacher, U. (2003). On the Nature of Fair Behavior. Economic Inquiry, 41(1), 20-26.
  • Falk, A., & Fischbacher, U. (2006). A theory of reciprocity. Games and Economic Behavior, 54(2), 293-315.
  • Fehr, E., & Fischbacher, U. (2002). Why social preferences matter: The impact of non-selfish motives on competition, cooperation and incentives. Economic Journal, 112, C1-C33.
  • Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13(1), 1-25.
  • Fehr, E., & Rockenbach, B. (2004). Human altruism: economic, neural, and evolutionary perspectives. Curr Opin Neurobiol, 14(6), 784-790.
  • Fehr, E., & Schmidt, K. (2003). Theories of Fairness and Reciprocity – Evidence and Economic Applications. In M. Dewatripont, L. Hansen & S. Turnovsky (Eds.), Advances in Economics and Econometrics - 8th World Congress (pp. 208-257).
  • Fehr, E., & Schmidt, K. M. (1999). A Theory Of Fairness, Competition, and Cooperation. Quarterly Journal of Economics, 114(3), 817-868.
  • Rabin, M. (1993). Incorporating Fairness into Game Theory and Economics. The American Economic Review, 83(5), 1281-1302.
  • Smith, V. L. (1998). The Two Faces of Adam Smith. Southern Economic Journal, 65, 1-19.




8/3/07

Kahneman and Sunstein on moral psychology and institutions

What can a professor of law and political science and a Nobel-prize-winning economic psychologist/behavioral economist write about together? The interplay of cognition and institutions, of course! In a paper posted on the SSRN website, Kahneman and Sunstein discuss how the combination of dual-process theories of cognition (the idea that we have a fast and intuitive "System I" and a deliberative "System II") and research on moral intuitions can help us understand institutional decision-making:

Moral intuitions operate in much the same way as other intuitions do; what makes the moral domain distinctive is its foundations in the emotions, beliefs, and response tendencies that define indignation. The intuitive system of cognition, System I, is typically responsible for indignation; the more reflective system, System II, may or may not provide an override. Moral dumbfounding and moral numbness are often a product of moral intuitions that people are unable to justify. An understanding of indignation helps to explain the operation of the many phenomena of interest to law and politics: the outrage heuristic, the centrality of harm, the role of reference states, moral framing, and the act-omission distinction. Because of the operation of indignation, it is extremely difficult for people to achieve coherence in their moral intuitions. Legal and political institutions usually aspire to be deliberative, and to pay close attention to System II; but even in deliberative institutions, System I can make some compelling demands.
[found thanks to The Brooks blog]

On dual-process theories of reasoning, see Stanovich, Keith E. and West, Richard F. (2000). Individual Differences in Reasoning: Implications for the Rationality Debate? (BBS online archive).


References:
  • Kahneman, Daniel and Sunstein, Cass R., "Indignation: Psychology, Politics, Law" (July 2007). U of Chicago Law & Economics, Olin Working Paper No. 346 Available at SSRN: http://ssrn.com/abstract=1002707



7/30/07

A glimpse at the evolution of the fearing and trusting brain

Together with other mechanisms, the amygdala is involved in a complex neural circuitry that transforms photons hitting your eyes into the feeling that "Mom is mad at me because I broke her favorite vase". Often referred to as the fear center, the amygdala is more like an online supervisory system that sets levels of alert. Many of its activities are of a social nature. Explicit and implicit distrust of faces elicits amygdala activation (Winston et al., 2002), while trust is increased with amygdala impairment (Adolphs et al., 1998). Moreover, the trust-enhancing effect of oxytocin is mediated by amygdalar modulation: oxytocin reduces fear and thereby allows trusting. In a nutshell, the emotional memorization, learning and modulation performed by the amygdala obey the following flowchart:


(from Schumann 1998)

A subpart of the amygdala, the lateral nucleus, processes information about social stimuli (such as facial expressions). Autistic individuals tend to have an impaired lateral nucleus, which makes sense if this nucleus is an important social-cognitive device (autistic subjects perform poorly in tasks that involve mental-state attribution or other social inferences). According to Emery and Amaral (2000), inputs from the visual neocortex enter the amygdala through the lateral nucleus, where their "emotional meaning" is attributed (I know, it is a simplification...); the basal nucleus adds information about the social context. Hence this nucleus acts as a sensory integrator (LeDoux, 2000).

In a new paper in the American Journal of Physical Anthropology, Barger et al. studied the relative size of different nuclei of the amygdala in several primates (humans, chimpanzees, bonobos, gorillas, etc.). The study revealed that the human lateral nucleus represents a larger fraction of the amygdala:






The authors conclude:

The large size of the human L [lateral nuclei] may reflect the proliferation of the temporal lobe over the course of hominid evolution, while the inverse may be true of the gorilla. The smaller size of the orangutan AC [amygdaloid complex] and BLD [basolateral division] may be related to the diminished importance of interconnected limbic structures in this species. Further, there is some evidence that the orangutan, which exhibits one of the smallest group sizes on the continuum of primate sociality, may also be distinguished neuroanatomically from the other great apes, suggesting that social pressures may play a role in the development of the AC in association with other limbic regions.
Living in large groups may thus have shaped the evolution of our brains' emotional processing capacities. In the economy of nature, negotiating our way through a complex social world requires acute and specialized cognitive capacities in order to cooperate, trust, reciprocate, etc. This research shows the potential of evolutionary cognitive neuroscience (see this post).


References


  • Adolphs R, Tranel D, Damasio AR (1998) The human amygdala in social judgment. Nature 393: 470-474.
  • Barger, N., Stefanacci, L., & Semendeferi, K. (2007). A comparative volumetric analysis of the amygdaloid complex and basolateral division in the human and ape brain. American Journal of Physical Anthropology.
  • Emery and Amaral, 2000 N.J. Emery and D.G. Amaral, The role of amygdala in primate social cognition. In: R.D. Lane and L. Nadel, Editors, Cognitive Neuroscience of Emotion, Oxford Univ. Press, New York (2000), pp. 156–191.
  • LeDoux JE (2000) Emotion circuits in the brain. Annu Rev Neurosci 23: 155-184
  • Schumann, J.A. 1998. Language Learning. Vol. 48 Issue s1 Page ix-326
  • Winslow JT, Insel TR (2004) Neuroendocrine basis of social recognition. Curr Opin Neurobiol 14: 248-253



7/23/07

Ten major ideas and findings in behavioral decision research in the last 50 years

  1. judgment can be modeled
  2. bounded rationality
  3. to understand decision making, understanding tasks is more important than understanding people
  4. levels of aspiration or reference points and loss aversion
  5. heuristic rules
  6. adding and the importance of simple models
  7. the search for confirmation
  8. the evasive nature of risk perception
  9. the construction of preference
  10. the roles of emotions, affect, and intuition.

from:



The selective impairment of prosocial sentiments and the moral brain


Philosophers often describe the history of philosophy as a dispute between Plato (read: idealism/rationalism) and Aristotle (read: materialism/empiricism). This is of course extremely reductionist, since many conceptual and empirical issues were not addressed in Ancient Greece, but there is a non-trivial interpretation of the history of thought according to which controversies often involve these two positions. In moral philosophy and moral psychology, however, the big figures are Hume and Kant. Is morality based on passions (Hume) or reasons (Kant)? This is another simplification, but again it frames the debate. In the latest issue of Trends in Cognitive Sciences (TICS), three papers discuss the reason/emotion debate and provide more acute models.

Recently (see this previous post), Koenigs and collaborators (2007b) explored the consequences of ventromedial prefrontal cortex (VMPC) lesions for moral reasoning and showed that these patients tend to rely a little more on a 'utilitarian' scheme (cost/benefit) and less on a deontological scheme (moral do's and don'ts), thus suggesting that emotions are involved in deontological moral judgment. These patients, however, were also more emotional in the Ultimatum game, and rejected more offers than normal subjects. So are they emotional or not? In the first TICS paper, Moll and de Oliveira-Souza review the Koenigs et al. (2007a) experiment and argue that neither somatic markers nor dual-process theory explains these findings. They propose that a selective impairment of prosocial sentiments explains why the same patients are both less emotional in moral dilemmas and more emotional in economic bargaining: they can feel less compassion but still feel anger. In a second paper, Greene (author of the research on trolley problems; see his homepage) challenges this interpretation and puts forward his dual-process view (reason-emotion interaction). Moll and de Oliveira-Souza reply in the third paper. As you can see, there is still a debate between Kant and Hume, but cognitive neuroscience provides new tools for both sides of the debate, and maybe even a blurring of these opposites.





7/11/07

Decision-Making: A Neuroeconomic Perspective

I have put a new paper on my homepage:

Decision-Making: A Neuroeconomic Perspective

Here is the abstract:

This article introduces and discusses from a philosophical point of view the nascent field of neuroeconomics, which is the study of neural mechanisms involved in decision-making and their economic significance. Following a survey of the ways in which decision-making is usually construed in philosophy, economics and psychology, I review many important findings in neuroeconomics to show that they suggest a revised picture of decision-making and ourselves as choosing agents. Finally, I outline a neuroeconomic account of irrationality.

Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass. [PDF]

This paper is the first in my philosophical exploration of neuroeconomics, and I would gladly welcome your comments and suggestions for subsequent research. Email me at benoithv@gmail.com.



3/28/07

A Neuropolitical Look at Political Psychology

In Politics on the Brain: An fMRI Investigation (a paper forthcoming in Social Neuroscience), Knutson et al. show that political preferences (whether you prefer George Bush or Hillary Clinton) recruit two different neural circuits: one rapid, stereotypic and emotional (ventromedial PFC and amygdala), the other more deliberative (anterior prefrontal activations). When subjects were shown an image of a politician's face, both systems fired. Thus it seems that liking or disliking a politician is an affective reaction modulated by other knowledge, probably political values. The strength of affiliation with a party (in this case, Democrat vs. Republican) correlated negatively with PFC activation: emotional markers are thus not principally signs of political orientation, but signs of personal affiliation. Political orientation may modulate personal affiliation when, for instance, one agrees with a party without endorsing its leader's opinions. Somewhere in Man's Fate, Andre Malraux wrote (I quote approximately, as I cannot find the original passage): "men who follow ideas, in the end, always follow individuals". Most of our cognition is affective, and most of our affections are social. This is why you will always see politicians shaking hands, kissing babies and smiling: they recruit brain structures involved in social-affective cognition. Political education, then, should consist in learning how to override these emotional reactions, and research on emotional learning is thus important for democracy.
Another related study, in the latest issue of Social Cognitive and Affective Neuroscience, might be of interest for neuropolitics. An article on the neural mechanisms of social fear transmission concludes that learned fear can be "as powerful as fears originating from direct experiences." Thus emotional education (a form of emotional learning) can induce genuine emotions - at least fear.
Together, these studies suggest that "affective political cognition" could be not only theoretically interesting, but highly important for policy making.