Natural Rationality | decision-making in the economy of nature
Showing posts with label valuation. Show all posts

9/29/07

How is (internal) irrationality possible?

Much unhappiness (...) has less to do with not getting what we want, and more to do with not wanting what we like. (Gilbert & Wilson, 2000)

Yes, we should make choices by multiplying probabilities and utilities, but how can we possibly do this if we can’t estimate those utilities beforehand? (Gilbert, 2006)

When we preview the future and prefeel its consequences, we are soliciting advice from our ancestors. This method is ingenious but imperfect. (Gilbert et al., 2007)


Although we easily and intuitively assess each other's behavior, speech, or character as irrational, providing a non-trivial account of irrationality can be tricky (something we philosophers like to deal with!). Let's distinguish internal and external assessments of rationality. An internal (or subjective) assessment of rationality evaluates the coherence of intentions, actions and plans. An external (or objective) assessment of rationality evaluates the effectiveness of a rule or procedure: it assesses how optimal a rule is for achieving a certain goal. An action can be rational from the first perspective but not from the second, and vice versa. Hence subjects' poor performance in probabilistic reasoning can be internally rational without being externally rational: the Gambler's fallacy is and always will be a fallacy, but it is possible that fallacious reasoners follow coherent rules, maximizing an unorthodox utility function. Consequently, it is easy to understand how one can be externally irrational, but less easy to make sense of internal irrationality.

An interesting suggestion comes from hedonic psychology, and mostly from Dan Gilbert's research: irrationality is possible if agents fail to want things they like. Gilbert's research focuses on affective forecasting, i.e., forecasting one's affect (emotional state) in the future (Gilbert, 2006; Wilson & Gilbert, 2003): anticipating the valence, intensity, duration and nature of specific emotions. Just as Tversky and Kahneman studied biases in probabilistic reasoning, Gilbert and his collaborators study biases in hedonic reasoning.

In many cases, for instance, people do not like or dislike an event as much as they thought they would. They want things that do not promote their welfare, and do not want things that would. This is what Gilbert calls "miswanting". We miswant, Gilbert explains, because of affective forecasting biases.

Take for instance impact biases: subjects overestimate the duration (durability bias) or intensity (intensity bias) of future emotional states (Gilbert et al., 1998):

“Research suggests that people routinely overestimate the emotional impact of negative events ranging from professional failures and romantic breakups to electoral losses, sports defeats, and medical setbacks”. (Gilbert et al., 2004).

They also underestimate the emotional impact of positive events such as winning a lottery (Brickman et al., 1978): newly rich lottery winners rated their happiness at that stage of their life as only 4.0 on a 6-point scale (0 to 5), which does not differ significantly from the ratings of control subjects. Also surprising to many people is the fact that paraplegics and quadriplegics rated their lives at 3.0, above the midpoint of the scale (2.5). In another study, Boyd et al. (1990) solicited the utility of life with a colostomy from several different groups: patients with rectal cancer treated by radiation, patients with rectal cancer treated by colostomy, physicians experienced in treating gastrointestinal malignancies, and two groups of healthy individuals. The patients with a colostomy and the physicians rated life with a colostomy significantly higher than did the other three groups.

Another bias is the empathy gap: humans fail to predict correctly how they will feel in the future, i.e., what kind of emotional state they will be in. Sometimes we fail to take into account how much our psychological "immune system" will soften reactions to negative events: people do not realize how they will rationalize negative outcomes once they occur (immune neglect). People also often mispredict regret (Gilbert et al., 2004b):
the top six biggest regrets in life center on (in descending order) education, career, romance, parenting, the self, and leisure. (…) people's biggest regrets are a reflection of where in life they see their largest opportunities; that is, where they see tangible prospects for change, growth, and renewal. (Roese & Summerville, 2005).
So a perfectly rational agent, at time t, would choose to do X at t+1 given what she expects her future valuation of X to be. As these studies show, however, we are bad predictors of our own future subjective appreciation: the person we are at t+1 may not entirely agree with the person we were at t. In one sense, this gives a non-trivial meaning to internal irrationality: since our affective forecasting competence is biased, we may not always choose what we like or like what we choose. Hedonic psychology may thus have identified an incoherence between intentions, actions and plans, an internal failure of our practical rationality.
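The structure of miswanting can be sketched in a few lines of code. In this toy model (every number here is invented purely for illustration), one option delivers an intense but fleeting pleasure while another delivers a mild but durable one; a forecaster who projects the initial emotional peak over the whole horizon, neglecting hedonic adaptation, prefers the option that yields less experienced affect overall:

```python
# Toy model of miswanting via impact bias. All parameters are
# hypothetical; this only illustrates the structure of the argument.

def experienced_affect(peak, adaptation_rate, horizon=10):
    """Total affect actually experienced: the initial peak decays
    toward baseline (0) as hedonic adaptation kicks in."""
    total, level = 0.0, peak
    for _ in range(horizon):
        total += level
        level *= 1 - adaptation_rate
    return total

def forecast_affect(peak, horizon=10):
    """Forecasted affect: impact bias projects the initial peak
    across the whole horizon (immune neglect: no adaptation)."""
    return peak * horizon

# Intense but fleeting (a lottery win) vs. mild but durable.
lottery = dict(peak=5.0, adaptation_rate=0.6)
friendship = dict(peak=2.0, adaptation_rate=0.05)
```

Under these made-up parameters the forecast ranks the lottery far above the friendship, while experienced affect ranks them the other way around: the agent wants what it will not like.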


References

  • Berns, G. S., Chappelow, J., Cekic, M., Zink, C. F., Pagnoni, G., & Martin-Skurski, M. E. (2006). Neurobiological Substrates of Dread. Science, 312(5774), 754-758.
  • Boyd, N. F., Sutherland, H. J., Heasman, K. Z., Tritchler, D. L., & Cummings, B. J. (1990). Whose Utilities for Decision Analysis? Med Decis Making, 10(1), 58-67.
  • Brickman, P., Coates, D., & Janoff-Bulman, R. (1978). Lottery Winners and Accident Victims: Is Happiness Relative? J Pers Soc Psychol, 36(8), 917-927.
  • Gilbert, D. T. (2006). Stumbling on Happiness (1st ed.). New York: A.A. Knopf.
  • Gilbert, D. T., & Ebert, J. E. J. (2002). Decisions and Revisions: The Affective Forecasting of Changeable Outcomes. Journal of Personality and Social Psychology, 82(4), 503–514.
  • Gilbert, D. T., Lieberman, M. D., Morewedge, C. K., & Wilson, T. D. (2004a). The Peculiar Longevity of Things Not So Bad. Psychological Science, 15(1), 14-19.
  • Gilbert, D. T., Morewedge, C. K., Risen, J. L., & Wilson, T. D. (2004b). Looking Forward to Looking Backward: The Misprediction of Regret. Psychological Science, 15(5), 346-350.
  • Gilbert, D. T., Pinel, E. C., Wilson, T. D., Blumberg, S. J., & Wheatley, T. P. (1998). Immune Neglect: A Source of Durability Bias in Affective Forecasting. J Pers Soc Psychol, 75(3), 617-638.
  • Gilbert, D. T., & Wilson, T. D. (2000). Miswanting: Some Problems in the Forecasting of Future Affective States. Feeling and thinking: The role of affect in social cognition, 178–197.
  • Kermer, D. A., Driver-Linn, E., Wilson, T. D., & Gilbert, D. T. (2006). Loss Aversion Is an Affective Forecasting Error. Psychological Science, 17(8), 649-653.
  • Loomes, G., & Sugden, R. (1982). Regret Theory: An Alternative Theory of Rational Choice under Uncertainty. The Economic Journal, 92(368), 805-824.
  • Roese, N. J., & Summerville, A. (2005). What We Regret Most... And Why. Personality and Social Psychology Bulletin, 31(9), 1273.
  • Seidl, C. (2002). Preference Reversal. Journal of Economic Surveys, 16(5), 621-655.
  • Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics. Journal of Socio-Economics, 31(4), 329-342.
  • Srivastava, A., Locke, E. A., & Bartol, K. M. (2001). Money and Subjective Well-Being: It's Not the Money, It's the Motives. J Pers Soc Psychol, 80(6), 959-971.
  • Tversky, A., & Thaler, R. H. (1990). Anomalies: Preference Reversals. Journal of Economic Perspectives, 4, 201-211.
  • Wilson, T. D., & Gilbert, D. T. (2003). Affective Forecasting. Advances in experimental social psychology, 35, 345-411.



9/25/07

My brain has a politics of its own: neuropolitic musing on values and signal detection

Political psychology (just like politicians and voters) identifies two species of political values: left/right, or liberalism/conservatism. Reviewing many studies, Thornhill & Fincher (2007) summarize the cognitive styles of both ideologies:

Liberals tend to be: against, skeptical of, or cynical about familiar and traditional ideology; open to new experiences; individualistic and uncompromising, pursuing a place in the world on personal terms; private; disobedient, even rebellious rulebreakers; sensation seekers and pleasure seekers, including in the frequency and diversity of sexual experiences; socially and economically egalitarian; and risk prone; furthermore, they value diversity, imagination, intellectualism, logic, and scientific progress. Conservatives exhibit the reverse in all these domains. Moreover, the felt need for order, structure, closure, family and national security, salvation, sexual restraint, and self-control, in general, as well as the effort devoted to avoidance of change, novelty, unpredictability, ambiguity, and complexity, is a well-established characteristic of conservatives. (Thornhill & Fincher, 2007).
In their paper, Thornhill & Fincher present an evolutionary hypothesis to explain the liberal and conservative ideologies: both originate in innate attachment adaptations, parametrized by early childhood experiences. In another but related domain, Lakoff (2002) argued that liberals and conservatives differ in their metaphors: both view the nation or the State as a child, but they hold different perspectives on how to raise it: the Strict Father model (conservatives) or the Nurturant Parent model (liberals) (see an extensive description here). The first one

posits a traditional nuclear family, with the father having primary responsibility for supporting and protecting the family as well as the authority to set overall policy, to set strict rules for the behavior of children, and to enforce the rules [where] [s]elf-discipline, self-reliance, and respect for legitimate authority are the crucial things that children must learn.


while in the second:

Love, empathy, and nurturance are primary, and children become responsible, self-disciplined and self-reliant through being cared for, respected, and caring for others, both in their family and in their community.
In the October issue of Nature Neuroscience, a new research paper by Amodio et al. studies the "neurocognitive correlates of liberalism and conservatism". The study is more modest than the title suggests. Subjects performed the same test, a Go/No-Go task (click when you see a "W", don't click when it's an "M"). The experimenters first habituated the subjects to the Go stimuli; on a few occasions, subjects were then presented with the No-Go stimulus. Since they had gotten used to the Go stimuli, the presentation of a No-Go creates a cognitive conflict between fast/automatic and slow/deliberative processing: you have to inhibit a habit in order to focus on the goal when the habit goes in the wrong direction. The idea was to study the correlation between political values and conflict monitoring. The latter is partly mediated by the anterior cingulate cortex, a brain area widely studied in neuroeconomics and decision neuroscience (see this post). EEG recordings indicated that liberals' neural response to conflict was stronger when response inhibition was required. Hence liberalism is associated with a greater sensitivity to response conflict, while conservatism is associated with a greater persistence in the habitual pattern. These results, say the authors, are

consistent with the view that political orientation, in part, reflects individual differences in the functioning of a general mechanism related to cognitive control and self-regulation
Thus valuing tradition vs. novelty, or security vs. risk, might have sensorimotor counterparts, or symptoms. Of course, this does not mean that the neural basis of conservatism has been identified, or the "liberal area", etc., but the study suggests how micro-tasks may help to elucidate, as the authors say in their closing sentence, "how abstract, seemingly ineffable constructs, such as ideology, are reflected in the human brain."

What this study, together with other data on conservatives and liberals, might justify is the following hypothesis: what if conservatives and liberals are natural kinds? That is, "homeostatic property clusters" (see Boyd 1991, 1999): categories of "things" formed by nature (like water, mammals, etc.), not by definition (like supralunar objects, non-cats, grue emeralds, etc.); things that share surface properties (political beliefs and behavior) whose co-occurrence can be explained by underlying mechanisms (here, the neural processing of conflict monitoring). Maybe our evolution as social animals required the interplay of tradition-oriented and novelty-oriented individuals, risk-prone and risk-averse agents. But why, in the first place, did evolution not select one type over the other? Here is another completely armchair hypothesis: in order to distribute the signal detection problem across the social body.

Which kind of error would you rather make: a false positive (you identify a signal, but it's only noise) or a false negative (you think it's noise, but it's a signal)? A miss or a false alarm? That is the kind of problem modeled by signal detection theory (SDT): since there is always some noise when you try to detect a signal, you cannot know in advance, under radical uncertainty, which policy (risk-averse or risk-prone) you should stick to. "Signal" and "noise" are generic information-theoretic terms that may be applied to any situation where an agent tries to determine whether a stimulus is present:




It is rather ironic that signal detection theorists employ the terms liberal* and conservative* (the "*" means that I am talking about SDT, not politics) to refer to different biases, or criteria, in signal detection. A liberal* criterion is more likely to set off a positive response (increasing the probability of false positives), whereas a conservative* criterion is more likely to set off a negative response (increasing the probability of false negatives).

The big problem in life is that in certain domains conservatism* pays, while in others liberalism* does (see Proust 2006): when identifying danger, a false negative is more expensive (better safe than sorry), whereas when looking for food a false positive can be more expensive (better satiated than exhausted). So a fixed criterion is not adaptive; but how can the criterion be adjusted properly? If you are an individual agent, you must alternate between liberal* and conservative* criteria based on your knowledge. But if you are part of a group, liberal* and conservative* biases may be distributed: certain individuals might be more liberal* (let's send them to stand and keep watch) and others more conservative* (let's send them foraging). Collectively, this could be a good solution (if it is enforced by norms of cooperation) to perpetual uncertainty and danger. So if our species evolved with a distribution of signal detection criteria, then we should have evolved different cognitive styles and personality traits that deal differently with uncertainty: those who favor habits, traditions and security, and the others. If liberal* and conservative* criteria are applied to other domains such as the family (an institution that existed before the State), you may end up with the Strict Father model and the Nurturant Parent model; when these models are applied to political decision-making, you may end up with liberals/conservatives (no "*"). That would give a new meaning to the idea that we are, by nature, political animals.
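The liberal*/conservative* contrast can be made concrete with the standard equal-variance Gaussian model of SDT. This is a sketch with arbitrary parameters, not a model of any particular study:

```python
# Minimal signal-detection sketch: noise ~ N(0, 1), signal ~ N(d', 1).
# Shifting the decision criterion trades false alarms against misses,
# which is the liberal*/conservative* contrast in SDT.

from statistics import NormalDist

def rates(d_prime, criterion):
    """Hit and false-alarm probabilities: respond 'signal' whenever
    the observation exceeds the criterion."""
    noise, signal = NormalDist(0, 1), NormalDist(d_prime, 1)
    hit = 1 - signal.cdf(criterion)
    false_alarm = 1 - noise.cdf(criterion)
    return hit, false_alarm

# A liberal* observer (low threshold) vs. a conservative* one (high threshold),
# facing the same discriminability (d' = 1).
liberal_hit, liberal_fa = rates(d_prime=1.0, criterion=0.0)
conservative_hit, conservative_fa = rates(d_prime=1.0, criterion=1.0)
```

Lowering the criterion buys hits at the price of false alarms; raising it does the reverse. Which tradeoff is adaptive depends on the payoff structure of the domain: sentry duty rewards the first setting, foraging the second.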






9/7/07

Why we need a neuroeconomic account of valuation

In my last post, I outlined an account of valuation. Whether mine is a good one is disputable, but the fact is that research in neuroeconomics, philosophy of mind, psychology, or any other field concerned with decision-making will sooner or later require a credible account of valuation, and of value. Here are two reasons why.

First, there is a significant overlap between brain areas and processes across valuation domains. While neuroeconomics, neuroethics and neuropolitics have begun to make explicit the neural mechanisms involved in these domains (Glimcher, 2003; Tancredi, 2005; Westen, 2007), attempts to cross-fertilize research are scarce. Research shows that economic, moral and political cognition involve similar brain processes. For instance, whether subjects play economic games (Rilling et al., 2002; Sanfey et al., 2003), reflect upon moral issues (Greene & Haidt, 2002; Koenigs et al., 2007) or make political judgments (Kaplan et al., 2007; Knutson et al., 2006; Westen et al., 2006), these tasks principally recruit the evaluative mechanisms described in my last post: core affect, monitoring, and control mechanisms.

Second, even within one area, neuroeconomics, it is not clear what researchers mean when they talk about the "neural substrate of economic value". Take for instance two recent studies: Seo et al. (2007) and Padoa-Schioppa (2007) both attempt to identify the brain's valuation processes. Padoa-Schioppa concludes that

A rich literature from lesion studies, functional imaging, and primate neurophysiology suggests that critical mechanisms for economic choice might take place in the orbitofrontal cortex. More specifically, recent results from single cell recordings in monkeys link OFC [Orbitofrontal Cortex] to the computation of economic value. We showed that the value representation in OFC reflects the subjective nature of economic value, and that neurons in this area encode value per se, independently of the visuo-motor contingencies of choice

Seo et al., for their part, discuss how the DLPFC contributes to decision-making:

individual neurons in the dorsolateral prefrontal cortex (DLPFC) encoded 3 different types of signals that can potentially influence the animal's future choices. First, activity modulated by the animal's previous choices might provide the eligibility trace that can be used to attribute a particular outcome to its causative action. Second, activity related to the animal's rewards in the previous trials might be used to compute an average reward rate. Finally, activity of some neurons was modulated by the computer's choices in the previous trials and may reflect the process of updating the value functions.
So where is something valuated: DLPFC or OFC? How exactly do they differ? Yes, one is about "reward" and the other about "economic value", but since money can be rewarding and food can have an economic value (utility), it is not clear that the different words refer to different processes. On top of that, there is also a huge literature on dopaminergic systems and valuation (see Montague et al., 2006, and Montague, 2006, for a complete review). So some clarification is required here. I will try, in future posts, to discuss these questions.


  • Glimcher, P. W. (2003). Decisions, uncertainty, and the brain : the science of neuroeconomics. Cambridge, Mass. ; London: MIT Press.
  • Greene, J. D., & Haidt, J. (2002). How (and where) does moral judgment work? Trends Cogn Sci, 6(12), 517-523.
  • Kaplan, J. T., Freedman, J., & Iacoboni, M. (2007). Us versus them: Political attitudes and party affiliation influence neural response to faces of presidential candidates. Neuropsychologia, 45(1), 55-64.
  • Knutson, K. M., Wood, J. N., Spampinato, M. V., & Grafman, J. (2006). Politics on the Brain: An fMRI Investigation. Soc Neurosci, 1(1), 25-40.
  • Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., et al. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446(7138), 908-911.
  • Montague, P. R., King-Casas, B., & Cohen, J. D. (2006). Imaging valuation models in human choice. Annu Rev Neurosci, 29, 417-448.
  • Montague, R. (2006). Why choose this book? : how we make decisions. New York: Penguin Group.
  • Padoa-Schioppa, C. (2007). Orbitofrontal Cortex and the Computation of Economic Value. Ann NY Acad Sci, annals.1401.1011.
  • Rilling, J., Gutman, D., Zeh, T., Pagnoni, G., Berns, G., & Kilts, C. (2002). A neural basis for social cooperation. Neuron, 35(2), 395-405.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755-1758.
  • Seo, H., Barraclough, D. J., & Lee, D. (2007). Dynamic Signals Related to Choices and Outcomes in the Dorsolateral Prefrontal Cortex. Cereb. Cortex, 17(suppl_1), i110-117
  • Tancredi, L. R. (2005). Hardwired behavior : what neuroscience reveals about morality. New York: Cambridge University Press.
  • Westen, D. (2007). The political brain : the role of emotion in deciding the fate of the nation. New York: PublicAffairs.
  • Westen, D., Blagov, P. S., Harenski, K., Kilts, C., & Hamann, S. (2006). Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election. J. Cogn. Neurosci., 18(11), 1947-1958.



A neuroeconomic picture of valuation



Values are everywhere: ethics, politics, economics and law, for instance, all deal with values. They all involve socially situated decision-makers compelled to make evaluative judgments and act upon them. These spheres of human activity constitute aspects of the same problem: estimating how 'good' something is. This means that value 'runs' on valuation mechanisms. I suggest here a framework for understanding valuation.

I define valuation as the process by which a system maps an object, property or event X onto a value space, and a valuation mechanism as the device implementing the mapping between X and the value space. I do not suggest that values need to be explicitly represented as a space: by value space, I mean an artifact that accounts for the similarity between values by plotting each of them as a point in a multidimensional coordinate system. Color spaces, for instance, are not conscious representations of colors, but spatial depictions of color similarity along several dimensions such as hue, saturation and brightness.

The simplest value space, and the most common across the phylogenetic spectrum, has two dimensions: valence (positive or negative) and magnitude. Valence distinguishes between things that are liked and things that are not. If X has a negative valence, this does not imply that X will be avoided, only that it is disliked. Magnitude encodes the degree of liking or disliking. Other dimensions might be added (temporality: whether X is located in the past, present or future; other- vs. self-regarding; excitatory vs. inhibitory; basic vs. complex), but the core of any value system is valence and magnitude, because these two parameters are required to establish rankings. To prefer Heaven to Hell, Democrats to Republicans, salad to meat, or sweet to bitter involves valence and magnitude.
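As a minimal illustration of this two-dimensional space (the items and the numbers are mine, purely illustrative), a ranking falls out of the signed product of valence and magnitude:

```python
# Sketch of the two-dimensional value space: each item is mapped to a
# (valence, magnitude) point, and preferences fall out of the signed product.

def value(valence, magnitude):
    # valence: +1 (liked) or -1 (disliked); magnitude: strength of the attitude
    return valence * magnitude

items = {
    "sweet":  value(+1, 0.8),
    "salad":  value(+1, 0.5),
    "bitter": value(-1, 0.6),
}

# Ranking = items sorted by their position along the signed value axis.
ranking = sorted(items, key=items.get, reverse=True)
```

Note that "bitter" is ranked last without being predicted to be avoided: a negative position in the space encodes dislike, not behavior.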

Nature endowed many animals (mostly vertebrates) with fast and intuitive valuation mechanisms: emotions.[1] Although it is a truism in psychology and philosophy of mind that there is no crisp definition of what emotions are,[2] I will consider here that an emotion is any kind of neural process whose function is to attribute a valence and a magnitude to something else, and whose operative mode is the somatic marker. Somatic markers[3] are bodily states that 'mark' options as advantageous or disadvantageous, such as skin conductance, cardiac rhythm, etc. Through learning, bodily states become linked to neural representations of the stimuli that brought about these states. These neural structures may later reactivate the bodily states, or a simulation of these states, and thereby indicate the valence and magnitude of stimuli. These states may or may not account for many legitimate uses of the word "emotion", but they constitute meaningful categories that could identify natural kinds.[4] To avoid confusion between folk-psychological and scientific categories, I will rather speak of affects and affective states, not emotions.

Far from being irrational passions, affective states are phylogenetically ancient valuation mechanisms. Since Darwin,[5] many biologists, philosophers and psychologists[6] have argued that they have adaptive functions, such as focusing attention and facilitating communication. As Antonio Damasio and his colleagues discovered, subjects impaired in affective processing are unable to cope with everyday tasks, such as planning meetings.[7] They lose money, family and social status. Yet they remain completely functional in reasoning or problem-solving tasks. Moreover, they do not feel sad about their situation, even if they perfectly understand what "sad" means, and they seem unable to learn from bad experiences. They are unable to use affect to aid decision-making, a hypothesis that entails that in normal subjects, affect does aid decision-making. These findings suggest that decision-making needs affect, not as a set of convenient heuristics, but as a central evaluative mechanism. Without affects, it is possible to think efficiently, but not to decide efficiently (affective areas, however, are solicited in subjects who learn to recognize logical errors[8]).

Affects, and especially the so-called 'basic' or 'core' ones such as anger, disgust, liking and fear,[9] are prominent explanatory concepts in neuroeconomics. The study of valuation mechanisms reveals how the brain values certain objects (e.g. money), situations (e.g. investment, bargaining) or parameters (e.g. risk, ambiguity) of an economic nature. Three kinds of mechanisms are typically invoked in neuroeconomic explanations:

  1. Core affect mechanisms, such as fear (amygdala), disgust (anterior insula) and pleasure (nucleus accumbens), encode the magnitude and valence of stimuli.
  2. Monitoring and integration mechanisms (ventromedial/mesial prefrontal cortex, orbitofrontal cortex, anterior cingulate cortex) combine different values and memories of values.
  3. Modulation and control mechanisms (prefrontal areas, especially the dorsolateral prefrontal cortex) modulate or even override the other affect mechanisms.

Of course, there is no simple mapping between psychological functions and neural structures, but cognitive and affective neuroscience assume a certain dominance and regularity in functions. Disgust does not reduce to insular activation, but the anterior insula is significantly involved in the physiological, cognitive and behavioral expressions of disgust. There is a bit of simplification here, due to the current state of the science, but enough to do justice to our best theories of brain functioning. I will now review two cases of individual and strategic decision-making, and show how affective mechanisms are involved in valuation.[10]

In a study by Knutson et al.,[11] subjects first saw a product, and then decided whether or not they would buy it at a certain price. While desirable products caused activation in the nucleus accumbens, activity was detected in the insula when the price was seen as exaggerated. If the price was perceived as acceptable, insular activation was lower, and mesial prefrontal structures were more solicited. Activation in these areas was a reliable predictor of whether or not subjects would buy the product: prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase. Thus purchasing decisions involve a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of paying (elicited in the insula). A chocolate box, one of the stimuli presented to the subjects, is located in the high-magnitude, positive-valence region of the value space, while the same chocolate box priced at $80 is located in the high-magnitude, negative-valence region of the space.

In the ultimatum game, a 'proposer' makes an offer to a 'responder', who can either accept or reject it. The offer is a split of an amount of money. If the responder accepts, she keeps the offered amount while the proposer keeps the difference; if she rejects it, both players get nothing. Orthodox game theory recommends that proposers offer the smallest possible amount and that responders accept every proposition, but studies consistently show that subjects make fair offers (about 40% of the amount) and reject unfair ones (less than 20%).[12] Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder's brain, a 'moral disgust': the anterior insula is more active when unfair offers are proposed,[13] and insular activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers.[14] Moreover, unfair offers are associated with greater skin conductance.[15] Visceral and insular responses occur only when the proposer is a human: a computer does not elicit these reactions. Besides the anterior insula, two other areas are recruited in ultimatum decisions: the dorsolateral prefrontal cortex (DLPFC) and the anterior cingulate cortex. When there is more activity in the anterior insula than in the DLPFC, unfair offers tend to be rejected, while they tend to be accepted when DLPFC activation is greater than that of the anterior insula.
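The insula/DLPFC competition described here can be caricatured as a comparison of two signals. This is only a sketch of the qualitative pattern reported by Sanfey et al.; the linear form and the weights are invented:

```python
# Toy model of the responder's decision in the ultimatum game: an
# unfairness ("insula") signal competes with a self-interest ("DLPFC")
# signal, and the larger one wins. Weights are hypothetical.

def responder_accepts(offer, total, insula_weight=2.0, dlpfc_weight=1.0):
    share = offer / total
    unfairness = max(0.0, 0.5 - share)        # zero for a 50/50 split
    insula_signal = insula_weight * unfairness  # scales with unfairness
    dlpfc_signal = dlpfc_weight * share         # scales with monetary gain
    return dlpfc_signal >= insula_signal
```

With these weights, a 40% offer is accepted and a 20% offer is rejected, matching the behavioral pattern described above; note that a money-maximizing responder would instead accept any positive offer.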

These two experiments illustrate how neuroeconomics is beginning to decipher value spaces and how valuation relies on affective mechanisms. Although human valuation is more complex than a simple valence-magnitude space, this 'neuro-utilitarist' framework is useful for interpreting imaging and behavioral data: for instance, we need an explanation for insular activation in purchasing and ultimatum decisions, and the simplest and most informative one, as of today, is that such stimuli trigger a simulated disgust. More generally, the framework also reveals that the human value space is profoundly social: humans value fairness and reciprocity. Cooperation[16] and altruistic punishment[17] (punishing cheaters at a personal cost even when the probability of future interaction is null), for instance, activate the nucleus accumbens and other pleasure-related areas. People like to cooperate and to make fair offers.

Neuroeconomic experiments also indicate how value spaces can be similar across species. It is known, for instance, that in humans losses[18] elicit activity in fear-related areas such as the amygdala. Since capuchin monkeys' behavior also exhibits loss aversion[19] (i.e., a greater sensitivity to losses than to equivalent gains), behavioral evidence and neural data suggest that the implementation of loss aversion in primates relies on common valuation mechanisms and processing. The primate value space, and maybe the mammalian or even the vertebrate one, locates losses in a particular region.
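Loss aversion, the asymmetry just mentioned, is standardly captured by a value function that is steeper for losses than for gains. Here is a sketch using the familiar Tversky-Kahneman functional form; the parameter values 0.88 and 2.25 are the ones commonly cited from their 1992 estimates, used here only as an illustration:

```python
# Prospect-theory-style value function: losses loom larger than gains.

def subjective_value(x, alpha=0.88, lam=2.25):
    """Concave for gains, convex and steeper (by a factor lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)
```

With these parameters, a $10 loss 'hurts' 2.25 times as much as a $10 gain 'pleases': the signature of the asymmetry that the behavioral data reveal in both humans and capuchins.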

Notes

  • [1] (Bechara & Damasio, 2005; Bechara et al., 1997; Damasio, 1994, 2003; LeDoux, 1996; Naqvi et al., 2006; Panksepp, 1998)
  • [2] (Faucher & Tappolet, 2002; Griffiths, 2004; Russell, 2003)
  • [3] (Bechara & Damasio, 2005; Damasio, 1994; Damasio et al., 1996)
  • [4] (Griffiths, 1997)
  • [5] (Darwin, 1896)
  • [6] (Cosmides & Tooby, 2000; Paul Ekman, 1972; Griffiths, 1997)
  • [7] (Damasio, 1994)
  • [8] (Houde & Tzourio-Mazoyer, 2003)
  • [9] (Berridge, 2003; P. Ekman, 1999; Griffiths, 1997; Russell, 2003; Zajonc, 1980)
  • [10] The material for this part is partly drawn from (Hardy-Vallée, forthcoming)
  • [11] (Knutson et al., 2007).
  • [12] (Oosterbeek et al., 2004)
  • [13] (Sanfey et al., 2003)
  • [14] (Sanfey et al., 2003: 1756)
  • [15] (van 't Wout et al., 2006)
  • [16] (Rilling et al., 2002).
  • [17] (de Quervain et al., 2004).
  • [18] (Naqvi et al., 2006)
  • [19] (Chen et al., 2006)

References

  • Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural theory of economic decision. Games and Economic Behavior, 52(2), 336.
  • Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding Advantageously Before Knowing the Advantageous Strategy. Science, 275(5304), 1293-1295.
  • Berridge, K. C. (2003). Pleasures of the brain. Brain and Cognition, 52(1), 106.
  • Chen, M. K., Lakshminarayanan, V., & Santos, L. (2006). How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior. Journal of Political Economy, 114(3), 517-537.
  • Cosmides, L., & Tooby, J. (2000). Evolutionary psychology and the emotions. Handbook of Emotions, 2, 91-115.
  • Damasio, A. R. (1994). Descartes' error : emotion, reason, and the human brain. New York: Putnam.
  • Damasio, A. R. (2003). Looking for Spinoza : joy, sorrow, and the feeling brain (1st ed.). Orlando, Fla. ; London: Harcourt.
  • Damasio, A. R., Damasio, H., & Christen, Y. (1996). Neurobiology of decision-making. Berlin ; New York: Springer.
  • Darwin, C. (1896). The expression of the emotions in man and animals (Authorized ed.). New York: D. Appleton.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., et al. (2004). The neural basis of altruistic punishment. Science, 305(5688), 1254-1258.
  • Ekman, P. (1972). Emotion in the human face: guide-lines for research and an integration of findings. New York: Pergamon Press.
  • Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion (pp. 45-60). Sussex: John Wiley & Sons, Ltd.
  • Faucher, L., & Tappolet, C. (2002). Fear and the focus of attention. Consciousness & emotion, 3(2), 105-144.
  • Griffiths, P. E. (1997). What emotions really are : the problem of psychological categories. Chicago, Ill.: University of Chicago Press.
  • Griffiths, P. E. (2004). Emotions as Natural and Normative Kinds. Philosophy of Science, 71, 901–911.
  • Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural foundations of logical and mathematical cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural predictors of purchases. Neuron, 53(1), 147-156.
  • LeDoux, J. E. (1996). The emotional brain : the mysterious underpinnings of emotional life. New York: Simon & Schuster.
  • Naqvi, N., Shiv, B., & Bechara, A. (2006). The Role of Emotion in Decision Making: A Cognitive Neuroscience Perspective. Current Directions in Psychological Science, 15(5), 260-264.
  • Oosterbeek, H., Sloof, R., & van de Kuilen, G. (2004). Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics, 7, 171-188.
  • Panksepp, J. (1998). Affective neuroscience : the foundations of human and animal emotions. New York: Oxford University Press.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychol Rev, 110(1), 145-172.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755-1758.
  • van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective state and decision-making in the Ultimatum Game. Exp Brain Res, 169(4), 564-568.
  • Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2), 151-175.