Natural Rationality | decision-making in the economy of nature
Showing posts with label value. Show all posts

11/20/07

A Tentative Definition of Morality

Usually, we recognize moral agents as social beings who tend to engage in behavior whose consequences may benefit others and who comply with rules that promote such consequences. Rescuing a drowning child counts as a morally good act, while firing a female employee because of her pregnancy counts as a morally wrong act. An act is right or wrong not just because it has certain consequences, but also because it is coherent or incoherent with certain norms of goodness and wrongness that moral agents are expected to follow. For instance, we consider rape immoral not only because of its harmful consequences, but also because it profoundly violates the autonomy and personal rights of an individual and the “general moral prohibition against using other persons against their wills” (Goldman, 1977, p. 281). This normativity can be produced either by intuitive reactions or by deliberate judgments. We may have visceral feelings about the goodness or wrongness of certain acts even if we cannot tell why we find them good or bad (a phenomenon known as “moral dumbfounding”; Haidt & Hersh, 2001). For other, more complex and less consensual questions (e.g. euthanasia), we may have to rely on principles, doctrines, beliefs, codes or virtues. In every case, morality is a set of dispositions to make normatively assessable decisions and judgments (either intuitively or intellectually) about appropriate social behavior and values. This appropriateness, as Haidt showed, usually revolves around five important moral themes: harm and care, fairness and reciprocity, loyalty, authority and respect, purity and sanctity, each of which determines virtues and vices: kindness/cruelty, honesty/dishonesty, self-sacrifice/treason, obedience/disobedience, temperance/intemperance (Haidt, 2007).

Morality thus encompasses most, if not all, appropriateness standards. All human societies tend to approve of certain behaviors and to promote moral codes through cultural, religious or legal means. For instance, most Westerners do not see anything wrong with a widow eating fish; in certain places in India, however, this is considered an immoral act (Shweder et al., 1997). A universal feature of human cognition is thus the moral attitude: “people expect others to act in certain ways and not in others, and they care about whether or not others are following these norms” (Haidt, in press). Thus, although the content of norms and the scope of the moral domain are, to a certain extent, culturally variable, the deontic attitudes of people around the world are a constant. So instead of a crisp definition of morality, maybe we need another kind of conceptual representation.

To represent the moral domain, I suggest that we imagine a two-dimensional space: a deontic dimension and a social dimension. The first represents the different deontic attitudes we can have about conduct: forbidden, permitted, obligatory; the second represents the number of agents that are the objects of the moral judgment, i.e., 1 (personal), 2 or more (interpersonal) or a great number (collective). Any moral judgment is a statement about the deontic status (ordinate) of an action (abscissa). But, you might reply, isn't a statement like “you should not cross if the light is red” deontic and social without being really moral? Well, I would say that it is moral. The whole society is, as Durkheim said, “une oeuvre morale” (a moral work). We interact with each other from the “moral stance”; whether it is crossing the street or killing someone, each act has a moral (deontic x social) status.
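This two-dimensional space can be made concrete with a small sketch (my illustration, not part of the post; the names and numeric encodings are hypothetical):

```python
# Hypothetical sketch of the proposed moral space: a deontic dimension
# (ordinate) and a social dimension (abscissa). Encodings are illustrative.

DEONTIC = {"forbidden": -1, "permitted": 0, "obligatory": 1}   # ordinate
SOCIAL = {"personal": 1, "interpersonal": 2, "collective": 3}  # abscissa

def moral_status(deontic: str, social: str) -> tuple:
    """Locate an act as a point (abscissa, ordinate) in the moral space."""
    return (SOCIAL[social], DEONTIC[deontic])

# Crossing on a red light and killing someone land at the same coordinates
# in this coarse space: both forbidden, both interpersonal.
assert moral_status("forbidden", "interpersonal") == (2, -1)
assert moral_status("obligatory", "personal") == (1, 1)
```

The point of the sketch is only that every act, mundane or grave, gets a coordinate in the deontic x social space.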



However, it is clear that no definition of morality will be sufficient. As Nado et al. (to appear) observe, no attempt to define morality, or even to state what such a definition would look like (such as the essays in Wallace & Walker, 1970), has ever reached consensus. So I don't expect a consensus here, but I hope this can be a useful approach.

References.
  • Goldman, A. H. (1977). Plain Sex. Philosophy and Public Affairs, 6(3), 267-287.
  • Haidt, J. (2007). The New Synthesis in Moral Psychology. Science, 316(5827), 998-1002.
  • Haidt, J., & Hersh, M. A. (2001). Sexual Morality: The Cultures and Emotions of Conservatives and Liberals. Journal of Applied Social Psychology, 31(1), 191-221.
  • Haidt, J., & Joseph, C. (in press). The Moral Mind: How 5 Sets of Innate Moral Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules. In P. Carruthers, S. Laurence & S. Stich (Eds.), The Innate Mind, Vol. 3.
  • Nado, J., Kelly, D., & Stich, S. (to appear). Moral Judgment. In J. Symons & P. Calvo (Eds.), Routledge Companion to the Philosophy of Psychology.
  • Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The "Big Three" of Morality (Autonomy, Community, Divinity) and the "Big Three" Explanations of Suffering. In A. Brandt & P. Rozin (Eds.), Morality and Health (pp. 119-169). New York: Routledge.
  • Wallace, G., & Walker, A. D. M. (1970). The Definition of Morality. London: Methuen.



10/10/07

Fairness and Schizophrenia in the Ultimatum

For the first time, a study looks at the behavior of schizophrenic patients in the Ultimatum Game. Earlier studies of choice behavior in schizophrenia showed that patients have difficulty with decisions under ambiguity and uncertainty (Lee et al., 2007), show a slight preference for immediate over long-term rewards (Heerey et al., 2007), exhibit "strategic stiffness" (sticking to a strategy in sequential decision-making without integrating the outcomes of past choices; Kim et al., 2007), and perform worse in the Iowa Gambling Task (Sevy et al., 2007).

A research team from Israel ran an Ultimatum experiment with schizophrenic subjects (plus two control groups, one depressive, one non-clinical). Subjects had to split 20 New Israeli Shekels (NIS) (about 5 US$). Although the patients' Responder behavior did not differ from that of the control groups, their Proposer behavior did: they tended to be less strategic.

With respect to offer level, offers fall into three categories: fair (10 NIS), unfair (less than 10 NIS), and hyper-fair (more than 10 NIS). Schizophrenic patients tended to make fewer 'unfair' offers and more 'hyper-fair' offers. Men were more generous than women.

According to the authors,

for schizophrenic Proposers, the possibility of dividing the money evenly was as reasonable as for healthy Proposers, whereas the option of being hyper-fair appears to be as reasonable as being unfair, in contrast to the pattern for healthy Proposers.
Agay et al. also studied the distribution of Proposer types according to their pattern of sequential decisions (how their second offer compared to their first). They identified three types:
  1. "‘Strong-strategic’ Proposers are those who adjusted their 2nd offer according to the response to their 1st offer, that is, raised their 2nd offer after their 1st one was rejected, or lowered their 2nd offer after their 1st offer was accepted.
  2. ‘Weak-strategic’ Proposers are those who perseverated, that is, their 2nd offer was the same as their 1st offer.
  3. Finally, ‘non-strategic’ Proposers are those who unreasonably reduced their offer after a rejection, or raised their offer after an acceptance."
In the schizophrenic group, 20% of Proposers were non-strategic, while none of the healthy subjects were.
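The three Proposer types can be expressed as a short decision rule (a sketch under my own naming assumptions, not code from the study):

```python
# Illustrative classifier for the Agay et al. Proposer typology.
# Inputs: the 1st offer, whether it was accepted, and the 2nd offer.

def classify_proposer(offer1: int, accepted1: bool, offer2: int) -> str:
    if offer2 == offer1:
        return "weak-strategic"        # perseveration: same offer again
    raised = offer2 > offer1
    if (not accepted1 and raised) or (accepted1 and not raised):
        return "strong-strategic"      # adjusted in the sensible direction
    return "non-strategic"             # lowered after rejection,
                                       # or raised after acceptance

assert classify_proposer(8, False, 10) == "strong-strategic"
assert classify_proposer(10, True, 10) == "weak-strategic"
assert classify_proposer(8, False, 6) == "non-strategic"
```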


Figure: the highest proportion of non-strategic Proposers is in the schizophrenic group.
The authors do not offer much explanation for these results:

In the present framework, schizophrenic patients seemed to deal with the cognition-emotion conflict described in the fMRI study of Sanfey et al. (2003) [NOTE: the authors of the first neuroeconomics Ultimatum study] in a manner similar to that of healthy controls. However, it is important to note that the low proportion of rejections throughout the whole experiment makes this conclusion questionable.
Another study, however, shows that "siblings of patients with schizophrenia rejected unfair offers more often compared to control participants" (van ’t Wout et al., 2006, chap. 12), suggesting that Responder behavior might be different after all in people with a genetic liability to schizophrenia. Yet another unresolved issue!


References
  • Agay, N., Kron, S., Carmel, Z., Mendlovic, S., & Levkovitz, Y. Ultimatum bargaining behavior of people affected by schizophrenia. Psychiatry Research, In Press, Corrected Proof.
  • Hamann, J., Cohen, R., Leucht, S., Busch, R., & Kissling, W. (2007). Shared decision making and long-term outcome in schizophrenia treatment. The Journal of clinical psychiatry, 68(7), 992-7.
  • Heerey, E. A., Robinson, B. M., McMahon, R. P., & Gold, J. M. (2007). Delay discounting in schizophrenia. Cognitive neuropsychiatry, 12(3), 213-21.
  • Kim, H., Lee, D., Shin, Y., & Chey, J. (2007). Impaired strategic decision making in schizophrenia. Brain Res.
  • Lee, Y., Kim, Y., Seo, E., Park, O., Jeong, S., Kim, S. H., et al. (2007). Dissociation of emotional decision-making from cognitive decision-making in chronic schizophrenia. Psychiatry research, 152(2-3), 113-20.
  • van ’t Wout, M., Akdeniz, A., Kahn, R. S., & Aleman, A. (2006). Vulnerability for schizophrenia and goal-directed behavior: the Ultimatum Game in relatives of patients with schizophrenia. (Manuscript.) In The nature of emotional abnormalities in schizophrenia: Evidence from patients and high-risk individuals. Proefschrift, Universiteit Utrecht.
  • McKay, R., Langdon, R., & Coltheart, M. (2007). Jumping to delusions? Paranoia, probabilistic reasoning, and need for closure. Cognitive neuropsychiatry, 12(4), 362-76.
  • Sevy, S., Burdick, K. E., Visweswaraiah, H., Abdelmessih, S., Lukin, M., Yechiam, E., et al. (2007). Iowa Gambling Task in schizophrenia: A review and new data in patients with schizophrenia and co-occurring cannabis use disorders. Schizophrenia Research, 92(1-3), 74-84.



10/9/07

Recent neuroeconomics/neuroethics studies

First, in Cerebral Cortex, a lesion study suggests that the ventromedial prefrontal cortex (VMF) is involved both in decisions under uncertainty and in those that are not; moreover, "Subjects with VMF damage were significantly more inconsistent in their preferences than controls, whereas those with frontal damage that spared the VMF performed normally". In:

Fellows, L. K., & Farah, M. J. (2007). The Role of Ventromedial Prefrontal Cortex in Decision Making: Judgment under Uncertainty or Judgment Per Se? Cerebral Cortex, 17(11), 2669-2674.

Second, a study on norm compliance: subjects are fairer in Dictator games when third-party punishment is possible; Spitzer et al. identified the brain areas involved in this norm-compliant behavior (the lateral orbitofrontal cortex--correlated with Machiavellian personality characteristics--and right dorsolateral prefrontal cortex). In:

The Neural Signature of Social Norm Compliance, Manfred Spitzer, Urs Fischbacher, Bärbel Herrnberger, Georg Grön and Ernst Fehr, Neuron, Volume 56, Issue 1, 4 October 2007, Pages 185-196.

See a review in ScienceNOW, and a mini-review on norm violation with a neuro-computational twist:

To Detect and Correct: Norm Violations and Their Enforcement, P. Read Montague and Terry Lohrenz, Neuron, Volume 56, Issue 1, 4 October 2007, Pages 14-18.


Related to that, a suggested reading:

Sripada & Stich analyse the structure of norms in social contexts and how they relate to cognitive processing.



10/4/07

Social Neuroeconomics: A Review by Fehr and Camerer

Ernst Fehr and Colin Camerer, two prominent experimental/behavioral/neuro-economists, have published a new paper in Trends in Cognitive Sciences on social neuroeconomics. Discussing many studies (the paper is a state-of-the-art review), they conclude that

social reward activates circuitry that overlaps, to a surprising degree, with circuitry that anticipates and represents other types of rewards. These studies reinforce the idea that social preferences for donating money, rejecting unfair offers, trusting others and punishing those who violate norms, are genuine expressions of preference

The authors illustrate this overlap with the following picture: social and non-social rewards elicit similar neural activation (see references for all cited studies at the end of this post):



Figure 1. (from Fehr and Camerer, forthcoming). Parallelism of rewards for oneself and for others: Brain areas commonly activated in (a) nine studies of social reward (..), and (b) a sample of six studies of learning and anticipated own monetary reward (..).

So basically, we have enough evidence to justify a model of rational agents as entertaining social preferences. As I argue in a forthcoming paper (let me know if you want a copy), these findings will have normative impact, especially in game-theoretic situations: if a rational agent anticipates other agents' strategies, she had better anticipate that they have social preferences. For instance, one might argue that in the Ultimatum Game it is rational to make a fair offer.



Reference:
  • Fehr, E. and Camerer, C.F., Social neuroeconomics: the neural circuitry of social preferences, Trends Cogn. Sci. (2007), doi:10.1016/j.tics.2007.09.002


Studies of social reward cited in Fig. 1:

  • [26] J. Rilling et al., A neural basis for social cooperation, Neuron 35 (2002), pp. 395–405.
  • [27] J.K. Rilling et al., Opposing BOLD responses to reciprocated and unreciprocated altruism in putative reward pathways, Neuroreport 15 (2004), pp. 2539–2543.
  • [28] D.J. de Quervain et al., The neural basis of altruistic punishment, Science 305 (2004), pp. 1254–1258.
  • [29] T. Singer et al., Empathic neural responses are modulated by the perceived fairness of others, Nature 439 (2006), pp. 466–469
  • [30] J. Moll et al., Human fronto-mesolimbic networks guide decisions about charitable donation, Proc. Natl. Acad. Sci. U. S. A. 103 (2006), pp. 15623–15628.
  • [31] W.T. Harbaugh et al., Neural responses to taxation and voluntary giving reveal motives for charitable donations, Science 316 (2007), pp. 1622–1625.
  • [32] Tabibnia, G. et al. The sunny side of fairness – preference for fairness activates reward circuitry. Psychol. Sci. (in press).
  • [55] T. Singer et al., Brain responses to the acquired moral status of faces, Neuron 41 (2004), pp. 653–662.
  • [56] B. King-Casas et al., Getting to know you: reputation and trust in a two-person economic exchange, Science 308 (2005), pp. 78–83.

Studies of learning and anticipated own monetary reward cited in Fig. 1:

  • [33] S.M. Tom et al., The neural basis of loss aversion in decision-making under risk, Science 315 (2007), pp. 515–518.
  • [61] M. Bhatt and C.F. Camerer, Self-referential thinking and equilibrium as states of mind in games: fMRI evidence, Games Econ. Behav. 52 (2005), pp. 424–459.
  • [73] P.K. Preuschoff et al., Neural differentiation of expected reward and risk in human subcortical structures, Neuron 51 (2006), pp. 381–390.
  • [74] J. O’Doherty et al., Dissociable roles of ventral and dorsal striatum in instrumental conditioning, Science 304 (2004), pp. 452–454.
  • [75] E.M. Tricomi et al., Modulation of caudate activity by action contingency, Neuron 41 (2004), pp. 281–292.



9/25/07

My brain has a politics of its own: neuropolitic musing on values and signal detection

Political psychology (just like politicians and voters) identifies two species of political values: left/right, or liberalism/conservatism. Reviewing many studies, Thornhill & Fincher (2007) summarize the cognitive styles of the two ideologies:

Liberals tend to be: against, skeptical of, or cynical about familiar and traditional ideology; open to new experiences; individualistic and uncompromising, pursuing a place in the world on personal terms; private; disobedient, even rebellious rulebreakers; sensation seekers and pleasure seekers, including in the frequency and diversity of sexual experiences; socially and economically egalitarian; and risk prone; furthermore, they value diversity, imagination, intellectualism, logic, and scientific progress. Conservatives exhibit the reverse in all these domains. Moreover, the felt need for order, structure, closure, family and national security, salvation, sexual restraint, and self-control, in general, as well as the effort devoted to avoidance of change, novelty, unpredictability, ambiguity, and complexity, is a well-established characteristic of conservatives. (Thornhill & Fincher, 2007).
In their paper, Thornhill & Fincher present an evolutionary hypothesis to explain the liberalism/conservatism ideologies: both originate in an innate adaptation for attachment, parametrized by early childhood experiences. In another but related domain, Lakoff (2002) argued that liberals and conservatives differ in their metaphors: both view the nation or the State as a child, but they hold different perspectives on how to raise it: the Strict Father model (conservatives) or the Nurturant Parent model (liberals) (see an extensive description here). The first one

posits a traditional nuclear family, with the father having primary responsibility for supporting and protecting the family as well as the authority to set overall policy, to set strict rules for the behavior of children, and to enforce the rules [where] [s]elf-discipline, self-reliance, and respect for legitimate authority are the crucial things that children must learn.


while in the second:

Love, empathy, and nurturance are primary, and children become responsible, self-disciplined and self-reliant through being cared for, respected, and caring for others, both in their family and in their community.
In the October issue of Nature Neuroscience, a new research paper by Amodio et al. studies the "neurocognitive correlates of liberalism and conservatism". The study is more modest than the title suggests. Subjects were submitted to the same test, a Go/No Go task (click when you see a "W", don't click when it's an "M"). The experimenters habituated the subjects to the Go stimulus; on a few occasions, they were presented with the No Go stimulus. Since subjects had gotten used to the Go stimulus, the presentation of a No Go creates a cognitive conflict: balancing fast/automatic against slow/deliberative processing. You have to inhibit a habit in order to focus on the goal when the habit goes in the wrong direction. The idea was to study the correlation between political values and conflict monitoring. The latter is partly mediated by the anterior cingulate cortex, a brain area widely studied in neuroeconomics and decision neuroscience (see this post). EEG recordings indicated that liberals' neural responses to conflict were stronger when response inhibition was required. Hence liberalism is associated with a greater sensitivity to response conflict, while conservatism is associated with a greater persistence in the habitual pattern. These results, say the authors, are

consistent with the view that political orientation, in part, reflects individual differences in the functioning of a general mechanism related to cognitive control and self-regulation
Thus valuing tradition vs. novelty, or security vs. novelty, might have sensorimotor counterparts, or symptoms. Of course, this does not mean that the neural basis of conservatism has been identified, or that a "liberal area" has been found; rather, the study suggests how micro-tasks may help to elucidate, as the authors say in their closing sentence, "how abstract, seemingly ineffable constructs, such as ideology, are reflected in the human brain."

What this study--together with other data on conservatives and liberals--might justify is the following hypothesis: what if conservatives and liberals are natural kinds? That is, "homeostatic property clusters" (see Boyd 1991, 1999): categories of "things" formed by nature (like water, mammals, etc.), not by definition (like supralunar objects, non-cats, grue emeralds, etc.); things that share surface properties (political beliefs and behavior) whose co-occurrence can be explained by underlying mechanisms (neural processing of conflict monitoring)? Maybe our evolution as social animals required the interplay of tradition-oriented and novelty-oriented individuals, of risk-prone and risk-averse agents. But why did evolution not select one type over the other in the first place? Here is another completely armchair hypothesis: in order to distribute the signal detection problem across the social body.

Which kind of error would you rather make: a false positive (you identify a signal but it's only noise) or a false negative (you think it's noise but it's a signal)? A miss or a false alarm? That is the kind of problem modeled by signal detection theory (SDT): since there is always some noise while you try to detect a signal, you cannot know in advance, under radical uncertainty, which policy you should stick to (risk-averse or risk-prone). "Signal" and "noise" are generic information-theoretic terms that can apply to any situation where an agent tries to determine whether a stimulus is present:




It is rather ironic that signal detection theorists employ the terms liberal* and conservative* (the "*" means that I am talking of SDT, not politics) to refer to different biases or criteria in signal detection. A liberal* bias is more likely to set off a positive response (increasing the probability of false positives), whereas a conservative* bias is more likely to set off a negative response (increasing the probability of false negatives). The big problem in life is that in certain domains conservatism* pays, while in others it is liberalism* that does (see Proust 2006): when identifying danger, a false negative is more expensive (better safe than sorry), whereas when looking for food a false positive can be more expensive (better satiated than exhausted). So a fixed criterion is not adaptive; but how do we adjust the criterion properly? If you are an individual agent, you must alternate between liberal* and conservative* criteria based on your knowledge. But if you are part of a group, liberal* and conservative* biases may be distributed: certain individuals might be more liberal* (let's send them to keep watch) and others more conservative* (let's send them foraging). Collectively, this could be a good solution (if it is enforced by norms of cooperation) to perpetual uncertainty and danger. So if our species evolved with a distribution of signal detection criteria, then we should have evolved different cognitive styles and personality traits that deal differently with uncertainty: those who favor habits, traditions and security, and the others. If liberal* and conservative* criteria are applied to other domains, such as the family (an institution that existed before the State), you may end up with the Strict Father model and the Nurturant Parent model; when these models are applied to political decision-making, you may end up with liberals/conservatives (no "*"). That would give a new meaning to the idea that we are, by nature, political animals.
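The liberal*/conservative* tradeoff can be sketched numerically with the standard equal-variance Gaussian model of SDT (the parameters below are illustrative, not from any cited study):

```python
# Equal-variance Gaussian SDT sketch: moving the decision criterion trades
# false positives (false alarms) against false negatives (misses).
from statistics import NormalDist

noise = NormalDist(mu=0.0, sigma=1.0)    # distribution when only noise is present
signal = NormalDist(mu=1.0, sigma=1.0)   # distribution when the signal is present (d' = 1)

def rates(criterion: float):
    """Return (hit, false-alarm, miss) probabilities for a given criterion."""
    hits = 1 - signal.cdf(criterion)         # say "signal" when it is there
    false_alarms = 1 - noise.cdf(criterion)  # say "signal" on noise alone
    return hits, false_alarms, 1 - hits

liberal_star = rates(0.0)        # liberal*: low criterion, answers "yes" easily
conservative_star = rates(1.0)   # conservative*: high criterion, answers "yes" rarely

assert liberal_star[1] > conservative_star[1]   # liberal* -> more false positives
assert conservative_star[2] > liberal_star[2]   # conservative* -> more false negatives
```

No single criterion wins in both comparisons, which is the point of the distribution-across-the-group hypothesis.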






9/7/07

Why we need a neuroeconomic account of valuation

In my last post, I outlined an account of valuation. Whether mine is a good one is disputable, but the fact is that research in neuroeconomics, philosophy of mind, psychology, or any other field concerned with decision-making will sooner or later require a credible account of valuation, and of value. Here are two reasons why.

First, there is a significant overlap between brain areas and processes across different valuation domains. While neuroeconomics, neuroethics and neuropolitics have begun to make explicit the neural mechanisms involved in these domains (Glimcher, 2003; Tancredi, 2005; Westen, 2007), attempts to cross-fertilize research are scarce. Research shows that economic, moral and political cognition involve similar brain processes. For instance, whether subjects play economic games (Rilling et al., 2002; Sanfey et al., 2003), reflect upon moral issues (Greene & Haidt, 2002; Koenigs et al., 2007) or make political judgments (Kaplan et al., 2007; Knutson et al., 2006; Westen et al., 2006), these tasks principally recruit the following evaluative mechanisms: the core affect, monitoring and control mechanisms described in my last post.

Second, even within one area--neuroeconomics--it is not clear what researchers mean when they talk about the "neural substrate of economic value". Take for instance two recent studies: Seo et al. (2007) and Padoa-Schioppa (2007) both attempt to identify the brain's valuation processes. Padoa-Schioppa concludes that

A rich literature from lesion studies, functional imaging, and primate neurophysiology suggests that critical mechanisms for economic choice might take place in the orbitofrontal cortex. More specifically, recent results from single cell recordings in monkeys link OFC [Orbitofrontal Cortex] to the computation of economic value. We showed that the value representation in OFC reflects the subjective nature of economic value, and that neurons in this area encode value per se, independently of the visuo-motor contingencies of choice

Seo et al., for their part, discuss how the DLPFC contributes to decision-making:

individual neurons in the dorsolateral prefrontal cortex (DLPFC) encoded 3 different types of signals that can potentially influence the animal's future choices. First, activity modulated by the animal's previous choices might provide the eligibility trace that can be used to attribute a particular outcome to its causative action. Second, activity related to the animal's rewards in the previous trials might be used to compute an average reward rate. Finally, activity of some neurons was modulated by the computer's choices in the previous trials and may reflect the process of updating the value functions.
So how is something valuated, in the DLPFC or the OFC? How exactly do they differ? Yes, one is about "reward" and the other about "economic value", but since money can be rewarding and food can have an economic value (utility), it is not clear that the different words refer to different processes. On top of that, there is also a huge literature on dopaminergic systems and valuation (see Montague et al., 2006, and Montague, 2006, for a complete review). So some clarification is required here. I will try, in future posts, to discuss these questions.


  • Glimcher, P. W. (2003). Decisions, uncertainty, and the brain : the science of neuroeconomics. Cambridge, Mass. ; London: MIT Press.
  • Greene, J. D., & Haidt, J. (2002). How (and where) does moral judgment work? Trends Cogn Sci, 6(12), 517-523.
  • Kaplan, J. T., Freedman, J., & Iacoboni, M. (2007). Us versus them: Political attitudes and party affiliation influence neural response to faces of presidential candidates. Neuropsychologia, 45(1), 55-64.
  • Knutson, K. M., Wood, J. N., Spampinato, M. V., & Grafman, J. (2006). Politics on the Brain: An fMRI Investigation. Soc Neurosci, 1(1), 25-40.
  • Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., et al. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446(7138), 908-911.
  • Montague, P. R., King-Casas, B., & Cohen, J. D. (2006). Imaging valuation models in human choice. Annu Rev Neurosci, 29, 417-448.
  • Montague, R. (2006). Why choose this book? : how we make decisions. New York: Penguin Group.
  • Padoa-Schioppa, C. (2007). Orbitofrontal Cortex and the Computation of Economic Value. Ann NY Acad Sci, annals.1401.1011.
  • Rilling, J., Gutman, D., Zeh, T., Pagnoni, G., Berns, G., & Kilts, C. (2002). A neural basis for social cooperation. Neuron, 35(2), 395-405.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755-1758.
  • Seo, H., Barraclough, D. J., & Lee, D. (2007). Dynamic Signals Related to Choices and Outcomes in the Dorsolateral Prefrontal Cortex. Cereb. Cortex, 17(suppl_1), i110-117
  • Tancredi, L. R. (2005). Hardwired behavior : what neuroscience reveals about morality. New York: Cambridge University Press.
  • Westen, D. (2007). The political brain : the role of emotion in deciding the fate of the nation. New York: PublicAffairs.
  • Westen, D., Blagov, P. S., Harenski, K., Kilts, C., & Hamann, S. (2006). Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election. J. Cogn. Neurosci., 18(11), 1947-1958.



A neuroeconomic picture of valuation



Values are everywhere: ethics, politics, economics and law, for instance, all deal with values. They all involve socially situated decision-makers compelled to make evaluative judgments and act upon them. These spheres of human activity constitute different aspects of the same problem, that is, assessing how 'good' something is. This means that value 'runs' on valuation mechanisms. I suggest here a framework for understanding valuation.

I define valuation as the process by which a system maps an object, property or event X to a value space, and a valuation mechanism as the device implementing the mapping between X and the value space. I do not suggest that values need to be explicitly represented as a space: by value space, I mean an artifact that accounts for the similarity between values by plotting each of them as a point in a multidimensional coordinate system. Color spaces, for instance, are not conscious representations of colors, but spatial depictions of color similarity along several dimensions such as hue, saturation and brightness.

The simplest value space, and the most common across the phylogenetic spectrum, has two dimensions: valence (positive or negative) and magnitude. Valence distinguishes between things that are liked and things that are not. Thus if X has a negative valence, it does not imply that X will be avoided, but only that it is disliked. Magnitude encodes the degree of liking or disliking. Other dimensions might be added—temporality (whether X is located in the present, past or future), other- vs. self-regarding, excitatory vs. inhibitory, basic vs. complex, for instance—but the core of any value system is valence and magnitude, because these two parameters are required to establish rankings. To prefer Heaven to Hell, Democrats to Republicans, salad to meat, or sweet to bitter involves valence and magnitude.
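A minimal sketch of this two-dimensional value space (my own illustration; the encoding is an assumption, not the post's formalism):

```python
# Valence (+1 liked / -1 disliked) and magnitude are enough to rank options.
from dataclasses import dataclass

@dataclass
class Value:
    valence: int      # +1 liked, -1 disliked
    magnitude: float  # strength of the liking or disliking

    def score(self) -> float:
        return self.valence * self.magnitude

def prefer(a: Value, b: Value) -> Value:
    """A ranking needs both dimensions: direction and strength."""
    return a if a.score() >= b.score() else b

salad, bitter = Value(+1, 0.8), Value(-1, 0.4)
assert prefer(salad, bitter) is salad
```

Note that a negative valence only places an option lower in the ranking; it does not, by itself, trigger avoidance.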

Nature endowed many animals (mostly vertebrates) with fast and intuitive valuation mechanisms: emotions.[1] Although it is a truism in psychology and philosophy of mind that there is no crisp definition of what emotions are,[2] I will consider here that an emotion is any kind of neural process whose function is to attribute a valence and a magnitude to something else and whose operative mode is the somatic marker. Somatic markers[3] are bodily states that ‘mark’ options as advantageous/disadvantageous, such as skin conductance, cardiac rhythm, etc. Through learning, bodily states become linked to neural representations of the stimuli that brought about these states. These neural structures may later reactivate the bodily states, or a simulation of these states, and thereby indicate the valence and magnitude of stimuli. These states may or may not account for many legitimate uses of the word “emotions”, but they constitute meaningful categories that could identify natural kinds.[4] In order to avoid confusion between folk-psychological and scientific categories, I will speak of affects and affective states rather than emotions.

Far from being irrational passions, affective states are phylogenetically ancient valuation mechanisms. Since Darwin,[5] many biologists, philosophers and psychologists[6] have argued that they have adaptive functions such as focusing attention and facilitating communication. As Antonio Damasio and his colleagues discovered, subjects impaired in affective processing are unable to cope with everyday tasks, such as planning meetings.[7] They lose money, family and social status. Yet they remained completely functional in reasoning or problem-solving tasks. Moreover, they did not feel sad about their situation, even if they perfectly understood what “sad” means, and they seemed unable to learn from bad experiences. They were unable to use affect to aid decision-making, an interpretation that entails that in normal subjects affect does aid decision-making. These findings suggest that decision-making needs affect, not as a set of convenient heuristics, but as a central evaluative mechanism. Without affects, it is possible to think efficiently, but not to decide efficiently (affective areas, however, are solicited in subjects who learn to recognize logical errors[8]).

Affects, and especially the so-called ‘basic’ or ‘core’ ones such as anger, disgust, liking and fear,[9] are prominent explanatory concepts in neuroeconomics. The study of valuation mechanisms reveals how the brain values certain objects (e.g. money), situations (e.g. investment, bargaining) or parameters (e.g. risk, ambiguity) of an economic nature. Three kinds of mechanisms are typically involved in neuroeconomic explanations:

  1. Core affect mechanisms, such as fear (amygdala), disgust (anterior insula) and pleasure (nucleus accumbens), encode the magnitude and valence of stimuli.
  2. Monitoring and integration mechanisms (ventromedial/mesial prefrontal cortex, orbitofrontal cortex, anterior cingulate cortex) combine different values, and memories of values, together.
  3. Modulation and control mechanisms (prefrontal areas, especially the dorsolateral prefrontal cortex) modulate or even override the other affect mechanisms.

Of course, there is no simple mapping between psychological functions and neural structures, but cognitive and affective neuroscience assume a certain dominance and regularity of function. Disgust does not reduce to insular activation, but the anterior insula is significantly involved in the physiological, cognitive and behavioral expressions of disgust. There is some simplification here, owing to the current state of the science, but enough to do justice to our best theories of brain functioning. I will now review two cases of individual and strategic decision-making, and show how affective mechanisms are involved in valuation.[10]
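The three kinds of mechanisms can be caricatured as a pipeline, again purely for illustration (the weights and names are mine, and this is emphatically not a brain model): core affect signals are summed by an integrator, and a control parameter can down-weight them in favor of a deliberative goal.

```python
def decide(affect_signals, deliberative_value=0.0, regulation=1.0):
    """Toy three-stage decision.
    affect_signals: dict of signed core-affect values (stage 1).
    The sum plays the role of integration (stage 2).
    regulation scales affective input (1.0 = none, 0.0 = fully suppressed)
    and deliberative_value stands in for a non-affective goal (stage 3)."""
    net = regulation * sum(affect_signals.values()) + deliberative_value
    return "approach" if net > 0 else "avoid"

offer = {"pleasure_of_gain": 2.0, "disgust_at_unfairness": -3.0}
decide(offer)                                          # affect dominates: "avoid"
decide(offer, deliberative_value=0.5, regulation=0.0)  # control overrides: "approach"
```

The point of the sketch is only that valuation, integration and control are separable stages that can pull a decision in different directions.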

In a study by Knutson et al.,[11] subjects first saw a product (visually presented) and decided whether they wanted it, then saw its price and decided whether or not to buy it. Desirable products caused activation in the nucleus accumbens, while prices perceived as excessive activated the insula. When the price was perceived as acceptable, insular activation was lower and mesial prefrontal structures were more solicited. Activation in these areas reliably predicted whether subjects would buy the product: prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase. A purchasing decision thus involves a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of paying (elicited in the insula). A chocolate box (one of the stimuli presented to the subjects) is located in the high-magnitude, positive-valence region of the value space, while the same chocolate box priced at $80 is located in the high-magnitude, negative-valence region.
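The tradeoff just described can be written down as a one-line model. To be clear, this is a hedged sketch of the idea, not the paper's actual model; the values, weight and threshold are invented for illustration.

```python
def purchase_decision(product_value, price, price_sensitivity=1.0):
    """Toy purchase tradeoff: anticipated pleasure minus pain of paying."""
    pleasure = product_value                  # nucleus-accumbens-like signal
    pain = price_sensitivity * price          # insula-like signal
    net = pleasure - pain                     # prefrontal-like integration
    return "buy" if net > 0 else "pass"

purchase_decision(product_value=30, price=15)  # pleasure wins: "buy"
purchase_decision(product_value=30, price=80)  # the $80 chocolate box: "pass"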

In the ultimatum game, a ‘proposer’ makes an offer to a ‘responder’, who can either accept or reject it. The offer is a split of a sum of money. If the responder accepts, she keeps the offered amount and the proposer keeps the difference. If she rejects it, both players get nothing. Orthodox game theory recommends that proposers offer the smallest possible amount and that responders accept every offer, but studies consistently show that subjects make fair offers (about 40% of the amount) and reject unfair ones (less than 20%).[12] Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder’s brain, a kind of ‘moral disgust’: the anterior insula is more active when unfair offers are proposed,[13] and insular activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers.[14] Moreover, unfair offers are associated with greater skin conductance.[15] These visceral and insular responses occur only when the proposer is a human: a computer does not elicit them. Besides the anterior insula, another area is recruited in ultimatum decisions: the dorsolateral prefrontal cortex (DLPFC). When there is more activity in the anterior insula than in the DLPFC, unfair offers tend to be rejected; when DLPFC activation is greater, they tend to be accepted.
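The insula-versus-DLPFC competition lends itself to a toy model. The two "gains" below are invented for illustration, and the linear form is mine; the only claim inherited from the data is that rejection occurs when the unfairness signal outweighs the earn-money signal.

```python
def responder_accepts(offer, total, insula_gain=2.0, dlpfc_gain=1.0):
    """Toy responder: accept iff the monetary-goal signal beats moral disgust."""
    unfairness = max(0.0, 0.5 - offer / total)  # deviation from an even split
    insula = insula_gain * unfairness           # 'moral disgust' signal
    dlpfc = dlpfc_gain * (offer / total)        # goal of earning money
    return dlpfc >= insula

responder_accepts(offer=4, total=10)  # near-fair offer: accepted
responder_accepts(offer=1, total=10)  # unfair offer: rejected
```

With these (arbitrary) gains the model reproduces the stylized facts: even splits are always accepted, and offers below roughly 20-30% are rejected.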

These two experiments illustrate how neuroeconomics is beginning to decipher our value spaces and how valuation relies on affective mechanisms. Although human valuation is more complex than a simple valence-magnitude space, this ‘neuro-utilitarist’ framework is useful for interpreting imaging and behavioral data: for instance, we need an explanation for insular activation in purchasing and ultimatum decisions, and the simplest and most informative one, as of today, is that these situations trigger a simulated disgust. More generally, the framework also reveals that the human value space is profoundly social: humans value fairness and reciprocity. Cooperation[16] and altruistic punishment[17] (punishing cheaters at a personal cost even when the probability of future interaction is nil), for instance, activate the nucleus accumbens and other pleasure-related areas. People like to cooperate and to make fair offers.

Neuroeconomic experiments also indicate how value spaces can be similar across species. It is known, for instance, that in humans losses[18] elicit activity in fear-related areas such as the amygdala. Since capuchin monkeys’ behavior also exhibits loss aversion[19] (a greater sensitivity to losses than to equivalent gains), the behavioral and neural evidence together suggest that primates share common valuation mechanisms and processing for losses. The primate value space (and maybe the mammalian, or even the vertebrate, one) locates losses in a particular region.
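Loss aversion has a standard formal expression, the prospect-theory-style value function; the parameters below are common textbook values (not taken from the studies cited here), with losses weighted about twice as steeply as gains.

```python
def subjective_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory-style value function: concave for gains,
    convex and steeper (by the loss-aversion coefficient lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A $100 loss looms larger than a $100 gain:
abs(subjective_value(-100)) > subjective_value(100)  # True
```

A value space with this shape is exactly what both the human imaging data and the capuchin trading behavior would be expected to produce.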

Notes

  • [1] (Bechara & Damasio, 2005; Bechara et al., 1997; Damasio, 1994, 2003; LeDoux, 1996; Naqvi et al., 2006; Panksepp, 1998)
  • [2] (Faucher & Tappolet, 2002; Griffiths, 2004; Russell, 2003)
  • [3] (Bechara & Damasio, 2005; Damasio, 1994; Damasio et al., 1996)
  • [4] (Griffiths, 1997)
  • [5] (Darwin, 1896)
  • [6] (Cosmides & Tooby, 2000; Paul Ekman, 1972; Griffiths, 1997)
  • [7] (Damasio, 1994)
  • [8] (Houde & Tzourio-Mazoyer, 2003)
  • [9] (Berridge, 2003; P. Ekman, 1999; Griffiths, 1997; Russell, 2003; Zajonc, 1980)
  • [10] The material for this part is partly drawn from (Hardy-Vallée, forthcoming)
  • [11] (Knutson et al., 2007).
  • [12] (Oosterbeek et al., 2004)
  • [13] (Sanfey et al., 2003)
  • [14] (Sanfey et al., 2003: 1756)
  • [15] (van 't Wout et al., 2006)
  • [16] (Rilling et al., 2002).
  • [17] (de Quervain et al., 2004).
  • [18] (Naqvi et al., 2006)
  • [19] (Chen et al., 2006)

References

  • Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural theory of economic decision. Games and Economic Behavior, 52(2), 336.
  • Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding Advantageously Before Knowing the Advantageous Strategy. Science, 275(5304), 1293-1295.
  • Berridge, K. C. (2003). Pleasures of the brain. Brain and Cognition, 52(1), 106.
  • Chen, M. K., Lakshminarayanan, V., & Santos, L. (2006). How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior. Journal of Political Economy, 114(3), 517-537.
  • Cosmides, L., & Tooby, J. (2000). Evolutionary psychology and the emotions. Handbook of Emotions, 2, 91-115.
  • Damasio, A. R. (1994). Descartes' error : emotion, reason, and the human brain. New York: Putnam.
  • Damasio, A. R. (2003). Looking for Spinoza : joy, sorrow, and the feeling brain (1st ed.). Orlando, Fla. ; London: Harcourt.
  • Damasio, A. R., Damasio, H., & Christen, Y. (1996). Neurobiology of decision-making. Berlin ; New York: Springer.
  • Darwin, C. (1896). The expression of the emotions in man and animals (Authorized ed.). New York: D. Appleton.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., et al. (2004). The neural basis of altruistic punishment. Science, 305(5688), 1254-1258.
  • Ekman, P. (1972). Emotion in the human face: Guide-lines for research and an integration of findings. New York: Pergamon Press.
  • Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion (pp. 45-60). Sussex John Wiley & Sons, Ltd.
  • Faucher, L., & Tappolet, C. (2002). Fear and the focus of attention. Consciousness & emotion, 3(2), 105-144.
  • Griffiths, P. E. (1997). What emotions really are : the problem of psychological categories. Chicago, Ill.: University of Chicago Press.
  • Griffiths, P. E. (2004). Emotions as Natural and Normative Kinds. Philosophy of Science, 71, 901–911.
  • Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural foundations of logical and mathematical cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural predictors of purchases. Neuron, 53(1), 147-156.
  • LeDoux, J. E. (1996). The emotional brain : the mysterious underpinnings of emotional life. New York: Simon & Schuster.
  • Naqvi, N., Shiv, B., & Bechara, A. (2006). The Role of Emotion in Decision Making: A Cognitive Neuroscience Perspective. Current Directions in Psychological Science, 15(5), 260-264.
  • Oosterbeek, H., Sloof, R., & van de Kuilen, G. (2004). Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics, 7, 171-188.
  • Panksepp, J. (1998). Affective neuroscience : the foundations of human and animal emotions. New York: Oxford University Press.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychol Rev, 110(1), 145-172.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755-1758.
  • van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective state and decision-making in the Ultimatum Game. Exp Brain Res, 169(4), 564-568.
  • Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2), 151-175.





6/29/07

Fruit Fly Neuroeconomics and Ecological Rationality

In the latest issue of Science, Zhang et al. examine the decision-making mechanisms of fruit flies (Drosophila). They used mutant flies to see whether dopaminergic systems are necessary for saliency-based, or value-based, decision-making. Value-based decision-making is contrasted with perceptual, or simple, decision-making: in the latter, the decision is made only by integrating sensory cues; in the former, a value is also assigned to the available options. Values are especially important when there is conflicting evidence or when stimuli are ambiguous.


As much research shows, dopaminergic (DA) systems are an important, maybe the most important, valuation mechanism. They link stimuli to expected value. Flies, bees, monkeys and humans all rely on DA neuromodulation to make decisions. The fruit fly offers an interesting opportunity for neuroeconomics: it allows scientists to create genetic mutants (in this case, flies whose DA system shuts off above 30 °C) and analyze their behavior. Zhang and his collaborators discovered that flies without DA activity can still make decisions in well-known situations, where choosing is easy, but fail when facing conflicting stimuli. Hence, when valuation is needed, DA is required: no DA, no valuation, no value-based decision-making.
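The dissociation can be sketched as follows. This is entirely toy code of my own (the 0.5 threshold and additive value signal are invented): perceptual decisions follow the stronger cue directly, while conflicts are resolved only if a DA-dependent value signal is available.

```python
def choose(cue_a, cue_b, value_a=0.0, value_b=0.0, da_active=True):
    """Toy fly: easy discriminations need no valuation; conflicts do."""
    if abs(cue_a - cue_b) > 0.5:       # easy case: cues clearly differ
        return "A" if cue_a > cue_b else "B"
    if not da_active:                  # conflict, but no valuation signal
        return "indecisive"
    # DA-dependent valuation breaks the tie:
    return "A" if (cue_a + value_a) > (cue_b + value_b) else "B"

choose(1.0, 0.2, da_active=False)             # easy: "A" even without DA
choose(0.6, 0.55, da_active=False)            # conflict, no DA: "indecisive"
choose(0.6, 0.55, value_a=0.0, value_b=0.3)   # DA adds value: "B"
```

Shutting off `da_active` leaves simple choices intact but abolishes choice under conflict, which is the pattern the mutant flies showed.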

Conceptually, the study shows how value-based and perceptual decision-making can be separated. The 'simple heuristics' and ecological rationality program made a strong case for simple decision-making: you just "see" the best option. Which city is bigger, Munich or Dortmund? Since the size of a city is correlated with its exposure in the media, it is easy to answer with a simple heuristic (choose the most familiar). In this case, there is no need to evaluate options. But when you have to choose where you want to live, values matter: you need a preference ranking, and preferences seem inherently tied to DA activity.


 

Reference:

Zhang, K., Guo, J. Z., Peng, Y., Xi, W., & Guo, A. (2007). Dopamine-Mushroom Body Circuit Regulates Saliency-Based Decision-Making in Drosophila. Science, 316(5833), 1901-1904.



4/22/07

marginal utility, value and the brain

Economics assumes the principle of diminishing marginal utility, i.e. the utility of a good increases more and more slowly as the quantity consumed increases (Wikipedia). Mathematically, it means that the subjective value of a monetary gain is not a linear function of its monetary value. Before Bernoulli's St. Petersburg paradox (1738/1954), the expected value of a gamble was construed as the sum, over its outcomes, of the product of each outcome's objective (for instance, monetary) value and its probability. Suppose, then, that a gambler is offered the following lottery:

A fair coin is tossed. If the outcome is heads, the lottery ends and you win $2. If the outcome is tails, the coin is tossed again: if it comes up heads, the lottery ends and you win $4, and so on. If the first heads occurs on the nth toss, you win $2^n.

Summing the products of probability and value leads to an infinite expected value:

(0.5 × 2) + (0.25 × 4) + (0.125 × 8) + … = 1 + 1 + 1 + … = ∞

After 30 tosses, the gambler could win more than a billion dollars. How much would it be worth paying for a ticket? If a rational agent maximizes expected value, he or she should be willing to buy a ticket for this lottery at any finite price, since the expected value of the prospect is infinite. But, as Hacking pointed out, “few of us would pay even $25 to enter such a game” (Hacking, 1980). When Bernoulli invited scholars in St. Petersburg to play this lottery, nobody was interested. Bernoulli concluded that the utility function is not linear but logarithmic: the subjective value of $10 differs depending on whether you are Bill Gates or homeless. Bernoulli's discussion of the St. Petersburg paradox is often considered one of the first economic experiments (Roth, 1993, p. 3).
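Bernoulli's resolution is easy to check numerically. The sketch below (my own, under the assumption of a pure log utility function) shows that expected value grows without bound while expected log-utility converges:

```python
import math

def expected_value(n_terms):
    """Partial sum of the St. Petersburg expected value: each toss adds 1."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

def expected_log_utility(n_terms):
    """Same lottery under log utility: the series converges to 2*ln(2)."""
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, n_terms + 1))

expected_value(30)          # 30.0: one unit per toss, diverging with n
expected_log_utility(1000)  # ~1.386 = 2*ln(2), i.e. a finite sum
```

The certainty equivalent under log utility is e^(2 ln 2) = $4, which is much closer to what people are actually willing to pay than "any finite price".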

A new study in neuroeconomics (Tobler et al., 2007) indicates that the brain's valuation mechanisms follow this principle. Subjects in the experiment had to learn whether a particular abstract shape, shown on a computer screen, predicted a monetary reward (a picture of a 20 pence coin) or not (a scrambled picture of the coin). If the utility of money has a diminishing marginal value, then money should matter more to poorer people than to richer ones: the former should learn reward-prediction patterns faster and display more activity in reward-related areas. Bingo! That is exactly what happened: midbrain dopaminergic regions were more solicited in poorer subjects. The valuation mechanisms obey diminishing marginal utility.

This suggests that midbrain dopaminergic systems (about which I blogged earlier; see also the references at the end of this post) are the seat of our natural rationality, or at least one of its major components. These systems compute utility, stimulate motivation and attention, send reward-prediction-error signals, learn from these signals and devise behavioral policies. They do not encode anticipated or experienced utility (other areas are recruited for these: the amygdala and nucleus accumbens for experienced utility, the OFC for anticipated utility, etc.), but decision utility, the cost/benefit analysis of a possible decision.
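The reward-prediction-error signal at the heart of this literature has a very simple core, the Rescorla-Wagner / temporal-difference update. A minimal sketch (learning rate and reward are illustrative):

```python
def update_value(V, reward, alpha=0.1):
    """One step of prediction-error learning."""
    delta = reward - V        # prediction error: better/worse than expected
    return V + alpha * delta  # value moves toward the observed reward

V = 0.0
for _ in range(100):
    V = update_value(V, reward=1.0)  # V converges toward 1.0
```

The signed error term `delta` is the quantity dopamine neurons are reported to track: positive when outcomes beat expectations, negative when they fall short, and zero once the reward is fully predicted.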


References

  • Bernoulli, D. (1738/1954). Exposition of a new theory on the measurement of risk. Econometrica, 22, 23-36.
  • Hacking, I. (1980). Strange expectations. Philosophy of Science, 47, 562-567.
  • Roth, A. E. (1993). On the early history of experimental economics. Journal of the History of Economic Thought, 15, 184-209.
  • Tobler, P. N., Fletcher, P. C., Bullmore, E. T., & Schultz, W. (2007). Learning-related human brain activations reflecting individual finances. Neuron, 54(1), 167-175.
On dopaminergic systems:
  • Ahmed, S. H. (2004). Neuroscience. Addiction as compulsive reward prediction. Science, 306(5703), 1901-1902.
  • Bayer, H. M., & Glimcher, P. W. (2005). Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47(1), 129.
  • Berridge, K. C. (2003). Pleasures of the brain. Brain and Cognition, 52(1), 106.
  • Berridge, K. C., & Robinson, T. E. (1998). What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience? Brain Res Brain Res Rev, 28(3), 309-369.
  • Cohen, J. D., & Blum, K. I. (2002). Reward and decision. Neuron, 36(2), 193-198.
  • Daw, N. D., & Doya, K. (2006). The computational neurobiology of learning and reward. Curr Opin Neurobiol, 16(2), 199-204.
  • Daw, N. D., & Touretzky, D. S. (2002). Long-term reward prediction in td models of the dopamine system. Neural Comput, 14(11), 2567-2583.
  • Dayan, P., & Balleine, B. W. (2002). Reward, motivation, and reinforcement learning. Neuron, 36(2), 285-298.
  • Di Chiara, G., & Bassareo, V. (2007). Reward system and addiction: What dopamine does and doesn't do. Curr Opin Pharmacol, 7(1), 69-76.
  • Egelman, D. M., Person, C., & Montague, P. R. (1998). A computational role for dopamine delivery in human decision-making. J Cogn Neurosci, 10(5), 623-630.
  • Floresco, S. B., & Magyar, O. (2006). Mesocortical dopamine modulation of executive functions: Beyond working memory. Psychopharmacology (Berl), 188(4), 567-585.
  • Frank, M. J., Seeberger, L. C., & O'Reilly, R. C. (2004). By carrot or by stick: Cognitive reinforcement learning in parkinsonism. Science, 306(5703), 1940-1943.
  • Joel, D., Niv, Y., & Ruppin, E. (2002). Actor-critic models of the basal ganglia: New anatomical and computational perspectives. Neural Netw, 15(4-6), 535-547.
  • Kakade, S., & Dayan, P. (2002). Dopamine: Generalization and bonuses. Neural Netw, 15(4-6), 549-559.
  • McCoy, A. N., & Platt, M. L. (2004). Expectations and outcomes: Decision-making in the primate brain. J Comp Physiol A Neuroethol Sens Neural Behav Physiol.
  • Montague, P. R., Hyman, S. E., & Cohen, J. D. (2004). Computational roles for dopamine in behavioural control. Nature, 431(7010), 760.
  • Morris, G., Nevet, A., Arkadir, D., Vaadia, E., & Bergman, H. (2006). Midbrain dopamine neurons encode decisions for future action. Nat Neurosci, 9(8), 1057-1063.
  • Nakahara, H., Itoh, H., Kawagoe, R., Takikawa, Y., & Hikosaka, O. (2004). Dopamine neurons can represent context-dependent prediction error. Neuron, 41(2), 269-280.
  • Nieoullon, A. (2002). Dopamine and the regulation of cognition and attention. Progress in Neurobiology, 67(1), 53.
  • Niv, Y., Daw, N. D., & Dayan, P. (2006). Choice values. Nat Neurosci, 9(8), 987-988.
  • Niv, Y., Duff, M. O., & Dayan, P. (2005). Dopamine, uncertainty and td learning. Behav Brain Funct, 1, 6.
  • Redish, A. D. (2004). Addiction as a computational process gone awry. Science, 306(5703), 1944-1947.
  • Schultz, W. (1999). The reward signal of midbrain dopamine neurons. News Physiol Sci, 14(6), 249-255.
  • Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593-1599.
  • Schultz, W., & Dickinson, A. (2000). Neuronal coding of prediction errors. Annu Rev Neurosci, 23, 473-500.
  • Self, D. (2003). Neurobiology: Dopamine as chicken and egg. Nature, 422(6932), 573-574.
  • Suri, R. E. (2002). Td models of reward predictive responses in dopamine neurons. Neural Netw, 15(4-6), 523-533.
  • Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk. Science, 315(5811), 515-518.
  • Ungless, M. A. (2004). Dopamine: The salient issue. Trends Neurosci, 27(12), 706.