Natural Rationality | decision-making in the economy of nature
Showing posts with label ethics. Show all posts

1/16/08

New entry in the Stanford Encyclopedia of Philosophy

A new entry that might interest decision-making specialists, ethicists, and philosophers:

Decision-Making Capacity

In many Western jurisdictions, the law presumes that adult persons, and sometimes children that meet certain criteria, are capable of making their own health care decisions; for example, consenting to a particular medical treatment, or consenting to participate in a research trial. But what exactly does it mean to say that a subject has or lacks the requisite capacity to decide? This last question has to do with what is commonly called “decisional capacity,” a central concept in health care law and ethics, and increasingly an independent topic of philosophical inquiry.

Decisional capacity can be defined as the ability of health care subjects to make their own health care decisions. Questions of ‘capacity’ sometimes extend to other contexts, such as capacity to stand trial in a court of law, and the ability to make decisions that relate to personal care and finances. However, for the purposes of this discussion, the notion of decisional capacity will be limited to health care contexts only; most notably, those where decisions to consent to or refuse treatment are concerned.

The combined theoretical and practical nature of decisional capacity in the area of consent is probably one of the things that makes it so intellectually compelling to philosophers who write about it. But this is still largely uncultivated philosophical territory. One reason is the highly interdisciplinary and rapidly changing nature of the field. Clinical methods and tests to assess capacity are proliferating. The law is also increasingly being called upon to respond to these clinical developments. All of this makes for a very eclectic and challenging field of inquiry. Philosophers must tread carefully if their contributions are to be timely and relevant.





9/7/07

A neuroeconomic picture of valuation



Values are everywhere: ethics, politics, economics, and law, for instance, all deal with values. They all involve socially situated decision-makers compelled to make evaluative judgments and act upon them. These spheres of human activity are aspects of the same problem: assessing how 'good' something is. Value, in other words, 'runs' on valuation mechanisms. I suggest here a framework for understanding valuation.

I define valuation as the process by which a system maps an object, property, or event X onto a value space, and a valuation mechanism as the device implementing the mapping between X and the value space. I do not suggest that values need to be explicitly represented as a space: by value space, I mean an artifact that accounts for the similarity between values by plotting each of them as a point in a multidimensional coordinate system. Color spaces, for instance, are not conscious representations of colors, but spatial depictions of color similarity along several dimensions such as hue, saturation, and brightness.

The simplest value space, and the most common across the phylogenetic spectrum, has two dimensions: valence (positive or negative) and magnitude. Valence distinguishes between things that are liked and things that are not. Thus if X has a negative valence, it does not imply that X will be avoided, but only that it is disliked. Magnitude encodes the degree of liking or disliking. Other dimensions might be added—temporality (whether X is located in the past, present, or future), other- vs. self-regarding, excitatory vs. inhibitory, basic vs. complex, for instance—but the core of any value system is valence and magnitude, because these two parameters are required to establish rankings. To prefer Heaven to Hell, Democrats to Republicans, salad to meat, or sweet to bitter involves valence and magnitude.
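To make the two-dimensional picture concrete, here is a minimal sketch in Python. The options and all the numbers are mine, purely illustrative: the point is only that once each option is a (valence, magnitude) pair, a preference ranking falls out of the signed product.

```python
from dataclasses import dataclass

@dataclass
class Valuation:
    """A point in a minimal two-dimensional value space."""
    valence: int      # +1 liked, -1 disliked
    magnitude: float  # degree of liking or disliking, >= 0

    def signed_value(self) -> float:
        """Collapse the two dimensions into one number for ranking."""
        return self.valence * self.magnitude

# Hypothetical valuations (the numbers are arbitrary illustrations).
options = {
    "sweet":  Valuation(valence=+1, magnitude=0.8),
    "salad":  Valuation(valence=+1, magnitude=0.5),
    "bitter": Valuation(valence=-1, magnitude=0.6),
}

# Ranking needs both dimensions: valence orients, magnitude orders.
ranking = sorted(options, key=lambda o: options[o].signed_value(), reverse=True)
print(ranking)  # ['sweet', 'salad', 'bitter']
```

Note that valence alone cannot rank "sweet" above "salad" (both are liked), and magnitude alone cannot rank "salad" above "bitter" (bitter's dislike is stronger); only the combination yields the full ordering.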

Nature endowed many animals (mostly vertebrates) with fast and intuitive valuation mechanisms: emotions.[1] Although it is a truism in psychology and the philosophy of mind that there is no crisp definition of what emotions are,[2] I will consider here that an emotion is any kind of neural process whose function is to attribute a valence and a magnitude to something else, and whose operative mode is somatic markers. Somatic markers[3] are bodily states that ‘mark’ options as advantageous or disadvantageous, such as skin conductance, cardiac rhythm, etc. Through learning, bodily states become linked to neural representations of the stimuli that brought those states about. These neural structures may later reactivate the bodily states, or a simulation of those states, and thereby indicate the valence and magnitude of stimuli. These states may or may not account for many legitimate uses of the word “emotion”, but they constitute meaningful categories that could identify natural kinds.[4] In order to avoid confusion between folk-psychological and scientific categories, I will speak of affects and affective states rather than emotions.

Far from being irrational passions, affective states are phylogenetically ancient valuation mechanisms. Since Darwin,[5] many biologists, philosophers, and psychologists[6] have argued that they have adaptive functions such as focusing attention and facilitating communication. As Antonio Damasio and his colleagues discovered, subjects impaired in affective processing are unable to cope with everyday tasks, such as planning meetings.[7] They lose money, family, and social status. Yet they remain completely functional in reasoning and problem-solving tasks. Moreover, they do not feel sad about their situation, even though they perfectly understand what “sad” means, and they seem unable to learn from bad experiences. They are unable to use affect to aid decision-making, a hypothesis that entails that in normal subjects affect does aid decision-making. These findings suggest that decision-making needs affect, not as a set of convenient heuristics, but as a central evaluative mechanism. Without affects, it is possible to think efficiently, but not to decide efficiently (affective areas, however, are also solicited in subjects who learn to recognize logical errors[8]).

Affects, and especially the so-called ‘basic’ or ‘core’ ones such as anger, disgust, liking, and fear,[9] are prominent explanatory concepts in neuroeconomics. The study of valuation mechanisms reveals how the brain values certain objects (e.g., money), situations (e.g., investment, bargaining), or parameters (e.g., risk, ambiguity) of an economic nature. Three kinds of mechanisms are typically invoked in neuroeconomic explanations:

  1. Core affect mechanisms, such as fear (amygdala), disgust (anterior insula), and pleasure (nucleus accumbens), encode the magnitude and valence of stimuli.
  2. Monitoring and integration mechanisms (ventromedial/mesial prefrontal cortex, orbitofrontal cortex, anterior cingulate cortex) combine different values and memories of values.
  3. Modulation and control mechanisms (prefrontal areas, especially the dorsolateral prefrontal cortex) modulate or even override the other affect mechanisms.
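This division of labor can be caricatured as a three-stage pipeline. All the function names, options, and numbers below are mine and purely illustrative; this is a sketch of the information flow, not a model of actual brain circuitry.

```python
def core_affect(stimuli):
    """Stage 1 (amygdala, insula, accumbens): each stimulus
    arrives with a (valence, magnitude) pair."""
    return dict(stimuli)

def integrate(affects):
    """Stage 2 (ventromedial/orbitofrontal cortex, ACC): collapse
    each pair into a single signed value per option."""
    return {s: valence * magnitude for s, (valence, magnitude) in affects.items()}

def control(values, override=None):
    """Stage 3 (DLPFC): optionally modulate or override some values,
    then select the best-valued option."""
    if override:
        values = {**values, **override}
    return max(values, key=values.get)

# Hypothetical stimuli in an ultimatum-like situation.
stimuli = {"fair_offer": (+1, 0.7), "unfair_offer": (-1, 0.6)}
choice = control(integrate(core_affect(stimuli)))
print(choice)  # fair_offer
```

Passing an `override` argument, e.g. `{"unfair_offer": 0.9}`, mimics prefrontal control trumping a core affective value, which is the pattern the ultimatum-game data discussed below are usually taken to show.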

Of course, there is no simple one-to-one mapping between psychological functions and neural structures, but cognitive and affective neuroscience assume a certain dominance and regularity of function. Disgust does not reduce to insular activation, but the anterior insula is significantly involved in the physiological, cognitive, and behavioral expressions of disgust. There is some simplification here—due to the current state of the science—but still enough to do justice to our best theories of brain functioning. I will now review two cases of individual and strategic decision-making, and show how affective mechanisms are involved in valuation.[10]

In a study by Knutson et al.,[11] subjects had to choose whether they would purchase a visually presented product, and then whether they would buy it at a certain price. Desirable products caused activation in the nucleus accumbens, while insular activity was detected when the price was seen as exaggerated. When the price was perceived as acceptable, insular activation was lower and mesial prefrontal structures were more solicited. Activation in these areas was a reliable predictor of whether subjects would buy the product: prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase. A purchasing decision thus involves a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of paying (elicited in the insula). A chocolate box—one of the stimuli presented to the subjects—is located in the high-magnitude, positive-valence region of the value space, while the same chocolate box priced at $80 is located in the high-magnitude, negative-valence region.
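A caricature of this tradeoff (the function and all numbers are mine, not Knutson et al.'s actual model) can be written as a simple net-value rule: buy when the anticipated pleasure of acquiring exceeds the anticipated pain of paying.

```python
def purchase_decision(product_appeal: float, price_pain: float) -> bool:
    """Buy only if the accumbens-style 'pleasure of acquiring' signal
    outweighs the insula-style 'pain of paying' signal."""
    return product_appeal - price_pain > 0

# Hypothetical values: a chocolate box at a fair price vs. the same box at $80.
print(purchase_decision(0.9, 0.3))  # True: purchase
print(purchase_decision(0.9, 1.5))  # False: pass
```

The same product lands on either side of the decision depending only on the price-pain term, which is the pattern the imaging data suggest: the product term tracks accumbens activation, the price term tracks insular activation, and the comparison is the prefrontal mediation.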

In the ultimatum game, a ‘proposer’ makes an offer to a ‘responder’, who can either accept or reject it. The offer is a split of an amount of money. If the responder accepts, she keeps the offered amount while the proposer keeps the difference. If she rejects it, however, both players get nothing. Orthodox game theory recommends that proposers offer the smallest possible amount and that responders accept every offer, but studies consistently find that subjects make fair offers (about 40% of the amount) and reject unfair ones (less than 20%).[12] Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder’s brain, a kind of ‘moral disgust’: the anterior insula is more active when unfair offers are proposed,[13] and insular activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers.[14] Moreover, unfair offers are associated with greater skin conductance.[15] Visceral and insular responses occur only when the proposer is a human: a computer does not elicit those reactions. Besides the anterior insula, another area is recruited in ultimatum decisions: the dorsolateral prefrontal cortex (DLPFC). When there is more activity in the anterior insula than in the DLPFC, unfair offers tend to be rejected; when DLPFC activation is greater than insular activation, they tend to be accepted.
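The rules of the game, together with the empirical rejection pattern just described, can be sketched as follows. The 20% cutoff is a rough summary of the behavioral findings, not a law, and the stylized responder is mine, not a model from the cited studies.

```python
def ultimatum_payoffs(total: float, offer: float, accept: bool):
    """Payoff rule of the ultimatum game: (proposer, responder).
    A rejected offer leaves both players with nothing."""
    if accept:
        return (total - offer, offer)
    return (0.0, 0.0)

def stylized_responder(total: float, offer: float) -> bool:
    """Stylized empirical behavior: reject offers below roughly 20% of the pot
    (orthodox game theory would instead accept any positive offer)."""
    return offer / total >= 0.2

total = 10.0
offer = 1.0  # an unfair 10% offer
accept = stylized_responder(total, offer)
print(ultimatum_payoffs(total, offer, accept))  # (0.0, 0.0): both get nothing
```

From the orthodox standpoint the rejection is irrational: the responder forgoes $1 to get $0. The insula/DLPFC findings suggest why it happens anyway: the negative affective valuation of unfairness can outweigh the positive valuation of the monetary gain.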

These two experiments illustrate how neuroeconomics is beginning to decipher value spaces and how valuation relies on affective mechanisms. Although human valuation is more complex than the simple valence-magnitude space, this ‘neuro-utilitarist’ framework is useful for interpreting imaging and behavioral data: for instance, we need an explanation for insular activation in purchasing and ultimatum decisions, and the simplest and most informative one, as of today, is that it reflects a simulated disgust. More generally, the framework also reveals that the human value space is profoundly social: humans value fairness and reciprocity. Cooperation[16] and altruistic punishment[17] (punishing cheaters at a personal cost even when the probability of future interactions is null), for instance, activate the nucleus accumbens and other pleasure-related areas. People like to cooperate and to make fair offers.

Neuroeconomic experiments also indicate how value spaces can be similar across species. It is known, for instance, that in humans losses[18] elicit activity in fear-related areas such as the amygdala. Since capuchin monkeys' behavior also exhibits loss aversion[19] (i.e., a greater sensitivity to losses than to equivalent gains), the behavioral evidence and neural data suggest that the implementation of loss aversion in primates relies on common valuation mechanisms and processing. The primate—and maybe the mammalian, or even the vertebrate—value space locates losses in a particular region.
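One standard way to write down loss aversion is the piecewise value function from Kahneman and Tversky's prospect theory; this is the textbook formulation, not something taken from the capuchin study, and the coefficient 2.25 is a commonly cited human estimate.

```python
def subjective_value(x: float, loss_aversion: float = 2.25) -> float:
    """Piecewise value function: losses loom larger than equivalent gains.
    loss_aversion > 1 encodes the asymmetry between the two half-spaces."""
    return x if x >= 0 else loss_aversion * x

print(subjective_value(10))   # 10.0: a $10 gain at face value
print(subjective_value(-10))  # -22.5: a $10 loss hurts more than twice as much
```

In value-space terms, the function maps equal-sized gains and losses to regions of unequal magnitude: the negative-valence half of the space is stretched, which is exactly the asymmetry both human imaging data and capuchin trading behavior display.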

Notes

  • [1] (Bechara & Damasio, 2005; Bechara et al., 1997; Damasio, 1994, 2003; LeDoux, 1996; Naqvi et al., 2006; Panksepp, 1998)
  • [2] (Faucher & Tappolet, 2002; Griffiths, 2004; Russell, 2003)
  • [3] (Bechara & Damasio, 2005; Damasio, 1994; Damasio et al., 1996)
  • [4] (Griffiths, 1997)
  • [5] (Darwin, 1896)
  • [6] (Cosmides & Tooby, 2000; Paul Ekman, 1972; Griffiths, 1997)
  • [7] (Damasio, 1994)
  • [8] (Houde & Tzourio-Mazoyer, 2003)
  • [9] (Berridge, 2003; P. Ekman, 1999; Griffiths, 1997; Russell, 2003; Zajonc, 1980)
  • [10] The material for this part is partly drawn from (Hardy-Vallée, forthcoming)
  • [11] (Knutson et al., 2007).
  • [12] (Oosterbeek et al., 2004)
  • [13] (Sanfey et al., 2003)
  • [14] (Sanfey et al., 2003: 1756)
  • [15] (van 't Wout et al., 2006)
  • [16] (Rilling et al., 2002).
  • [17] (de Quervain et al., 2004).
  • [18] (Naqvi et al., 2006)
  • [19] (Chen et al., 2006)

References

  • Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural theory of economic decision. Games and Economic Behavior, 52(2), 336.
  • Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding Advantageously Before Knowing the Advantageous Strategy. Science, 275(5304), 1293-1295.
  • Berridge, K. C. (2003). Pleasures of the brain. Brain and Cognition, 52(1), 106.
  • Chen, M. K., Lakshminarayanan, V., & Santos, L. (2006). How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior. Journal of Political Economy, 114(3), 517-537.
  • Cosmides, L., & Tooby, J. (2000). Evolutionary psychology and the emotions. Handbook of Emotions, 2, 91-115.
  • Damasio, A. R. (1994). Descartes' error : emotion, reason, and the human brain. New York: Putnam.
  • Damasio, A. R. (2003). Looking for Spinoza : joy, sorrow, and the feeling brain (1st ed.). Orlando, Fla. ; London: Harcourt.
  • Damasio, A. R., Damasio, H., & Christen, Y. (1996). Neurobiology of decision-making. Berlin ; New York: Springer.
  • Darwin, C. (1896). The expression of the emotions in man and animals (Authorized ed.). New York: D. Appleton.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., et al. (2004). The neural basis of altruistic punishment. Science, 305(5688), 1254-1258.
  • Ekman, P. (1972). Emotion in the human face: guide-lines for research and an integration of findings. New York: Pergamon Press.
  • Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion (pp. 45-60). Sussex: John Wiley & Sons, Ltd.
  • Faucher, L., & Tappolet, C. (2002). Fear and the focus of attention. Consciousness & emotion, 3(2), 105-144.
  • Griffiths, P. E. (1997). What emotions really are : the problem of psychological categories. Chicago, Ill.: University of Chicago Press.
  • Griffiths, P. E. (2004). Emotions as Natural and Normative Kinds. Philosophy of Science, 71, 901–911.
  • Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural foundations of logical and mathematical cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural predictors of purchases. Neuron, 53(1), 147-156.
  • LeDoux, J. E. (1996). The emotional brain : the mysterious underpinnings of emotional life. New York: Simon & Schuster.
  • Naqvi, N., Shiv, B., & Bechara, A. (2006). The Role of Emotion in Decision Making: A Cognitive Neuroscience Perspective. Current Directions in Psychological Science, 15(5), 260-264.
  • Oosterbeek, H., Sloof, R., & van de Kuilen, G. (2004). Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics, 7, 171-188.
  • Panksepp, J. (1998). Affective neuroscience : the foundations of human and animal emotions. New York: Oxford University Press.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychol Rev, 110(1), 145-172.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755-1758.
  • van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective state and decision-making in the Ultimatum Game. Exp Brain Res, 169(4), 564-568.
  • Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2), 151-175.





3/22/07

Neuroscience and moral decision-making - 2 new studies

First, a great review in Nature Reviews Neuroscience on the neural basis of punishment. Its conclusion is that the so-called 'Strong Reciprocity' explanation of altruistic punishment, the idea put forth by behavioral economists to explain why subjects tend to punish 'cheaters' in experimental games, even at their own cost, may not be that hardwired.

there is no reason to assume that altruistic punishment should necessarily be hard-wired as an inherited intrinsic motivational goal (that is, as an unconditioned appetitive stimulus) in the same manner as primary rewards. However, neither does it exclude the possibility. Future research might help to resolve both the role of learning and early development in the acquisition of altruistic behaviour.
Second, a new Nature paper from Hauser, Adolphs, Damasio, and other collaborators on the consequences of ventromedial prefrontal cortex (VMPFC) lesions for moral reasoning. This area is particularly important in processing emotional information and in anticipating emotional events. People with a damaged VMPFC tend to lose risk aversion, because they lose the ability to anticipate negative feelings, and thus they engage in behaviors detrimental to their own well-being (loss of money, friends, family, social status; see Damasio's Descartes' Error), even though their intellectual faculties are perfectly normal. According to this new study, they also lose the ability to use these emotional predictions for moral judgment. Normal subjects usually adopt a deontological conception of morality (i.e., there are moral principles that apply come what may) in certain cases, such as strangling a crying baby in order to prevent it from revealing your position while enemy troops wander through your village with instructions to kill all civilians. Normal subjects have a strong emotional response that leads them to refuse. VMPFC-impaired subjects, however, tend to rely a little more on a 'utilitarian' scheme: since they lack the emotional appraisal of the situation, they use cost-benefit reasoning. Thus, once again, neuroscience shows how important emotions are for moral judgment and reasoning. The important question, of course, is how these studies can help us understand how we should behave. An interesting question for ethical neurophilosophy...