Natural Rationality | decision-making in the economy of nature
Showing posts with label ACC.

10/12/07

Self-control is a Scarce Resource

We all have a limited supply of energy to devote to self-control:

In a recent study, Michael Inzlicht of the University of Toronto Scarborough and colleague Jennifer N. Gutsell offer an account of what is happening in the brain when our vices get the better of us.

Inzlicht and Gutsell asked participants to suppress their emotions while watching an upsetting movie. The idea was to deplete their resources for self-control. The participants reported their ability to suppress their feelings on a scale from one to nine. Then they completed a Stroop task, which involves naming the color of printed words (e.g., saying "red" when the word “green” appears in red font), another task that requires a significant amount of self-control.
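The conflict at the heart of the Stroop task can be sketched in a few lines of Python (a toy illustration; the stimuli and scoring here are my own, not the study's):

```python
# A minimal sketch of Stroop-task trials. Each trial pairs a printed word
# with a font color; the correct response is always the font color.

def is_congruent(trial):
    """A trial is congruent when the word names its own font color."""
    word, color = trial
    return word == color

trials = [("green", "red"), ("blue", "blue"), ("red", "green")]
incongruent = [t for t in trials if not is_congruent(t)]

# Incongruent trials like ("green", "red") are the ones demanding self-control:
# the habitual response (reading the word) must be inhibited in favor of
# naming the color.
print(len(incongruent))  # number of conflict trials: 2
```

It is on these incongruent trials that depleted participants stumble.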

The researchers found that those who suppressed their emotions performed worse on the Stroop task, indicating that they had used up their resources for self-control while holding back their tears during the film.

An EEG, performed during the Stroop task, confirmed these results. Normally, when a person deviates from their goals (in this case, wanting to read the word, not the color of the font), increased brain activity occurs in a part of the frontal lobe called the anterior cingulate cortex, which alerts the person that they are off-track. The researchers found weaker activity occurring in this brain region during the Stroop task in those who had suppressed their feelings. In other words, after engaging in one act of self-control this brain system seems to fail during the next act.
http://www.psychologicalscience.org/media/releases/2007/inzlicht.cfm
(via Cognews)
  • Inzlicht, M., & Gutsell, J. N. (in press). Running on empty: Neural signals for self-control failure. Psychological Science. (preprint)




9/25/07

My brain has a politics of its own: neuropolitic musing on values and signal detection

Political psychology, like politicians and voters, identifies two main families of political values: left and right, or liberalism and conservatism. Reviewing many studies, Thornhill & Fincher (2007) summarize the cognitive styles of both ideologies:

Liberals tend to be: against, skeptical of, or cynical about familiar and traditional ideology; open to new experiences; individualistic and uncompromising, pursuing a place in the world on personal terms; private; disobedient, even rebellious rulebreakers; sensation seekers and pleasure seekers, including in the frequency and diversity of sexual experiences; socially and economically egalitarian; and risk prone; furthermore, they value diversity, imagination, intellectualism, logic, and scientific progress. Conservatives exhibit the reverse in all these domains. Moreover, the felt need for order, structure, closure, family and national security, salvation, sexual restraint, and self-control, in general, as well as the effort devoted to avoidance of change, novelty, unpredictability, ambiguity, and complexity, is a well-established characteristic of conservatives. (Thornhill & Fincher, 2007).
In their paper, Thornhill & Fincher present an evolutionary hypothesis to explain the liberal and conservative ideologies: both originate from an innate attachment adaptation, parametrized by early childhood experiences. In another but related domain, Lakoff (2002) argued that liberals and conservatives differ in their metaphors: both view the nation or the State as a child, but they hold different perspectives on how to raise it: the Strict Father model (conservatives) or the Nurturant Parent model (liberals); see an extensive description here. The first one

posits a traditional nuclear family, with the father having primary responsibility for supporting and protecting the family as well as the authority to set overall policy, to set strict rules for the behavior of children, and to enforce the rules [where] [s]elf-discipline, self-reliance, and respect for legitimate authority are the crucial things that children must learn.


while in the second:

Love, empathy, and nurturance are primary, and children become responsible, self-disciplined and self-reliant through being cared for, respected, and caring for others, both in their family and in their community.
In the October issue of Nature Neuroscience, a new research paper by Amodio et al. studies the "neurocognitive correlates of liberalism and conservatism". The study is more modest than the title suggests. Subjects performed a Go/No-Go task (click when you see a "W"; don't click when it's an "M"). The experimenters trained the subjects until they were habituated to the Go stimulus; on a few occasions, they were presented with the No-Go stimulus. Since subjects were used to the Go stimulus, the presentation of a No-Go creates a cognitive conflict, pitting fast/automatic against slow/deliberative processing: you have to inhibit a habit in order to stay on goal when the habit pulls in the wrong direction. The idea was to study the correlation between political values and conflict monitoring. The latter is partly mediated by the anterior cingulate cortex, a brain area widely studied in neuroeconomics and decision neuroscience (see this post). EEG recordings indicated that liberals' neural response to conflict was stronger when response inhibition was required. Hence liberalism is associated with a greater sensitivity to response conflict, while conservatism is associated with a greater persistence in the habitual pattern. These results, say the authors, are
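A toy simulation of the Go/No-Go logic makes the conflict concrete (the 80/20 stimulus proportions and the two responder strategies are illustrative assumptions, not the study's parameters):

```python
import random

random.seed(0)
# A stimulus stream that is mostly "W" (Go), so the Go response becomes
# habitual, with rare "M" (No-Go) trials that create response conflict.
stream = ["W"] * 80 + ["M"] * 20
random.shuffle(stream)

def habitual_responder(stimulus):
    """Always emits the prepotent Go response -- no inhibition at all."""
    return "press"

def controlled_responder(stimulus):
    """Inhibits the Go response on No-Go trials (perfect conflict monitoring)."""
    return "press" if stimulus == "W" else "withhold"

# Errors are presses on No-Go ("M") trials.
errors_habit = sum(habitual_responder(s) == "press" and s == "M" for s in stream)
errors_ctrl = sum(controlled_responder(s) == "press" and s == "M" for s in stream)
print(errors_habit, errors_ctrl)  # 20 0
```

Real subjects fall between these two extremes; the EEG question is how strongly the ACC flags the conflict on each "M" trial.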

consistent with the view that political orientation, in part, reflects individual differences in the functioning of a general mechanism related to cognitive control and self-regulation
Thus valuing tradition vs. novelty, or security vs. novelty, might have sensorimotor counterparts, or symptoms. Of course, this does not mean that the neural basis of conservatism has been identified, or that there is a "liberal area", etc., but this study suggests how micro-tasks may help to elucidate, as the authors say in the closing sentence, "how abstract, seemingly ineffable constructs, such as ideology, are reflected in the human brain."

What this study--together with other data on conservatives and liberals--might justify is the following hypothesis: what if conservatives and liberals are natural kinds? That is, "homeostatic property clusters" (see Boyd 1991, 1999): categories of "things" formed by nature (like water, mammals, etc.), not by definition (like supralunar objects, non-cats, grue emeralds, etc.)? Things that share surface properties (political beliefs and behavior) whose co-occurrence can be explained by underlying mechanisms (here, neural processing of conflict monitoring)? Maybe our evolution as social animals required the interplay of tradition-oriented and novelty-oriented individuals, of risk-prone and risk-averse agents. But why did evolution not select one type over the other in the first place? Here is another completely armchair hypothesis: in order to distribute the signal detection problem across the social body.

Which kind of error would you rather make: a false positive (you identify a signal, but it's only noise) or a false negative (you think it's noise, but it's a signal)? A false alarm or a miss? That is the kind of problem modeled by signal detection theory (SDT): since there is always some noise when you try to detect a signal, you cannot know in advance, under radical uncertainty, which policy you should stick to (risk-averse or risk-prone). "Signal" and "noise" are generic information-theoretic terms that apply to any situation where an agent tries to determine whether a stimulus is present:




It is rather ironic that signal detection theorists employ the terms liberal* and conservative* (the "*" means that I am talking about SDT, not politics) to refer to different biases, or criteria, in signal detection. A liberal* bias is more likely to set off a positive response (increasing the probability of a false positive), whereas a conservative* bias is more likely to set off a negative response (increasing the probability of a false negative). The big problem in life is that in certain domains conservatism* pays, while in others liberalism* does (see Proust 2006): when identifying danger, a false negative is more expensive (better safe than sorry), whereas when looking for food a false positive can be more expensive (better satiated than exhausted). So a single fixed criterion is not adaptive; but how do you adjust the criterion properly? If you are an individual agent, you must alternate between liberal* and conservative* criteria based on your knowledge. But if you are part of a group, liberal* and conservative* biases may be distributed: certain individuals might be more liberal* (let's send them to stand and keep watch) and others more conservative* (let's send them foraging). Collectively, it could be a good solution (if it is enforced by norms of cooperation) to perpetual uncertainty and danger. So if our species evolved with a distribution of signal detection criteria, then we should have evolved different cognitive styles and personality traits that deal differently with uncertainty: those who favor habits, traditions, and security, and the others. If liberal* and conservative* criteria are applied to other domains such as the family (an institution that existed before the State), you may end up with the Strict Father and Nurturant Parent models; when these models are applied to political decision-making, you may end up with liberals and conservatives (no "*"). That would give a new meaning to the idea that we are, by nature, political animals.
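The tradeoff between liberal* and conservative* criteria can be sketched with the standard equal-variance SDT model (the sensitivity d' = 1 and the two criterion values below are arbitrary choices for illustration):

```python
from statistics import NormalDist

# Equal-variance signal detection: "noise" observations come from N(0, 1)
# and "signal" observations from N(d', 1). The observer responds "signal"
# whenever the observation exceeds the criterion c.
d_prime = 1.0
noise = NormalDist(0.0, 1.0)
signal = NormalDist(d_prime, 1.0)

def rates(criterion):
    hit = 1 - signal.cdf(criterion)          # P(respond "signal" | signal)
    false_alarm = 1 - noise.cdf(criterion)   # P(respond "signal" | noise)
    return hit, false_alarm

liberal_hit, liberal_fa = rates(0.2)   # liberal* criterion: low threshold
conserv_hit, conserv_fa = rates(0.8)   # conservative* criterion: high threshold

# A liberal* criterion buys more hits at the price of more false alarms;
# a conservative* criterion does the reverse.
assert liberal_hit > conserv_hit and liberal_fa > conserv_fa
```

The sentry wants the first pair of rates; the forager wants the second.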






9/7/07

A neuroeconomic picture of valuation



Values are everywhere: ethics, politics, economics, and law, for instance, all deal with values. They all involve socially situated decision-makers compelled to make evaluative judgments and act upon them. These spheres of human activity are different aspects of the same problem: assessing how 'good' something is. This means that value 'runs' on valuation mechanisms. I suggest here a framework for understanding valuation.

I define valuation as the process by which a system maps an object, property or event X onto a value space, and a valuation mechanism as the device implementing the mapping between X and the value space. I do not suggest that values need to be explicitly represented as a space: by value space, I mean an artifact that accounts for the similarity between values by plotting each of them as a point in a multidimensional coordinate system. Color spaces, for instance, are not conscious representations of colors, but spatial depictions of color similarity along several dimensions such as hue, saturation, and brightness.

The simplest value space, and the most common across the phylogenetic spectrum, has two dimensions: valence (positive or negative) and magnitude. Valence distinguishes between things that are liked and things that are not. Thus if X has a negative valence, it does not imply that X will be avoided, but only that it is disliked. Magnitude encodes the degree of liking or disliking. Other dimensions might be added—temporality (whether X is located in the present, past, or future), other- vs. self-regarding, excitatory vs. inhibitory, basic vs. complex, for instance—but the core of any value system is valence and magnitude, because these two parameters are required to establish rankings. To prefer Heaven to Hell, Democrats to Republicans, salad to meat, or sweet to bitter involves valence and magnitude.
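A minimal sketch of such a valence-magnitude value space in Python (the items and their coordinates are invented for illustration; nothing here is a claim about actual preferences):

```python
# A toy two-dimensional value space: each item is mapped to a
# (valence, magnitude) pair, and rankings fall out of the signed magnitude.
value_space = {
    "sweet":  (+1, 0.8),
    "bitter": (-1, 0.6),
    "salad":  (+1, 0.3),
    "meat":   (+1, 0.2),  # valences and magnitudes are made up
}

def signed_value(item):
    valence, magnitude = value_space[item]
    return valence * magnitude

def prefers(a, b):
    """a is preferred to b when its signed value is higher."""
    return signed_value(a) > signed_value(b)

assert prefers("sweet", "bitter")
assert prefers("salad", "meat")
```

The point of the sketch is that valence and magnitude alone already suffice to generate a complete preference ordering.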

Nature endowed many animals (mostly vertebrates) with fast and intuitive valuation mechanisms: emotions.[1] Although it is a truism in psychology and the philosophy of mind that there is no crisp definition of what emotions are,[2] I will consider here that an emotion is any kind of neural process whose function is to attribute a valence and a magnitude to something else and whose operative mode is somatic markers. Somatic markers[3] are bodily states that ‘mark’ options as advantageous or disadvantageous, such as skin conductance, cardiac rhythm, etc. Through learning, bodily states become linked to neural representations of the stimuli that brought about these states. These neural structures may later reactivate the bodily states, or a simulation of these states, and thereby indicate the valence and magnitude of stimuli. These states may or may not account for many legitimate uses of the word “emotion”, but they constitute meaningful categories that could identify natural kinds[4]. In order to avoid confusion between folk-psychological and scientific categories, I will rather talk of affects and affective states, not emotions.

Far from being irrational passions, affective states are phylogenetically ancient valuation mechanisms. Since Darwin,[5] many biologists, philosophers and psychologists[6] have argued that they have adaptive functions such as focusing attention and facilitating communication. As Antonio Damasio and his colleagues discovered, subjects impaired in affective processing are unable to cope with everyday tasks, such as planning meetings[7]. They lose money, family and social status. However, they remain completely functional in reasoning or problem-solving tasks. Moreover, they do not feel sad about their situation, even if they perfectly understand what “sad” means, and they seem unable to learn from bad experiences. They are unable to use affect to aid decision-making, a hypothesis that entails that in normal subjects, affect does aid decision-making. These findings suggest that decision-making needs affect, not as a set of convenient heuristics, but as a central evaluative mechanism. Without affects, it is possible to think efficiently, but not to decide efficiently (affective areas, however, are solicited in subjects who learn to recognize logical errors[8]).

Affects, and especially the so-called ‘basic’ or ‘core’ ones such as anger, disgust, liking and fear,[9] are prominent explanatory concepts in neuroeconomics. The study of valuation mechanisms reveals how the brain values certain objects (e.g. money), situations (e.g. investment, bargaining) or parameters (e.g. risk, ambiguity) of an economic nature. Three kinds of mechanisms are typically involved in neuroeconomic explanations:

  1. Core affect mechanisms, such as fear (amygdala), disgust (anterior insula) and pleasure (nucleus accumbens), encode the magnitude and valence of stimuli.
  2. Monitoring and integration mechanisms (ventromedial/mesial prefrontal cortex, orbitofrontal cortex, anterior cingulate cortex) combine different values and memories of values.
  3. Modulation and control mechanisms (prefrontal areas, especially the dorsolateral prefrontal cortex) modulate or even override the other affective mechanisms.

Of course, there is no simple mapping between psychological functions and neural structures, but cognitive and affective neuroscience assume a dominance and a certain regularity in function. Disgust does not reduce to insular activation, but the anterior insula is significantly involved in the physiological, cognitive and behavioral expressions of disgust. There is a bit of simplification here—due to the current state of the science—but enough to do justice to our best theories of brain functioning. I will now review two cases of individual and strategic decision-making, and show how affective mechanisms are involved in valuation[10].

In a study by Knutson et al.,[11] subjects had to choose whether or not they would purchase a product (visually presented), and then whether or not they would buy it at a certain price. While desirable products caused activation in the nucleus accumbens, activity was detected in the insula when the price was seen as exaggerated. If the price was perceived as acceptable, lower insular activation was detected, and mesial prefrontal structures were more solicited. The activation in these areas was a reliable predictor of whether or not subjects would buy the product: prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase. Thus purchasing decisions involve a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of paying (elicited in the insula). A chocolate box—a stimulus presented to the subjects—is located in the high-magnitude, positive-valence region of the value space, while the same chocolate box priced at $80 is located in the high-magnitude, negative-valence region of the space.
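The tradeoff can be caricatured in a few lines (the price-pain function and every number below are my own assumptions, not Knutson et al.'s model):

```python
# Schematic purchase rule: buy when the anticipated pleasure of acquiring
# (a nucleus-accumbens-like signal) outweighs the pain of paying
# (an insula-like signal that grows with perceived overpricing).
def purchase(pleasure, price, fair_price):
    price_pain = max(0.0, price - fair_price) / fair_price
    return pleasure > price_pain

# A fairly priced chocolate box is bought; the same box at $80 is not.
assert purchase(pleasure=0.9, price=10, fair_price=12)
assert not purchase(pleasure=0.9, price=80, fair_price=12)
```

The real finding is correlational, of course; the rule only dramatizes how one signal can outvote the other.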

In the ultimatum game, a ‘proposer’ makes an offer to a ‘responder’, who can either accept or refuse it. The offer is a split of an amount of money. If the responder accepts, she keeps the offered amount while the proposer keeps the difference. If she rejects it, however, both players get nothing. Orthodox game theory recommends that proposers offer the smallest possible amount and that responders accept every proposition, but all studies confirm that subjects make fair offers (about 40% of the amount) and reject unfair ones (less than 20%)[12]. Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder’s brain, a ‘moral disgust’: the anterior insula is more active when unfair offers are proposed,[13] and insular activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers[14]. Moreover, unfair offers are associated with greater skin conductance[15]. Visceral and insular responses occur only when the proposer is a human: a computer does not elicit those reactions. Besides the anterior insula, another area is recruited in ultimatum decisions: the dorsolateral prefrontal cortex (DLPFC). When there is more activity in the anterior insula than in the DLPFC, unfair offers tend to be rejected, while they tend to be accepted when DLPFC activation is greater than that of the anterior insula.
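The rules and the behavioral regularities above can be written out directly (the 20% rejection threshold comes from the text; real responders are of course noisier):

```python
# Ultimatum game: the proposer offers a split of the pot; if the responder
# rejects, both get nothing. Responders in experiments reliably reject
# offers below roughly 20% of the pot.
def play_ultimatum(pot, offer, rejection_threshold=0.2):
    if offer < rejection_threshold * pot:
        return (0, 0)             # rejection: both players get nothing
    return (pot - offer, offer)   # acceptance: (proposer's share, responder's share)

assert play_ultimatum(pot=10, offer=4) == (6, 4)   # fair offer, accepted
assert play_ultimatum(pot=10, offer=1) == (0, 0)   # unfair offer, rejected
```

Note that rejecting is costly for the responder; this is exactly why orthodox game theory predicts acceptance of any positive offer.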

These two experiments illustrate how neuroeconomics is beginning to decipher value spaces and how valuation relies on affective mechanisms. Although human valuation is more complex than the simple valence-magnitude space, this ‘neuro-utilitarist’ framework is useful for interpreting imaging and behavioral data: for instance, we need an explanation for insular activation in purchasing and ultimatum decisions, and the simplest and most informative, as of today, is that they trigger a simulated disgust. More generally, the framework also reveals that the human value space is profoundly social: humans value fairness and reciprocity. Cooperation[16] and altruistic punishment[17] (punishing cheaters at a personal cost when the probability of future interactions is null), for instance, activate the nucleus accumbens and other pleasure-related areas. People like to cooperate and to make fair offers.

Neuroeconomic experiments also indicate how value spaces can be similar across species. It is known, for instance, that in humans losses[18] elicit activity in fear-related areas such as the amygdala. Since capuchin monkeys' behavior also exhibits loss aversion[19] (i.e., a greater sensitivity to losses than to equivalent gains), behavioral evidence and neural data suggest that loss aversion in primates relies on common valuation mechanisms and processing. The primate—and maybe the mammalian or even the vertebrate—value space locates losses in a particular region.

Notes

  • [1] (Bechara & Damasio, 2005; Bechara et al., 1997; Damasio, 1994, 2003; LeDoux, 1996; Naqvi et al., 2006; Panksepp, 1998)
  • [2] (Faucher & Tappolet, 2002; Griffiths, 2004; Russell, 2003)
  • [3] (Bechara & Damasio, 2005; Damasio, 1994; Damasio et al., 1996)
  • [4] (Griffiths, 1997)
  • [5] (Darwin, 1896)
  • [6] (Cosmides & Tooby, 2000; Paul Ekman, 1972; Griffiths, 1997)
  • [7] (Damasio, 1994)
  • [8] (Houde & Tzourio-Mazoyer, 2003)
  • [9] (Berridge, 2003; P. Ekman, 1999; Griffiths, 1997; Russell, 2003; Zajonc, 1980)
  • [10] The material for this part is partly drawn from (Hardy-Vallée, forthcoming)
  • [11] (Knutson et al., 2007).
  • [12] (Oosterbeek et al., 2004)
  • [13] (Sanfey et al., 2003)
  • [14] (Sanfey et al., 2003: 1756)
  • [15] (van 't Wout et al., 2006)
  • [16] (Rilling et al., 2002).
  • [17] (de Quervain et al., 2004).
  • [18] (Naqvi et al., 2006)
  • [19] (Chen et al., 2006)

References

  • Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural theory of economic decision. Games and Economic Behavior, 52(2), 336.
  • Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding Advantageously Before Knowing the Advantageous Strategy. Science, 275(5304), 1293-1295.
  • Berridge, K. C. (2003). Pleasures of the brain. Brain and Cognition, 52(1), 106.
  • Chen, M. K., Lakshminarayanan, V., & Santos, L. (2006). How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior. Journal of Political Economy, 114(3), 517-537.
  • Cosmides, L., & Tooby, J. (2000). Evolutionary psychology and the emotions. Handbook of Emotions, 2, 91-115.
  • Damasio, A. R. (1994). Descartes' error : emotion, reason, and the human brain. New York: Putnam.
  • Damasio, A. R. (2003). Looking for Spinoza : joy, sorrow, and the feeling brain (1st ed.). Orlando, Fla. ; London: Harcourt.
  • Damasio, A. R., Damasio, H., & Christen, Y. (1996). Neurobiology of decision-making. Berlin ; New York: Springer.
  • Darwin, C. (1896). The expression of the emotions in man and animals ([Authorized ed.). New York,: D. Appleton.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., et al. (2004). The neural basis of altruistic punishment. Science, 305(5688), 1254-1258.
  • Ekman, P. (1972). Emotion in the human face: guide-lines for research and an integration of findings. New York,: Pergamon Press.
  • Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion (pp. 45-60). Sussex John Wiley & Sons, Ltd.
  • Faucher, L., & Tappolet, C. (2002). Fear and the focus of attention. Consciousness & emotion, 3(2), 105-144.
  • Griffiths, P. E. (1997). What emotions really are : the problem of psychological categories. Chicago, Ill.: University of Chicago Press.
  • Griffiths, P. E. (2004). Emotions as Natural and Normative Kinds. Philosophy of Science, 71, 901–911.
  • Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural foundations of logical and mathematical cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural predictors of purchases. Neuron, 53(1), 147-156.
  • LeDoux, J. E. (1996). The emotional brain : the mysterious underpinnings of emotional life. New York: Simon & Schuster.
  • Naqvi, N., Shiv, B., & Bechara, A. (2006). The Role of Emotion in Decision Making: A Cognitive Neuroscience Perspective. Current Directions in Psychological Science, 15(5), 260-264.
  • Oosterbeek, H., S., R., & van de Kuilen, G. (2004). Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics 7, 171-188.
  • Panksepp, J. (1998). Affective neuroscience : the foundations of human and animal emotions. New York: Oxford University Press.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychol Rev, 110(1), 145-172.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755-1758.
  • van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective state and decision-making in the Ultimatum Game. Exp Brain Res, 169(4), 564-568.
  • Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2), 151-175.





7/24/07

The development of loss-aversion

An agent is loss-averse if the absolute value of losing X (say, $100) is higher than the absolute value of gaining X: if losing $100 "hurts more" than receiving it feels good. This bias is a robust finding in psychology. A new paper in Developmental Science indicates that loss aversion unfolds, over the lifetime, in three different stages. Children, adolescents and adults display, in the Iowa Gambling Task, different patterns that suggest a developmental continuum in loss aversion:

  • (a) guessing, with a slight tendency to consider the frequency of loss, to
  • (b) focusing on the frequency of loss, to
  • (c) considering both the frequency and the amount of probabilistic loss.
Hence we all start with a sensitivity to losses (but only to their frequency), and as we become equipped with more complex cognitive aptitudes we pay attention to the magnitude of losses. According to Huizenga et al., the development of proportional reasoning explains the increasing complexity of loss aversion.
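The asymmetry defined at the top of this post is standardly modeled with a kinked value function (the loss-aversion coefficient of 2.25 is a commonly cited prospect-theory estimate, not a figure from this paper):

```python
# Prospect-theory-style value function: losses are weighted more heavily
# than equivalent gains by a factor lambda (here 2.25).
def subjective_value(x, loss_aversion=2.25):
    return x if x >= 0 else loss_aversion * x

# Losing $100 "hurts more" than gaining $100 feels good:
assert abs(subjective_value(-100)) > subjective_value(100)
print(subjective_value(100), subjective_value(-100))  # 100 -225.0
```

The developmental story above can then be read as children gradually acquiring the "amount" dimension of this function, not just the frequency of losses.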



7/19/07

Beautiful picture of brain areas involved in decision-making

Found yesterday, in a paper by Sanfey (nice review paper, by the way):




"Fig. 2. Map of brain areas commonly found to be activated in decision-making studies. The sagittal section (A) shows the location of the anterior cingulate cortex (ACC), medial prefrontal cortex (MPFC), orbitofrontal cortex (OFC), nucleus accumbens (NA), and substantia nigra (SN). The lateral view (B) shows the location of the dorsolateral prefrontal cortex (DLPFC) and lateral intraparietal area (LIP). The axial section (C; cut along the white line in A and B) shows the location of the insula (INS) and basal ganglia (BG)."

5/18/07

The psychopath, the prisoner's dilemma and the invisible hand of morality

In the prisoner’s dilemma, the police hold, in separate cells, two individuals accused of robbing a bank. The suspects (let’s call them Bob and Alice) are unable to communicate with each other. The police offer them the following options: confess or remain silent. If one confesses – implicating his or her partner – and the other remains silent, the former goes free while the other gets a 10-year sentence. If they both confess, they will each serve a 5-year sentence. If they both remain silent, the sentence is reduced to 2 years. The situation can be represented by the following payoff matrix:



Assuming that Bob and Alice have common knowledge – everybody knows that everybody knows that everybody knows, etc., ad infinitum – of each other’s rationality and of the rules of the game, they should confess. Indeed, each will expect the other to make the best move, which is confessing, since confessing yields either freedom or a 5-year sentence, while remaining silent yields either a 2-year or a 10-year sentence. The best reply to this move is also confessing, so we expect Bob and Alice to confess. Even though they would both be better off remaining silent, this choice is individually risky: each risks a 10-year sentence if the other does not remain silent. In other words, they should not choose the cooperative move.
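The payoff matrix and the dominance argument can be written out explicitly (payoffs in years of prison, lower is better, taken from the story above):

```python
# Keys are (Bob's move, Alice's move); values are (Bob's sentence, Alice's sentence).
payoffs = {
    ("confess", "confess"): (5, 5),
    ("confess", "silent"):  (0, 10),
    ("silent",  "confess"): (10, 0),
    ("silent",  "silent"):  (2, 2),
}

# Confessing strictly dominates: whatever Alice does, Bob serves fewer
# years by confessing than by remaining silent (and symmetrically for Alice).
for alice_move in ("confess", "silent"):
    assert payoffs[("confess", alice_move)][0] < payoffs[("silent", alice_move)][0]
```

The loop is the whole argument: mutual confession is the unique equilibrium even though mutual silence (2, 2) beats it for both players.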

Experimental game theory indicates that subjects cooperate massively in the prisoner’s dilemma. Recently, neuroeconomics showed that players enjoy cooperating, something economists refer to as the “warm glow of giving”. In the prisoner’s dilemma, players who initiate and players who experience mutual cooperation display activation in the nucleus accumbens and other reward-related areas (Rilling et al. 2002).

In a new paper, Rilling and his collaborators (2007) investigate how psychopathy influences cooperation in the prisoner's dilemma. Their subjects were not psychopaths per se: instead, they took normal individuals and--with a questionnaire--rated their attitudes on a "psychopathy scale". While in a scanner, the subjects were asked to play a prisoner's dilemma with non-scanned partners. They were in fact playing against a computer following the "forgiving tit-for-tat" strategy, analogous to tit-for-tat except that it reciprocates a previous defection only 67% of the time.
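The "forgiving tit-for-tat" strategy is easy to sketch (only the 67% reciprocation rate comes from the text; the exact implementation used in the experiment may differ):

```python
import random

# Forgiving tit-for-tat: retaliate against the opponent's last defection
# only 67% of the time; otherwise cooperate (and always cooperate after
# the opponent cooperates).
def forgiving_tit_for_tat(opponent_last_move, rng=random):
    if opponent_last_move == "defect" and rng.random() < 0.67:
        return "defect"
    return "cooperate"

assert forgiving_tit_for_tat("cooperate") == "cooperate"  # never defects first

rng = random.Random(0)  # seeded for reproducibility
moves = [forgiving_tit_for_tat("defect", rng) for _ in range(1000)]
# Roughly two thirds of the responses to defection are retaliations.
assert 0 < moves.count("defect") < 1000
```

The occasional forgiveness keeps the game from locking into mutual defection, which is what makes sustained cooperation (and its breakdown by high-psychopathy players) observable at all.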

Behavioral results indicate that psychopathy is correlated with defection, even after mutual cooperation. One explanation could be that psychopaths have an impaired amygdala, and hence are less sensitive to aversive conditioning. This is consistent with fMRI data suggesting that the Cooperate-Defect outcome (I cooperate, you defect) elicits a weaker aversive reaction in individuals who score higher in psychopathy. Moreover, choosing to defect elicited more activity in the ACC and DLPFC (areas classically involved in emotional modulation and cognitive control), suggesting that defecting is effortful. Psychopathy, however, is correlated with less activity in these areas: it thus seems easier for psychopathic personalities to be non-cooperative. "Regular" people need more cognitive effort to override their cooperative biases.

fMRI also suggests that low-psychopathy and high-psychopathy subjects differ in how their brains implement cooperative behavior: while the former rely on emotional biases (strong activation in the OFC, weak activation in the DLPFC), the latter rely on cognitive control (weak activation in the OFC, strong activation in the DLPFC). High-psychopathy subjects would also be, according to Rilling et al., weakly emotionally biased toward defection: they exhibit stronger OFC activation and weaker DLPFC activation for defection. Thus, it seems that normal subjects experience the immediate gratification of cooperation, independently of the monetary payoff. Psychopaths do not feel the "warm glow" of cooperation, and thus do not cooperate.

Philosophically, there is an interesting lesson here: both low-psychopathy and high-psychopathy subjects follow their own selfish biases: low-psychopathy subjects enjoy cooperating, and high-psychopathy subjects prefer defecting. This is consistent with a thesis I will describe more thoroughly another day, the "Invisible Hand of Morality": like markets, morality emerges out of the interaction of selfish agents. Luckily, thanks to evolution, culture, education, norms, etc., normal people's selfishness tends to be geared toward cooperation. Psychopaths are not more selfish than normal people: their selfishness simply does not value cooperation, or other social virtues. Thus morality is not (only) "in the head": it is partly distributed in sensorimotor/somatovisceral mechanisms, cultural habits, external cues, institutions, etc. The other lesson is that morality is multiply realizable: it can be realized through emotional biases or through cognitive control.

  • Rilling, J. K., Glenn, A. L., Jairam, M. R., Pagnoni, G., Goldsmith, D. R., Elfenbein, H. A., et al. (2007). Neural correlates of social cooperation and non-cooperation as a function of psychopathy. Biological Psychiatry, 61(11), 1260-1271.
  • Rilling, J., Gutman, D., Zeh, T., Pagnoni, G., Berns, G., & Kilts, C. (2002). A neural basis for social cooperation. Neuron, 35(2), 395-405.



3/28/07

Anterior cingulate choices and orbitofrontal preferences

With all the studies in neuroeconomics, it is hard to get the whole picture of decision-making. In a paper in Trends in Cognitive Sciences, Rushworth et al. review the contribution of two important areas: the anterior cingulate cortex (ACC) and the orbitofrontal cortex (OFC). After reviewing many studies, the authors conclude that they contribute substantially, respectively, to the generation of reward-based actions and to the representation of value. The OFC thus encodes values, expectations and preferences (patients with OFC lesions are impaired in their decision-making abilities because they cannot assess the utility of different options). The ACC is more concerned with the value of actions and with the generation of exploratory actions and their valuation. Thus, to give an extremely simplistic description, the OFC is about preferences and the ACC about choices, the two most important components of decision-making: in making a rational decision, one chooses to do A because one prefers A to the other options:

The OFC is important when reinforcement is associated with stimuli and for the representation of preferences. It is critical when behaviour depends on detailed, flexible and adjustable predictions of outcomes or on models of the reinforcement environment. In the ACC, reward representation is closely bound to action or task representation. This means that the ACC mediates the relationship between the previous action-reinforcement history and the next action choice.
ACC is also more involved in social cognition.

The following image depicts the connections between the OFC, the ACC and other areas. As you can see, the OFC is a little more on the "input side" while the ACC is on the "output side". In both cases, the amygdala (involved in fear, memory, learning and attention) and the ventral striatum (reward processing and motivation) are important players in this game:

We are far from having the whole picture, or a neuroeconomic Theory of Everything, but these syntheses help us understand the mechanisms of decision-making. The next big step, I guess, would be the integration of this connectivity pattern with the function of dopaminergic neurons, thought to implement TD-learning algorithms (see this previous post).
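For concreteness, here is a minimal TD(0) update of the kind dopaminergic neurons are thought to implement (the learning rate, discount factor and reward schedule are arbitrary illustrative choices):

```python
# TD(0) on a single cue -> reward transition: the prediction error delta
# drives the cue's value estimate toward the observed reward, mirroring
# the phasic dopamine signal.
alpha, gamma = 0.1, 1.0          # learning rate and discount factor
V = {"cue": 0.0, "terminal": 0.0}

for _ in range(100):             # repeated cue -> reward trials
    reward = 1.0
    delta = reward + gamma * V["terminal"] - V["cue"]   # prediction error
    V["cue"] += alpha * delta

# After training the cue fully predicts the reward, and the prediction
# error (the "dopamine signal") vanishes.
assert abs(V["cue"] - 1.0) < 0.01
```

In the dopamine story, it is delta, not V, that the neurons broadcast: large when a reward is unexpected, near zero once the cue predicts it.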

In any case, whatever the details turn out to be, it is clear that a theory of decision-making will be a theory of "affective management". From a historical-philosophical perspective, all this research can be seen as a revival of the intellectualism/voluntarism dispute. According to intellectualism, a rational action is the product of a reasoning process that determines what is good, while voluntarism takes action to be the product of a motivation. Neuroeconomics, hedonic psychology and affective cognition all suggest a contemporary form of voluntarism.