Natural Rationality | decision-making in the economy of nature

4/2/08

Utilitarianism and the Brain

An interesting debate about the interpretation of neuroscientific results (from the X-Phi Blog):

Guy Kahane and Nicholas Shackel have a new paper out in Nature that criticizes recent neuroscientific work on moral judgment and utilitarian bias.  One of their stalking horses is a paper by Koenigs et al. that also appeared recently in Nature.  The original Koenigs et al. paper can be found here and their reply to the Kahane and Shackel piece can be found here




3/26/08

Nature Neuroscience Special Issue about Decision Neuroscience

The latest issue of Nature Neuroscience features four great papers on the neuroscience of decision-making:

  • Choice, uncertainty and value in prefrontal and cingulate cortex
    Matthew F S Rushworth and Timothy E J Behrens
  • Risky business: the neuroeconomics of decision making under uncertainty
    Michael L Platt and Scott A Huettel
  • Game theory and neural basis of social decision making
    Daeyeol Lee
  • Modulators of decision making
    Kenji Doya
Enjoy!



3/11/08

Why Neuroeconomics Needs a Concept of (Natural) Rationality

Neuroeconomists (more than “decision neuroscientists”) often report their findings as strong evidence against the rationality of decision-makers. In the case of cooperation, it is often claimed that emotions motivate cooperation, since neural activity elicited by cooperation overlaps with neural activity elicited by hedonic rewards (Fehr & Camerer, 2007). Also, when subjects have to choose whether or not they would purchase a product, desirable products cause activation in the nucleus accumbens (associated with anticipation of pleasure), whereas if the price is seen as excessive, activity is detected in the insula (involved in disgust and fear; Knutson et al., 2007).

The accumulation of evidence about the engagement of affective areas in decision-making is indisputable, and seems to make a strong case against a once pervasive “rationalist” vision of decision-making in cognitive science and economics. It is not, however, a definitive argument for emotivism (the view that we choose with our "gut feelings") and irrationalism. For at least three reasons (methodological, empirical and conceptual), these findings should not be seen as supporting an emotivist account.

First, characterizing a brain area as “affective” or “emotional” is misleading. There is no clear distinction, in the brain, between affective and cognitive areas. For instance, the anterior insula is involved in disgust, but also in disbelief (Harris et al., 2007). A high-level task such as cognitive control (e.g., holding items in working memory in a goal-oriented task) requires both “affective” and “cognitive” areas (Pessoa, 2008). The affective/cognitive distinction is a folk-psychological one, not a reflection of brain anatomy and connectivity. There is a certain degree of specialization, but generally speaking any task recruits a wide array of areas, and each area is redeployed in many tasks. In complex beings like us, so-called “affective” areas are never purely affective: they always contribute to higher-level cognition, such as logical reasoning (Houde & Tzourio-Mazoyer, 2003). Similarly, while the amygdala has often been described as a “fear center,” its function is much more complex: it modulates emotional information, reacts to unexpected stimuli, and is heavily recruited in visual attention, a “cognitive” function. It is therefore wrong to consider “affective” areas as small emotional agents that are happy or sad and make us happy or sad. Instead of employing folk-psychological categories, their functional contribution should be understood in computational terms: how they process signals, how information is routed between areas, and how they affect behavior and thought.

Second, even if there are affective areas, they are always complemented or supplemented by “cognitive” ones: the dorsolateral prefrontal cortex (DLPFC), for instance (involved in cognitive control and goal maintenance), is recruited in almost all decision-making tasks, and has been shown to be involved in norm-compliant behavior (Spitzer et al., 2007) and purchasing decisions. In the ultimatum game, besides the anterior insula, two other areas are recruited: the DLPFC and the anterior cingulate cortex (ACC), involved in cognitive conflict and emotional modulation. Explanations of ultimatum decisions spell out neural information-processing mechanisms, not “emotions”.

Check, for instance, the neural circuitry involved in cognitive control: you might think it involves only prefrontal areas, but as it turns out, "cognitive" and "affective" areas are both required for this competence:


[Legend: This extended control circuit contains traditional control areas, such as the anterior cingulate cortex (ACC) and the lateral prefrontal cortex (LPFC), in addition to other areas commonly linked to affect (amygdala) and motivation (nucleus accumbens). Diffuse, modulatory effects are shown in green and originate from dopamine-rich neurons from the ventral tegmental area (VTA). The circuit highlights the cognitive–affective nature of executive control, in contrast to more purely cognitive-control proposals. Several connections are not shown to simplify the diagram. Line thickness indicates approximate connection strength. OFC, orbitofrontal cortex. From Pessoa, 2008]

As Michael Anderson pointed out in a series of papers (2007a and 2007b, among others), there is a many-to-many mapping between brain areas and cognitive functions. So the concept of "emotional areas" should be banned from the neuroeconomics vocabulary before it is too late.

Third, a point that has been neglected by much research on decision-making: neural activation of a particular brain area is explanatory only with regard to its contribution to understanding personal-level properties. If we learn that the anterior insula reacts to unfair offers, we are not singling out the function of this area, but explaining how the person’s decision is responsive to a particular type of valuation. The basic unit of analysis of decisions is not neurons, but judgments. We may study sub-judgmental (e.g., neural) mechanisms and how they contribute to judgment formation; or we may study supra-judgmental mechanisms (e.g., reasoning) and how they articulate judgments. Emotions, as long as they are understood as affective reactions, are not judgments: they either contribute to judgments or are construed as judgments. In both cases, the category “emotions” seems superfluous for explaining the nature of the judgment itself. Thus, if judgments are the basic unit of analysis, brain areas are explanatory insofar as they make explicit how individuals arrive at a certain judgment, how it is implemented, and so on: what kind of neural computations are carried out? Take, for example, cooperation in the prisoner's dilemma. Imaging studies show that when high-psychopathy and low-psychopathy subjects choose to cooperate, different neural activity is observed: the former use more prefrontal areas than the latter, indicating that cooperation is more effortful (see this post). This is instructive: we learn something about the information-processing, not about "emotions" or "reason".

In the end, we want to know how these mechanisms fix beliefs, desires and intentions: neuroeconomics can be informative as long as it aims at deciphering human natural rationality.


References
  • Anderson, M. L. (2007a). Evolution of Cognitive Function Via Redeployment of Brain Areas. Neuroscientist, 13(1), 13-21.
  • Anderson, M. L. (2007b). The Massive Redeployment Hypothesis and the Functional Topography of the Brain. Philosophical Psychology, 20(2), 143 - 174.
  • Fehr, E., & Camerer, C. F. (2007). Social Neuroeconomics: The Neural Circuitry of Social Preferences. Trends in Cognitive Sciences.
  • Harris, S., Sheth, S. A., & Cohen, M. S. (2007). Functional Neuroimaging of Belief, Disbelief, and Uncertainty. Annals of Neurology.
  • Houde, O., & Tzourio-Mazoyer, N. (2003). Neural Foundations of Logical and Mathematical Cognition. Nature Reviews Neuroscience, 4(6), 507-514.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
  • Spitzer, M., Fischbacher, U., Herrnberger, B., Gron, G., & Fehr, E. (2007). The Neural Signature of Social Norm Compliance. Neuron, 56(1), 185-196.




10/11/07

Resources on law, neuroscience, and "neurolaw"

From http://lawandneuroscienceproject.org/resources via Neuroethics and Law Blog.

Readings on Law and Neuroscience

Bibliography on Law and Biology

Blog on Neuroethics and Law



10/4/07

Social Neuroeconomics: A Review by Fehr and Camerer

Ernst Fehr and Colin Camerer, two prominent experimental/behavioral/neuro-economists, published a new paper in Trends in Cognitive Sciences on social neuroeconomics. Discussing many studies (this paper is a state-of-the-art review), they conclude that

social reward activates circuitry that overlaps, to a surprising degree, with circuitry that anticipates and represents other types of rewards. These studies reinforce the idea that social preferences for donating money, rejecting unfair offers, trusting others and punishing those who violate norms, are genuine expressions of preference

The authors illustrate this overlap with the following figure: social and non-social rewards elicit similar neural activation (see references for all cited studies at the end of this post):



Figure 1. (from Fehr and Camerer, forthcoming). Parallelism of rewards for oneself and for others: Brain areas commonly activated in (a) nine studies of social reward (..), and (b) a sample of six studies of learning and anticipated own monetary reward (..).

So basically, we have enough evidence to justify a model of rational agents as entertaining social preferences. As I argue in a forthcoming paper (let me know if you want a copy), these findings will have normative impact, especially in game-theoretic situations: if a rational agent anticipates other agents' strategies, she had better anticipate that they have social preferences. For instance, one might argue that in the Ultimatum Game, it is rational to make a fair offer.



Reference:
  • Fehr, E. and Camerer, C.F., Social neuroeconomics: the neural circuitry of social preferences, Trends Cogn. Sci. (2007), doi:10.1016/j.tics.2007.09.002


Studies of social reward cited in Fig. 1:

  • [26] J. Rilling et al., A neural basis for social cooperation, Neuron 35 (2002), pp. 395–405.
  • [27] J.K. Rilling et al., Opposing BOLD responses to reciprocated and unreciprocated altruism in putative reward pathways, Neuroreport 15 (2004), pp. 2539–2543.
  • [28] D.J. de Quervain et al., The neural basis of altruistic punishment, Science 305 (2004), pp. 1254–1258.
  • [29] T. Singer et al., Empathic neural responses are modulated by the perceived fairness of others, Nature 439 (2006), pp. 466–469
  • [30] J. Moll et al., Human fronto-mesolimbic networks guide decisions about charitable donation, Proc. Natl. Acad. Sci. U. S. A. 103 (2006), pp. 15623–15628.
  • [31] W.T. Harbaugh et al., Neural responses to taxation and voluntary giving reveal motives for charitable donations, Science 316 (2007), pp. 1622–1625.
  • [32] Tabibnia, G. et al. The sunny side of fairness – preference for fairness activates reward circuitry. Psychol. Sci. (in press).
  • [55] T. Singer et al., Brain responses to the acquired moral status of faces, Neuron 41 (2004), pp. 653–662.
  • [56] B. King-Casas et al., Getting to know you: reputation and trust in a two-person economic exchange, Science 308 (2005), pp. 78–83.

Studies of learning and anticipated own monetary reward cited in Fig. 1:

  • [33] S.M. Tom et al., The neural basis of loss aversion in decision-making under risk, Science 315 (2007), pp. 515–518.
  • [61] M. Bhatt and C.F. Camerer, Self-referential thinking and equilibrium as states of mind in games: fMRI evidence, Games Econ. Behav. 52 (2005), pp. 424–459.
  • [73] P.K. Preuschoff et al., Neural differentiation of expected reward and risk in human subcortical structures, Neuron 51 (2006), pp. 381–390.
  • [74] J. O’Doherty et al., Dissociable roles of ventral and dorsal striatum in instrumental conditioning, Science 304 (2004), pp. 452–454.
  • [75] E.M. Tricomi et al., Modulation of caudate activity by action contingency, Neuron 41 (2004), pp. 281–292.



9/25/07

My brain has a politics of its own: neuropolitic musing on values and signal detection

Political psychology (just like politicians and voters) identifies two species of political values: left/right, or liberalism/conservatism. Reviewing many studies, Thornhill and Fincher (2007) summarize the cognitive styles of both ideologies:

Liberals tend to be: against, skeptical of, or cynical about familiar and traditional ideology; open to new experiences; individualistic and uncompromising, pursuing a place in the world on personal terms; private; disobedient, even rebellious rulebreakers; sensation seekers and pleasure seekers, including in the frequency and diversity of sexual experiences; socially and economically egalitarian; and risk prone; furthermore, they value diversity, imagination, intellectualism, logic, and scientific progress. Conservatives exhibit the reverse in all these domains. Moreover, the felt need for order, structure, closure, family and national security, salvation, sexual restraint, and self-control, in general, as well as the effort devoted to avoidance of change, novelty, unpredictability, ambiguity, and complexity, is a well-established characteristic of conservatives. (Thornhill & Fincher, 2007).
In their paper, Thornhill and Fincher present an evolutionary hypothesis to explain the liberal/conservative ideologies: both originate from an innate adaptation for attachment, parametrized by early childhood experiences. In another but related domain, Lakoff (2002) argued that liberals and conservatives differ in their metaphors: both view the nation or the State as a child, but they hold different perspectives on how to raise it: the Strict Father model (conservatives) or the Nurturant Parent model (liberals; see an extensive description here). The first one

posits a traditional nuclear family, with the father having primary responsibility for supporting and protecting the family as well as the authority to set overall policy, to set strict rules for the behavior of children, and to enforce the rules [where] [s]elf-discipline, self-reliance, and respect for legitimate authority are the crucial things that children must learn.


while in the second:

Love, empathy, and nurturance are primary, and children become responsible, self-disciplined and self-reliant through being cared for, respected, and caring for others, both in their family and in their community.
In the October issue of Nature Neuroscience, a new research paper by Amodio et al. studies the "neurocognitive correlates of liberalism and conservatism". The study is more modest than the title suggests. Subjects performed a Go/No Go task (click when you see a "W"; don't click when it's an "M"). The experimenters habituated the subjects to the Go stimuli; on a few occasions, they were presented with the No Go stimulus. Since they had gotten used to the Go stimuli, the presentation of a No Go creates a cognitive conflict: balancing fast/automatic against slow/deliberative processing. You have to inhibit a habit in order to focus on the goal when the habit goes in the wrong direction. The idea was to study the correlation between political values and conflict monitoring. The latter is partly mediated by the anterior cingulate cortex, a brain area widely studied in neuroeconomics and decision neuroscience (see this post). EEG recordings indicated that liberals' neural response to conflict was stronger when response inhibition was required. Hence liberalism is associated with a greater sensitivity to response conflict, while conservatism is associated with a greater persistence in the habitual pattern. These results, say the authors, are

consistent with the view that political orientation, in part, reflects individual differences in the functioning of a general mechanism related to cognitive control and self-regulation
Thus valuing tradition vs. novelty, or security vs. novelty, might have sensorimotor counterparts, or symptoms. Of course, this does not mean that the neural basis of conservatism has been identified, or the "liberal area", etc., but this study suggests how micro-tasks may help to elucidate, as the authors say in the closing sentence, "how abstract, seemingly ineffable constructs, such as ideology, are reflected in the human brain."

What this study--together with other data on conservatives and liberals--might justify is the following hypothesis: what if conservatives and liberals are natural kinds? That is, "homeostatic property clusters" (see Boyd 1991, 1999): categories of "things" formed by nature (like water, mammals, etc.), not by definition (like supralunar objects, non-cats, grue emeralds, etc.)? Things that share surface properties (political beliefs and behavior) whose co-occurrence can be explained by underlying mechanisms (neural processing of conflict monitoring)? Maybe our evolution, as social animals, required the interplay of tradition-oriented and novelty-oriented individuals, risk-prone and risk-averse agents. But why, in the first place, did evolution not select one type over the other? Here is another completely armchair hypothesis: in order to distribute, across the social body, the signal detection problem.

What kind of error would you rather make: a false positive (you identify a signal but it's only noise) or a false negative (you think it's noise but it's a signal)? A miss or a false alarm? That is the kind of problem modeled by signal detection theory (SDT): since there is always some noise and you are trying to detect signal, you cannot know in advance, under radical uncertainty, what kind of policy you should stick to (risk-averse or risk-prone). "Signal" and "noise" are generic information-theoretic terms that may apply to any situation where an agent tries to determine whether a stimulus is present:




It is rather ironic that signal detection theorists employ the terms liberal* and conservative* (the "*" means that I am talking about SDT, not politics) to refer to different biases or criteria in signal detection. A liberal* bias is more likely to set off a positive response (increasing the probability of a false positive), whereas a conservative* bias is more likely to set off a negative response (increasing the probability of a false negative). The big problem in life is that in certain domains conservatism* pays, while in others liberalism* does (see Proust 2006): when identifying danger, a false negative is more expensive (better safe than sorry), whereas when looking for food a false positive can be more expensive (better satiated than exhausted). So no fixed criterion is adaptive everywhere; but how do you adjust the criterion properly? If you are an individual agent, you must alternate between liberal* and conservative* criteria based on your knowledge. But if you are part of a group, liberal* and conservative* biases may be distributed: certain individuals might be more liberal* (let's send them to stand and keep watch) and others more conservative* (let's send them foraging). Collectively, it could be a good solution (if it is enforced by norms of cooperation) to perpetual uncertainty and danger. So if our species evolved with a distribution of signal detection criteria, then we should have evolved different cognitive styles and personality traits that deal differently with uncertainty: those who favor habits, traditions and security, and the others. If liberal* and conservative* criteria are applied to other domains such as the family (an institution that existed before the State), you may end up with the Strict Father model and the Nurturant Parent model; when these models are applied to political decision-making, you may end up with liberals/conservatives (no "*"). That would give a new meaning to the idea that we are, by nature, political animals.
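The liberal*/conservative* trade-off can be made concrete with a small simulation. This is a minimal sketch assuming the standard equal-variance Gaussian SDT model; the d' value, criterion settings and trial count are illustrative choices, not from any study. The point is simply that lowering the criterion (liberal*) raises both the hit rate and the false-alarm rate, while raising it (conservative*) does the reverse:

```python
import random

def simulate(d_prime, criterion, n_trials=100_000, seed=1):
    """Equal-variance Gaussian SDT observer: noise ~ N(0,1), signal ~ N(d',1).
    The observer says "signal" whenever the observation exceeds the criterion."""
    rng = random.Random(seed)
    hits = misses = false_alarms = correct_rejections = 0
    for _ in range(n_trials):
        signal_present = rng.random() < 0.5  # half the trials contain a signal
        observation = rng.gauss(d_prime if signal_present else 0.0, 1.0)
        said_signal = observation > criterion
        if signal_present and said_signal:
            hits += 1
        elif signal_present:
            misses += 1
        elif said_signal:
            false_alarms += 1
        else:
            correct_rejections += 1
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate, false_alarm_rate

# Same perceptual sensitivity (d' = 1), different response policies:
liberal = simulate(d_prime=1.0, criterion=0.0)       # low criterion: says "yes" easily
conservative = simulate(d_prime=1.0, criterion=1.0)  # high criterion: says "yes" reluctantly
```

With these numbers the liberal* observer catches more signals but also produces more false alarms than the conservative* one, which is exactly the watchman/forager division of labor sketched above.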






9/22/07

Strong reciprocity, altruism and egoism

Proponents of the Strong Reciprocity Hypothesis (i.e., Bowles, Gintis, Boyd, Fehr, Henrich, etc.; I will call them “the Collective”) claim that human beings are strong reciprocators: they are willing to sacrifice resources in order to reward fair behavior and punish unfair behavior, even if there is no direct or future reward. Thus we are, according to the Collective, innately endowed with pro-social preferences and an aversion to inequity. Those who advocate strong reciprocity take it to be a ‘genuine’ altruistic force, not explained by other motives. Strong reciprocity is here contrasted with weaker forms of reciprocity, such as cooperating with someone because of genetic relatedness (kinship), because one follows a tit-for-tat pattern (direct reciprocity), wants to establish a good reputation (indirect reciprocity) or displays signs of power or wealth (costly signaling). Thus our species is made, ceteris paribus, of altruistic individuals who tend to cooperate with cooperators and punish defectors, even at a cost. Behavioral economics has shown how willing people are to cooperate in games such as the prisoner’s dilemma, the ultimatum game or the trust game: they do not cheat in the first, offer fair splits in the second, and transfer money in the third.

Could it be possible, however, that this so-called altruism is instrumental? I don’t think it always is, but some cases require closer scrutiny. For instance, in the Ultimatum Game there is a perfectly rational and egoistic reason to make a fair offer, such as a 50-50 split: it is, from one's point of view, the best solution to the trade-off between making a profit and proposing a split that the other player will accept. If you propose more, you lose more money; if you propose less, you risk a rejection. In non-market-integrated cultures where a 20-80 split is not seen as unfair, proposers routinely offer such splits, because they know they will be accepted.
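The proposer's trade-off can be illustrated with a toy calculation. The acceptance probabilities below are hypothetical illustration values, not experimental data; the point is only that, for a purely self-interested proposer facing responders who reject stingy offers, expected payoff peaks at a near-fair offer:

```python
# Toy 10-unit ultimatum game from the proposer's point of view.
# Hypothetical acceptance probabilities: low offers are often rejected,
# near-fair offers are almost always accepted.
accept_prob = {1: 0.10, 2: 0.30, 3: 0.60, 4: 0.85, 5: 0.95}

def expected_payoff(offer, pie=10):
    # A rejected offer leaves the proposer with nothing,
    # so the expected payoff is P(accept) times what the proposer keeps.
    return accept_prob[offer] * (pie - offer)

best_offer = max(accept_prob, key=expected_payoff)
```

If responders in another culture happily accepted a 20-80 split, the `accept_prob` table would shift and a lower offer would maximize the same self-interested calculation, which is the point made above about non-market-integrated cultures.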

It can also be instrumental in a more basic sense, for instance in contributing to the propagation of our genes. Madsen et al. (2007) showed that individuals behave more altruistically toward their own kin when there is a significant genuine cost (such as pain), an attitude also mirrored in questionnaire studies (Stewart-Williams, 2007): when the cost of helping increases, subjects are more ready to help siblings than friends. Finally, other studies showed that facial resemblance enhances trust (DeBruine, 2002). In each case, we see a mechanism whose function is to negotiate our investment in relationships so as to promote the copies of our genes housed in people who are our kin, look like our kin, or could help us expand our kin. For instance, simply viewing lingerie or pictures of sexy women leads men to behave more fairly in the ultimatum game (Van den Bergh & Dewitte, 2006).

Many of these so-called altruistic behaviors can be explained simply by the operation of hyperactive agency detectors and a bias toward fearing other people’s judgment. When they are not being watched, or do not feel watched, people behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game where the responder has to accept the offer, subjects consistently make lower offers than in the ultimatum game (Bolton et al., 1998). Offers are even lower in the dictator game when donation is fully anonymous (Hoffman et al., 1994). According to the present framework, this is because there is no advantage in being fair.

When subjects feel watched, or think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley & Fessler, 2005), and people contribute three times more to an honesty box for coffee when a pair of eyes is displayed than when a picture of flowers is (Bateson et al., 2006). The mere mention of ghosts enhances honest behavior in a competitive task (Bering et al., 2005), while priming subjects with the God concept increases giving in the anonymous dictator game (Shariff & Norenzayan, in press).

These reflections also apply to altruistic punishment. First, it is enhanced by an audience: Kurzban et al. (2007) showed that with an audience of a dozen participants, punishment expenditure tripled. In the trust game, players apply learned social rules and trust-building routines, but they hate it when cheaters enjoy what they themselves refrain from enjoying. Thus it feels good to reset the equilibrium. Again, apparent altruism is instrumental in personal satisfaction, at least on some occasions.

Hardy and Van Vugt, in their theory of competitive altruism, suggest that

individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable (Hardy & Van Vugt, 2006)

Maybe agents are attempting to maximize a complex hedonic utility function, where the rewards and the losses can be monetary, emotional or social. A possible alternative approach is what I call ‘methodological hedonism’: let’s assume, at least for the purpose of identifying cognitive mechanisms, that the brain, when functioning normally, tries to maximize hedonic feelings, even in moral behavior. We use feelings to anticipate feelings, in order to steer our behavior toward a maximization of positive feelings and a minimization of negative ones. The ‘hot logic’ of emotions is more realistic than the cold logic of traditional game theory, but it still preserves the idea of utility maximization (although “value” would be more appropriate). In this framework, altruistic behavior is possible, but it need not rely on altruistic cognition. Cognitive mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones. The initial hedonism is gradually modulated by social norms, through which agents learn how to maximize their utility given the norms. Luckily, however, biological and cultural evolution favored patterns of self-interest that promote social order to a certain extent: institutions, social norms, routines and cultures tend to structure our behavior morally. Thus understanding morality may amount to understanding how individuals’ egoism is modulated by social processes. There might be no need to posit an innate Strong Reciprocity. Or at least it is worth exploring other avenues!
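The 'methodological hedonism' proposal can be sketched as a toy value function. All weights and payoff numbers here are hypothetical illustrations, not measurements: the agent sums monetary, emotional and social payoffs, and a norm weight models how far social norms have modulated the initial hedonism.

```python
def hedonic_value(monetary, emotional, social, norm_weight=0.5):
    """Toy hedonic utility: a weighted mix of the raw monetary payoff and
    the emotional/social payoffs. norm_weight = 0 is a purely selfish agent;
    higher values model internalized social norms."""
    selfish_component = monetary
    social_component = emotional + social
    return (1 - norm_weight) * selfish_component + norm_weight * social_component

# Hypothetical payoffs: defecting pays more money but feels bad (guilt)
# and costs reputation; cooperating pays less but feels good.
defect = hedonic_value(monetary=10, emotional=-4, social=-8)
cooperate = hedonic_value(monetary=6, emotional=3, social=4)
```

With these illustrative numbers a norm-sensitive agent cooperates, while the same function with `norm_weight=0` reduces to raw monetary maximization and would defect, which is the modulation-of-egoism story in the paragraph above.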


Update:

I forgot to mention a thorough presentation and excellent criticism of Strong Reciprocity:

Important papers from the Collective are:
  • Bowles, S., & Gintis, H. (2004). The Evolution of Strong Reciprocity: Cooperation in Heterogeneous Populations. Theoretical Population Biology, 65(1), 17-28.
  • Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong Reciprocity, Human Cooperation, and the Enforcement of Social Norms. Human Nature, 13(1), 1-25.
  • Fehr, E., & Rockenbach, B. (2004). Human Altruism: Economic, Neural, and Evolutionary Perspectives. Curr Opin Neurobiol, 14(6), 784-790.
  • Gintis, H. (2000). Strong Reciprocity and Human Sociality. Journal of Theoretical Biology, 206(2), 169-179.


References


  • Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of Being Watched Enhance Cooperation in a Real-World Setting. Biology Letters, 12, 412-414.
  • Bering, J. M., McLeod, K., & Shackelford, T. K. (2005). Reasoning About Dead Agents Reveals Possible Adaptive Trends. Human Nature, 16(4), 360-381.
  • Bolton, G. E., Katok, E., & Zwick, R. (1998). Dictator Game Giving: Rules of Fairness Versus Acts of Kindness International Journal of Game Theory, 27 269-299
  • DeBruine, L. M. (2002). Facial Resemblance Enhances Trust. Proc Biol Sci, 269(1498), 1307-1312.
  • Haley, K., & Fessler, D. (2005). Nobody’s Watching? Subtle Cues Affect Generosity in an Anonymous Economic Game. Evolution and Human Behavior, 26(3), 245-256.
  • Hardy, C. L., & Van Vugt, M. (2006). Nice Guys Finish First: The Competitive Altruism Hypothesis. Pers Soc Psychol Bull, 32(10), 1402-1413.
  • Hoffman, E., Mc Cabe, K., Shachat, K., & Smith, V. (1994). Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior, 7, 346–380.
  • Kurzban, R., DeScioli, P., & O'Brien, E. (2007). Audience Effects on Moralistic Punishment. Evolution and Human Behavior, 28(2), 75-84.
  • Madsen, E. A., Tunney, R. J., Fieldman, G., Plotkin, H. C., Dunbar, R. I. M., Richardson, J.-M., & McFarland, D. (2007). Kinship and Altruism: A Cross-Cultural Experimental Study. British Journal of Psychology, 98, 339-359.
  • Shariff, A. F., & Norenzayan, A. (in press). God Is Watching You: Supernatural Agent Concepts Increase Prosocial Behavior in an Anonymous Economic Game. Psychological Science.
  • Stewart-Williams, S. (2007). Altruism among Kin Vs. Nonkin: Effects of Cost of Help and Reciprocal Exchange. Evolution and Human Behavior, 28(3), 193-198.
  • Van den Bergh, B., & Dewitte, S. (2006). Digit Ratio (2d:4d) Moderates the Impact of Sexual Cues on Men's Decisions in Ultimatum Games. Proc Biol Sci, 273(1597), 2091-2095.




9/21/07

Neuroeconomics, folk-psychology, and eliminativism



conventional wisdom has long modeled our internal cognitive processes, quite wrongly, as just an inner version of the public arguments and justifications that we learn, as children, to construct and evaluate in the social space of the dinner table and the marketplace. Those social activities are of vital importance to our collective commerce, both social and intellectual, but they are an evolutionary novelty, unreflected in the brain’s basic modes of decision-making
(Churchland, 2006, p. 31).


The folk-psychological model of rationality construes rational decision-making as the product of practical reasoning by which an agent infers, from her beliefs and desires, the right action to perform. True, when we are asked to explain or predict actions, our intuitions lead us to describe them as the product of intentional states. In a series of studies, Malle and Knobe (1997, 2001) showed that folk psychology is a language game where beliefs, desires and intentions are the main players. But using the intentional idiom does not mean that it picks out the real causes of action. This is where realist, instrumentalist and eliminativist accounts conflict. A realist account of beliefs and desires takes them to be real causal entities, an instrumentalist account treats them as useful fictions, and an eliminativist account holds that they are embedded in a faulty theory of mental functioning that should be eliminated (see Paul M. Churchland & Churchland, 1998; Dennett, 1987; Fodor, 1981). Can neuroeconomics shed light on this traditional debate in philosophy and cognitive science?

Neuroeconomics, I suggest, supports an eliminativist approach to cognition. Just as contemporary chemistry does not explain combustion by the release of phlogiston (a substance supposed to exist in combustible bodies), cognitive science should stop explaining actions as the product of beliefs and desires. Behavioral regularities and neural mechanisms are sufficient to explain decisions. When subjects evaluate whether or not they would buy a product, and whether or not the price seems justified, how informative is it to cite propositional attitudes as causes? The real entities involved in decision-making are neural mechanisms involved in hedonic feelings, cognitive control, emotional modulation, conflict monitoring, planning, etc. Preferences, utility functions or practical reasoning, for instance, can explain purchasing, but they do not posit entities that can enter the “causal nexus” (Salmon, 1984). Neuroeconomics explains purchasing behavior not as an inference from beliefs and desires to action, but as a trade-off, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of paying (elicited in the insula). Prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase (Knutson et al., 2007). Hence the explanation of purchasing cites causes (brain areas) that explain the purchasing behavior as the product of a higher activation in prefrontal areas, and that justify the decision to purchase: the agent had a stronger incentive to buy. A fully mechanistic account would, of course, detail the algorithmic process performed by each area.
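The trade-off reading of the purchasing data can be illustrated with a toy logistic model. The weights below are made up for illustration (they are not the fitted coefficients of the Knutson et al. study): reward-anticipation activity pushes toward buying, price-pain activity pushes away, and the purchase decision is a competition between the two.

```python
import math

def purchase_probability(nacc, insula, w_nacc=2.0, w_insula=-2.0, bias=0.0):
    """Toy logistic trade-off: nucleus accumbens activity (nacc) raises the
    probability of buying, insula activity lowers it. Weights are illustrative."""
    score = w_nacc * nacc + w_insula * insula + bias
    return 1 / (1 + math.exp(-score))

# Hypothetical activation levels (arbitrary units):
desirable_cheap = purchase_probability(nacc=1.0, insula=0.2)  # pleasure dominates
overpriced = purchase_probability(nacc=0.5, insula=1.2)       # price pain dominates
```

The explanatory work is done by the relative strength of the two signals, not by any belief-desire inference, which is the eliminativist moral of the paragraph above.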
The belief-desire framework implicitly supposes that the causes of an action are those that an agent would verbally express when asked to justify it. But on what grounds can this assumption be justified?

Psychological and neural studies suggest, rather, a dissociation between the mechanisms that lead to actions and the mechanisms by which we explain them. Since Nisbett and Wilson's (1977) seminal studies, research in psychology has shown that the very act of explaining the intentional causes of our actions is a reconstructive process that might be faulty. Subjects give numerous reasons as to why they prefer one pair of socks (or other objects) to another, but they all prefer the last one on the right. The real explanation of their preference is a position effect, or right-hand bias. For some reason, subjects pick the right-hand pair and, post hoc, generate an explanation for this preference, a phenomenon widely observed. For instance, when subjects tasted samples of Pepsi and Coke with and without the brands' labels, they reported different preferences (McClure et al., 2004). Without labels, subjects evaluated both drinks similarly. When the drinks were labeled, subjects reported a stronger preference for Coke, and neuroimaging data mirrored this branding effect. Sensory information (taste) and cultural information (brand) are associated with different areas that interact so as to bias preferences. Without the label, the evaluation of the drink relies solely on sensory information. Subjects may justify their preference for one beverage over another with many diverse arguments, but what really shapes their preference is the brand's label. The conscious narratives we produce when rationalizing our actions are not "direct pipeline[s] to nonconscious mental processes" (Wilson & Dunn, 2004, p. 507) but approximate reconstructions. When our thoughts occur before an action, are consistent with it and appear to be its only cause, we infer that these thoughts caused the action, and rule out other internal or external causes (Wegner, 2002).
But the fact that we rely on the belief-desire framework to explain our own and others' actions as the product of intentional states does not constitute an argument for considering these states satisfying causal explanations of action.

The belief-desire framework might be a useful conceptual scheme for fast and frugal explanations, but this does not make folk-psychological constructs suitable for scientific explanation. In the same vein, if folk biology were the sole foundation of biology, whales would still be categorized as fish. The biological world is not explained by our (faulty and biased) folk biology, but by making explicit the mechanisms of natural selection, reproduction, cellular growth, etc. There is no reason to believe that our folk psychology is a better description of mental mechanisms. Beliefs, desires and intentions are folk-psychological constructs that have no counterpart in neuroscience. Motor control and action planning, for instance, are explained by different kinds of representations, such as forward and inverse models, not by propositional attitudes (Kawato & Wolpert, 1998; Wolpert & Kawato, 1998). Consequently, the fact that we rely on folk psychology to explain actions does not constitute an argument for considering that this naĂŻve theory provides reliable explanations of actions. Saying that the sun rises every morning yields good predictions, and it could even explain why there is more heat and light at noon, but the predictive success of the sun-rising framework does not justify its use as a scientific theory.

As many philosophers of science have suggested, a genuine explanation is mechanistic: it consists in breaking a system into parts and processes, and explaining how these parts and processes cause the system to behave the way it does (Bechtel & Abrahamsen, 2005; Craver, 2001; Machamer et al., 2000). Folk psychology may save the phenomena, but it does not identify causal parts and processes. More generally, the problem with the belief-desire framework is that it is a description of our attitude toward the things we call "agents", not a description of what constitutes the true nature of agents. It thus conflates the map and the territory. Moreover, conceptual advances are made when objects are described and classified according to their objective properties. A chemical theory that classified elements according to their propensity to quench thirst would be nonsense (although it could be useful in other contexts). At best, the belief-desire framework could be considered an Everyday Handbook of Intentional Language.

References

  • Bechtel, W., & Abrahamsen, A. (2005). Explanation: A Mechanist Alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421-441.
  • Churchland, P. M. (2006). Into the Brain: Where Philosophy Should Go from Here. Topoi, 25(1), 29-32.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary : Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Craver, C. F. (2001). Role Functions, Mechanisms, and Hierarchy. Philosophy of Science, 68, 53-74.
  • Dennett, D. C. (1987). The Intentional Stance. Cambridge, Mass.: MIT Press.
  • Fodor, J. A. (1981). Representations : Philosophical Essays on the Foundations of Cognitive Science (1st MIT Press ed.). Cambridge, Mass.: MIT Press.
  • Kawato, M., & Wolpert, D. M. (1998). Internal Models for Motor Control. Novartis Found Symp, 218, 291-304; discussion 304-297.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
  • Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking About Mechanisms. Philosophy of Science, 67, 1-24.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Malle, B. F., & Knobe, J. (2001). The Distinction between Desire and Intention: A Folk-Conceptual Analysis. In B. F. M. L. J. Moses & D. A. Baldwin (Eds.), Intentions and Intentionality: Foundations of Social Cognition (pp. 45-67). Cambridge, MA: MIT Press.
  • McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural Correlates of Behavioral Preference for Culturally Familiar Drinks. Neuron, 44(2), 379-387.
  • Nisbett, R. E., & Wilson, T. D. (1977). Telling More Than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84, 231-259.
  • Salmon, W. C. (1984). Scientific Explanation and the Causal Structure of the World. Princeton, N.J.: Princeton University Press.
  • Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, Mass.: MIT Press.
  • Wilson, T. D., & Dunn, E. W. (2004). Self-Knowledge: Its Limits, Value, and Potential for Improvement. Annual Review of Psychology, 55(1), 493-518.
  • Wolpert, D. M., & Kawato, M. (1998). Multiple Paired Forward and Inverse Models for Motor Control. Neural Networks, 11(7-8), 1317.



9/13/07

Philosophy of neuroscience: two recent papers

Sometimes, philosophers have relevant things to say about science. Here are two papers by two philosophers of (neuro)science that I warmly recommend. The first, by Michael Anderson, proposes a methodology for understanding the contribution of different brain areas to cognitive functions (and for making sense of all those studies that say "area X does Z"):

Anderson, M.L. Massive redeployment, exaptation, and the functional integration of cognitive operations. Synthese, forthcoming.

The massive redeployment hypothesis (MRH) is a theory about the functional topography of the human brain, offering a middle course between strict localization on the one hand, and holism on the other. Central to MRH is the claim that cognitive evolution proceeded in a way analogous to component reuse in software engineering, whereby existing components—originally developed to serve some specific purpose—were used for new purposes and combined to support new capacities, without disrupting their participation in existing programs. If the evolution of cognition was indeed driven by such exaptation, then we should be able to make some specific empirical predictions regarding the resulting functional topography of the brain. This essay discusses three such predictions, and some of the evidence supporting them. Then, using this account as a background, the essay considers the implications of these findings for an account of the functional integration of cognitive operations. For instance, MRH suggests that in order to determine the functional role of a given brain area it is necessary to consider its participation across multiple task categories, and not just focus on one, as has been the typical practice in cognitive neuroscience. This change of methodology will motivate (even perhaps necessitate) the development of a new, domain-neutral vocabulary for characterizing the contribution of individual brain areas to larger functional complexes, and direct particular attention to the question of how these various area roles are integrated and coordinated to result in the observed cognitive effect. Finally, the details of the mix of cognitive functions a given area supports should tell us something interesting not just about the likely computational role of that area, but about the nature of and relations between the cognitive functions themselves. 
For instance, growing evidence of the role of “motor” areas like M1, SMA and PMC in language processing, and of “language” areas like Broca’s area in motor control, offers the possibility for significantly reconceptualizing the nature both of language and of motor control.
In the other paper, Chris Eliasmith presents the Neural Engineering Framework (NEF), a simulation methodology.

Eliasmith, C. How to build a brain: from function to implementation. Synthese, forthcoming.


To have a fully integrated understanding of neurobiological systems, we must address two fundamental questions: 1. What do brains do (what is their function)? and 2. How do brains do whatever it is that they do (how is that function implemented)? I begin by arguing that these questions are necessarily inter-related. Thus, addressing one without consideration of an answer to the other, as is often done, is a mistake. I then describe what I take to be the best available approach to addressing both questions. Specifically, to address 2, I adopt the Neural Engineering Framework (NEF) of Eliasmith & Anderson [Neural engineering: Computation representation and dynamics in neurobiological systems. Cambridge, MA: MIT Press, 2003] which identifies implementational principles for neural models. To address 1, I suggest that adopting statistical modeling methods for perception and action will be functionally sufficient for capturing biological behavior. I show how these two answers will be mutually constraining, since the process of model selection for the statistical method in this approach can be informed by known anatomical and physiological properties of the brain, captured by the NEF. Similarly, the application of the NEF must be informed by functional hypotheses, captured by the statistical modeling approach.

Together, these two papers provide methodologies that contribute to a better understanding of the brain, its functions and its modelling. Check also their homepages for great papers on the philosophy of neuroscience.






Cognitive Control and Dopamine: A Very Brief Intro

In certain situations, learned routines are not enough. When situations are too uncommon, dangerous or difficult, or when they require overcoming a habitual response, decisions must be guided by representations. Acting upon an internal representation is referred to, in cognitive science, as cognitive control or executive function[1]. The agent is led by a representation of a goal and will robustly readjust her behavior in order to keep pursuing that goal. Behavior is then controlled 'top-down', not 'bottom-up'. In the Stroop task, for instance, subjects must identify the ink color of written words such as 'red', 'blue' or 'yellow' printed in non-matching colors. The written word, however, primes the subject to focus on the meaning of the word instead of the ink's color. If, for instance, the word "red" is written in yellow ink, subjects will utter "red" more readily than "yellow". There is a cognitive conflict between the semantic priming induced by the word and the instruction to focus on the ink's color. In this task, cognitive control mechanisms must give priority to the goal held in working memory (naming the ink color) over external affordances (semantic priming). An extreme lack of cognitive control is exemplified by subjects who suffer from "environmental dependency syndrome"[2]: they spontaneously do whatever their environment indicates or affords: they will sit on a chair whenever they see one, or undress and get into a bed whenever a bed is present (even if it is not in a bedroom).
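The Stroop conflict can be caricatured in a few lines. The pathway "strengths" below are hypothetical numbers chosen only so that the reading pathway is prepotent; `stroop_response` is an illustrative function, not an established model.

```python
def stroop_response(word, ink, control=0.0):
    """Minimal caricature of Stroop conflict: reading is automatic
    (fixed strength 1.0); color naming is weaker (0.6) and must be
    boosted by top-down control to win the competition."""
    word_strength = 1.0            # prepotent, bottom-up pathway
    ink_strength = 0.6 + control   # task-relevant pathway, needs control
    return word if word_strength > ink_strength else ink

stroop_response("red", "yellow", control=0.0)  # habitual answer: "red"
stroop_response("red", "yellow", control=0.8)  # controlled answer: "yellow"
```

Without control the habitual response wins; with enough top-down boost the task-relevant pathway takes over, which is the tradeoff the next paragraph locates in the prefrontal cortex.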

Cognitive control is thought to happen mostly in the prefrontal cortex (PFC),[3] an area strongly innervated by midbrain dopaminergic fibers. Prefrontal activity is associated with the maintenance and updating of cognitive representations of goals, and impairment of these areas results in executive control deficits (such as the environmental dependency syndrome). Since working memory is limited, however, agents cannot hold everything in their prefrontal areas. The brain thus faces a tradeoff between attending to environmental stimuli (which may reveal rewards or dangers, for instance) and maintaining representations of goals, viz. the tradeoff between rapid updating and active maintenance[4]. Efficiency requires brains to focus on relevant information, and again dopaminergic systems are involved in this process. According to many researchers[5], dopaminergic activity implements a 'gating' mechanism by which the PFC alternates between rapid updating and active maintenance. A higher level of dopamine in prefrontal areas signals the need to rapidly update the goals in working memory (rapid updating: 'opening the gate'), while a lower level induces resistance to afferent signals and thus a focus on represented goals (active maintenance: 'shutting the gate'). Hence dopaminergic neurons select which information (goal representation or external environment) is worth paying attention to. This mechanism is thought to be implemented by different dopamine receptors, D1 and D2 being responsive to different dopamine concentrations (D1-low, D2-high):
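The gating idea can be sketched as a toy working-memory register. The single dopamine threshold (0.5) and the `pfc_gating` function are hypothetical simplifications for illustration, not the Durstewitz/Seamans biophysical model discussed in the figures below.

```python
def pfc_gating(stimuli, dopamine, initial_goal=None):
    """Toy dopamine-gated working memory: a phasic dopamine burst
    (level above a hypothetical 0.5 threshold, D2-dominated state)
    opens the gate and updates the goal; tonic dopamine (D1-dominated
    state) keeps the gate shut, so the current goal is maintained."""
    goal = initial_goal
    history = []
    for stim, da in zip(stimuli, dopamine):
        if da > 0.5:        # phasic burst: gate opens, rapid updating
            goal = stim
        # otherwise: gate shut, active maintenance of the current goal
        history.append(goal)
    return history

# The goal survives a distracting input until the next dopamine burst.
trace = pfc_gating(["name-ink-color", "read-word", "new-goal"],
                   [0.9, 0.2, 0.8])
# trace == ["name-ink-color", "name-ink-color", "new-goal"]
```

The distractor "read-word" arrives with low dopamine, so the maintained goal wins; the later burst lets "new-goal" replace it.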


Fig. 1 (From O'Reilly, 2006). Dopamine-based gating mechanism that emerges from the detailed biological model of Durstewitz, Seamans, and colleagues. The opening of the gate occurs in the dopamine D2-receptor–dominated state (State 1), in which any existing active maintenance is destabilized and the system is more responsive to inputs. The closing of the gate occurs in the D1-receptor–dominated state (State 2), which stabilizes the strongest activation pattern for robust active maintenance. D2 receptors are located synaptically and require high concentrations of dopamine and are therefore activated only during phasic dopamine bursts, which thus trigger rapid updating. D1 receptors are extrasynaptic and respond to lower concentrations, so robust maintenance is the default state of the system with normal tonic levels of dopamine firing.

Here is a neurobiological description of the phenomena, with neuroanatomical details:



Fig. 2. (From O'Reilly, 2006). Dynamic gating produced by disinhibitory circuits through the basal ganglia and frontal cortex/PFC (one of multiple parallel circuits shown). (A) In the base state (no striatum activity) and when NoGo (indirect pathway) striatum neurons are firing more than Go, the SNr (substantia nigra pars reticulata) is tonically active and inhibits excitatory loops through the basal ganglia and PFC through the thalamus. This corresponds to the gate being closed, and PFC continues to robustly maintain ongoing activity (which does not match the activity pattern in the posterior cortex, as indicated). (B) When direct pathway Go neurons in striatum fire, they inhibit the SNr and thus disinhibit the excitatory loops through the thalamus and the frontal cortex, producing a gating-like modulation that triggers the update of working memory representations in prefrontal cortex. This corresponds to the gate being open.

It is interesting to note, then, that dopaminergic neurons are involved both in basic motivation and reinforcement and in more abstract operations such as cognitive control.



Notes and references
  1. (Norman & Shallice, 1980; Shallice, 1988)
  2. (Lhermitte, 1986)
  3. (Duncan, 1986; Koechlin, Ody, & Kouneiher, 2003; Miller & Cohen, 2001; O’Reilly, 2006)
  4. (O’Reilly, 2006)
  5. (Montague, Hyman, & Cohen, 2004; O'Donnell, 2003; O’Reilly, 2006)

  • Durstewitz, D., Seamans, J. K., & Sejnowski, T. J. (2000). Dopamine-Mediated Stabilization of Delay-Period Activity in a Network Model of Prefrontal Cortex. Journal of Neurophysiology, 83(3), 1733-1750.
  • Duncan, J. (1986). Disorganization of behavior after frontal lobe damage. Cognitive Neuropsychology, 3(3), 271-290.
  • Koechlin, E., Ody, C., & Kouneiher, F. (2003). The Architecture of Cognitive Control in the Human Prefrontal Cortex. Science, 302(5648), 1181-1185.
  • Lhermitte, F. (1986). Human autonomy and the frontal lobes. Part II: Patient behavior in complex and social situations: The “environmental dependency syndrome.” Annals of Neurology, 19(4), 335–343.
  • Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24(1), 167-202.
  • Montague, P. R., Hyman, S. E., & Cohen, J. D. (2004). Computational roles for dopamine in behavioural control. Nature, 431(7010), 760.
  • Norman, D. A., & Shallice, T. (1980). Attention to Action: Willed and Automatic Control of Behavior: Center for Human Information Processing, University of California, San Diego.
  • O'Donnell, P. (2003). Dopamine gating of forebrain neural ensembles. European Journal of Neuroscience, 17(3), 429-435.
  • O’Reilly, R. C. (2006). Biologically Based Computational Models of High-Level Cognition. Science, 314, 91-94.
  • Shallice, T. (1988). From neuropsychology to mental structure. Cambridge [England] ; New York: Cambridge University Press.



9/10/07

Social Cognition: A Special Issue of Science

The new edition of Science is devoted to Social Cognition. It
(...) explores the adaptive advantages of group life and the accompanying development of social skills. News articles examine clues from our primate cousins about the evolution of sophisticated social behavior and explorations of human behavior made possible by computer-generated realities. Review articles dissect the human capacity for prospection and the links between sociality and brain evolution and fitness. And related podcast segments highlight research on the social abilities of children and chimps and the value of virtual worlds to studies of social science

Four papers you don't want to miss:

Moreover, in the same edition, psychologists Dan Gilbert and Tim Wilson present a theory of prospection, the anticipation of future events (a subject important for decision-making research):

All animals can predict the hedonic consequences of events they've experienced before. But humans can predict the hedonic consequences of events they've never experienced by simulating those events in their minds. Scientists are beginning to understand how the brain simulates future events, how it uses those simulations to predict an event's hedonic consequences, and why these predictions so often go awry.



8/28/07

The Political Brain

A book review in the NYT of Drew Westen's new book, "The Political Brain".

Stop Making Sense

Published: August 26, 2007

Between 2000 and 2006, a specter haunted the community of fundamentalist Democrats. Members of this community looked around and observed their moral and intellectual superiority. They observed that their policies were better for the middle classes. And yet the middle classes did not support Democrats. They tended to vote, in large numbers, for the morally and intellectually inferior party, the one, moreover, that catered to the interests of the rich.

How could this be?

Serious thinkers set to work, and produced a long shelf of books answering this question. Their answers tended to rely on similar themes. First, Democrats lose because they are too intelligent. Their arguments are too complicated for American voters. Second, Democrats lose because they are too tolerant. They refuse to cater to racism and hatred. Finally, Democrats lose because they are not good at the dark art of politics. Republicans, though they are knuckle-dragging simpletons when it comes to policy, are devilishly clever when it comes to electioneering. They have brilliant political consultants like Lee Atwater and Karl Rove, who frame issues so fiendishly, they can fool the American people into voting against their own best interests. (READ MORE)




8/22/07

Just (don't) do it: The neural correlate of the veto process

A study published in the new edition of the Journal of Neuroscience proposes that the dorsal fronto-median cortex (dFMC) is primarily involved in the inhibition of intentional action. Subjects had to inhibit a simple decision: choosing when to execute a simple key press while observing a rotating clock hand (the design is hence analogous to Benjamin Libet's famous experiment on free will, in which he found that "subjects perceived the intention to press as occurring before a conscious experience of actually moving"). The differences are that this time researchers had fMRI data (not just EEG recordings), and that subjects had to choose and then inhibit. So here is the small piece of gray matter that inhibits your behavior:




(from Brass & Haggard, 2007)

Interestingly, their findings also suggest a top-down mechanism for action inhibition:

Cognitive models of inhibition have focused on inhibition of prepotent responses to external stimuli (Logan et al., 1984; Cohen et al., 1990). An important distinction is made between "lateral" competitive interaction between alternative representations at a single level (Rumelhart and McClelland, 1986) and inhibitory top-down control signals from hierarchically higher brain areas (Norman and Shallice, 1986). The first idea would be consistent with a general decision process being involved. If the dFMC decides between action and inhibition by a competitive interaction process, then representations corresponding to the possibilities of action and to non-action should initially both be active, leading to activation in both action trials and inhibition trials. Our finding of minimal dFMC activation in action trials (...) argues against a view of endogenous inhibition based on competitive interaction between alternatives and thus is also not consistent with the idea of the dFMC being involved in a general decision process. In contrast, our result is consistent with a specific top-down control signal gating the neural pathways linking intention to action. This view is supported by the negative correlation between dFMC activation and primary motor cortex activation.






8/1/07

Special issues of the Journal of Neuroscience on decision-making

The new edition of The Journal of Neuroscience features six (!) Mini-Review papers (max 5 pages) on decision-making:

  • Balleine, B. W., Delgado, M. R., & Hikosaka, O. (2007). The Role of the Dorsal Striatum in Reward and Decision-Making. J. Neurosci., 27(31), 8161-8165.
  • Murray, E. A., O'Doherty, J. P., & Schoenbaum, G. (2007). What We Know and Do Not Know about the Functions of the Orbitofrontal Cortex after 20 Years of Cross-Species Studies. J. Neurosci., 27(31), 8166-8169.
  • Lee, D., Rushworth, M. F. S., Walton, M. E., Watanabe, M., & Sakagami, M. (2007). Functional Specialization of the Primate Frontal Cortex during Decision Making. J. Neurosci., 27(31), 8170-8173.
  • Knutson, B., & Bossaerts, P. (2007). Neural Antecedents of Financial Decisions. J. Neurosci., 27(31), 8174-8177.
  • Corrado, G., & Doya, K. (2007). Understanding Neural Coding through the Model-Based Analysis of Decision Making. J. Neurosci., 27(31), 8178-8180.
  • Wickens, J. R., Horvitz, J. C., Costa, R. M., & Killcross, S. (2007). Dopaminergic Mechanisms in Actions and Habits. J. Neurosci., 27(31), 8181-8183.



A basic mode of behavior: a review of reinforcement learning, from a computational and biological point of view.

The journal Frontiers of Interdisciplinary Research in the Life Sciences (HFSP Publishing) has made its first issue freely available online. The journal specializes in "innovative interdisciplinary research at the interface between biology and the physical sciences." An excellent paper (complete, clear, exhaustive) by Kenji Doya presents a state-of-the-art review of reinforcement learning, both as a computational theory (the procedures) and as a biological mechanism (neural activity). It is exactly what the title announces: Reinforcement learning: Computational theory and biological mechanisms. The paper covers research in neuroscience, AI, computer science, robotics, neuroeconomics and psychology. See this nice schema of reinforcement learning in the brain:



(From the paper:) A schematic model of the implementation of reinforcement learning in the cortico-basal ganglia circuit (Doya, 1999, 2000). Based on the state representation in the cortex, the striatum learns state and action value functions. The state-value-coding striatal neurons project to dopamine neurons, which send the TD signal back to the striatum. The outputs of action-value-coding striatal neurons channel through the pallidum and the thalamus, where stochastic action selection may be realized.
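The TD signal at the heart of this schema can be written in a few lines. This is a generic tabular TD(0) sketch of the computational theory, not Doya's specific model; state numbering, learning rate and discount factor are illustrative.

```python
def td_learning(episodes, n_states, alpha=0.1, gamma=0.9):
    """Tabular TD(0) value learning: the dopamine-like prediction error
    delta = r + gamma * V(s') - V(s) updates the value of each state.
    Each episode is a list of (state, reward, next_state) transitions,
    with next_state set to None at the end of the episode."""
    V = [0.0] * n_states
    for episode in episodes:
        for s, r, s_next in episode:
            target = r + (gamma * V[s_next] if s_next is not None else 0.0)
            delta = target - V[s]   # the TD error (the 'dopamine signal')
            V[s] += alpha * delta
    return V

# Two states: state 0 always leads to state 1, which yields reward 1.
episode = [(0, 0.0, 1), (1, 1.0, None)]
V = td_learning([episode] * 200, n_states=2)
# V[1] approaches 1.0, and V[0] approaches gamma * V[1] = 0.9
```

With repetition, the reward's value propagates backward to the predictive state, exactly the kind of plausible, tractable mechanism the next paragraph praises.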

This stuff is exactly what a theory of natural rationality (and economics tout court) needs: plausible, tractable and real computational mechanisms grounded in neurobiology. As Selten once said, speaking of reinforcement learning:

a theory of bounded rationality cannot avoid this basic mode of behavior (Selten, 2001, p. 16)





7/26/07

Special issues of NYAS on biological decision-making

The May issue of the Annals of the New York Academy of Sciences is devoted to Reward and Decision Making in Corticobasal Ganglia Networks. Many big names in decision neuroscience (Berns, Knutson, Delgado, etc.) contributed.


Introduction. Current Trends in Decision Making
Bernard W Balleine, Kenji Doya, John O'Doherty, Masamichi Sakagami

Learning about Multiple Attributes of Reward in Pavlovian Conditioning
ANDREW R DELAMATER, STEPHEN OAKESHOTT

Should I Stay or Should I Go? Transformation of Time-Discounted Rewards in Orbitofrontal Cortex and Associated Brain Circuits
MATTHEW R ROESCH, DONNA J CALU, KATHRYN A BURKE, GEOFFREY SCHOENBAUM

Model-Based fMRI and Its Application to Reward Learning and Decision Making
JOHN P O'DOHERTY, ALAN HAMPTON, HACKJIN KIM

Splitting the Difference. How Does the Brain Code Reward Episodes?
BRIAN KNUTSON, G. ELLIOTT WIMMER

Reward-Related Responses in the Human Striatum
MAURICIO R DELGADO

Integration of Cognitive and Motivational Information in the Primate Lateral Prefrontal Cortex
MASAMICHI SAKAGAMI, MASATAKA WATANABE

Mechanisms of Reinforcement Learning and Decision Making in the Primate Dorsolateral Prefrontal Cortex
DAEYEOL LEE, HYOJUNG SEO


Resisting the Power of Temptations. The Right Prefrontal Cortex and Self-Control
DARIA KNOCH, ERNST FEHR

Adding Prediction Risk to the Theory of Reward Learning
KERSTIN PREUSCHOFF, PETER BOSSAERTS

Still at the Choice-Point. Action Selection and Initiation in Instrumental Conditioning
BERNARD W BALLEINE, SEAN B OSTLUND

Plastic Corticostriatal Circuits for Action Learning. What's Dopamine Got to Do with It?
RUI M COSTA

Striatal Contributions to Reward and Decision Making. Making Sense of Regional Variations in a Reiterated Processing Matrix
JEFFERY R WICKENS, CHRISTOPHER S BUDD, BRIAN I HYLAND, GORDON W ARBUTHNOTT

Multiple Representations of Belief States and Action Values in Corticobasal Ganglia Loops
KAZUYUKI SAMEJIMA, KENJI DOYA

Basal Ganglia Mechanisms of Reward-Oriented Eye Movement
OKIHIDE HIKOSAKA

Contextual Control of Choice Performance. Behavioral, Neurobiological, and Neurochemical Influences
JOSEPHINE E HADDON, SIMON KILLCROSS

A "Good Parent" Function of Dopamine. Transient Modulation of Learning and Performance during Early Stages of Training
JON C HORVITZ, WON YUNG CHOI, CECILE MORVAN, YANIV EYNY, PETER D BALSAM

Serotonin and the Evaluation of Future Rewards. Theory, Experiments, and Possible Neural Mechanisms
NICOLAS SCHWEIGHOFER, SAORI C TANAKA, KENJI DOYA

Receptor Theory and Biological Constraints on Value
GREGORY S BERNS, C. MONICA CAPRA, CHARLES NOUSSAIR

Reward Prediction Error Computation in the Pedunculopontine Tegmental Nucleus Neurons
YASUSHI KOBAYASHI, KEN-ICHI OKADA

A Computational Model of Craving and Obsession
A. DAVID REDISH, ADAM JOHNSON

Calculating the Cost of Acting in Frontal Cortex
MARK E WALTON, PETER H RUDEBECK, DAVID M BANNERMAN, MATTHEW F. S RUSHWORTH

Cost, Benefit, Tonic, Phasic. What Do Response Rates Tell Us about Dopamine and Motivation?
YAEL NIV



7/25/07

More than Trust: Oxytocin Increases Generosity

It has been known for a few years that oxytocin (OT) increases trust (Kosfeld et al., 2005): in the Trust game, players transferred more money after inhaling OT. Recent research now suggests that it also increases generosity. In a paper presented at the ESA (Economic Science Association, an empirically oriented economics society) meeting, Stanton, Ahmadi and Zak (from the Center for Neuroeconomics Studies) showed that Ultimatum players in the OT group offered more money (21% more) than those in the placebo group: $4.86 (OT) vs. $4.03 (placebo).
They defined generosity as "an offer that exceeds the average of the MinAccept" (p. 9), i.e., the minimum acceptable offer set by the "responder" in the Ultimatum game. In this case, offers over $2.97 were categorized as generous. Again, OT subjects displayed more generosity: the OT group offered $1.86 (80% more) over the minimum acceptable offer, while placebo subjects offered $1.03.
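The reported percentages can be checked with quick arithmetic on the figures quoted above (the small gaps between the computed and reported generosity amounts are presumably rounding in the paper):

```python
ot_offer, placebo_offer = 4.86, 4.03
min_accept = 2.97  # average minimum acceptable offer (generosity threshold)

# "21% more" offered under oxytocin:
offer_increase = (ot_offer - placebo_offer) / placebo_offer  # ~0.21

# Generosity = amount offered above the minimum acceptable offer:
ot_generosity = ot_offer - min_accept            # ~1.89 (reported: $1.86)
placebo_generosity = placebo_offer - min_accept  # ~1.06 (reported: $1.03)
# ot_generosity / placebo_generosity ~ 1.8, i.e. roughly "80% more"
```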


Interestingly, OT subjects did not turn into pure altruists: they made offers in the Dictator game (mean $3.77) similar to those of placebo subjects (mean $3.58, no significant difference). Thus the motive is neither direct nor indirect reciprocity (the Ultimatum games were anonymous and one-shot, so no tit-for-tat or reputation is involved). Nor is it pure altruism, according to Stanton et al. (or "strong reciprocity"; see this post on the distinction between types of reciprocity), because the threat of the MinAccept compels players to make fair offers. They conclude that generosity is enhanced because OT affects empathy. Subjects simulate the perspective of the other player in the Ultimatum game, but not in the Dictator game. Hence generosity "runs" on empathy: in an empathizing context (Ultimatum) subjects are more generous, but in a non-empathizing context they are not; in the Dictator game, it is not necessary to know the other player's strategy in order to compute the optimal move, since her actions have no impact on the proposer's behavior. It would be interesting to see whether there is a different OT effect in basic vs. reenactive empathy (sensorimotor vs. deliberative empathy; see this post).

Interested readers should also read Neural Substrates of Decision-Making in Economic Games by one of the authors of the study (Stanton): in her PhD thesis, she describes many neuroeconomic experiments.

[Anecdote: I once asked people at the ESA why they named their society that way: all the presented papers were experimental, so I thought the name should reflect the empirical nature of the conference. They replied judiciously: "Because we think that's how economics should be done"...]




7/23/07

The selective impairment of prosocial sentiments and the moral brain


Philosophers often describe the history of philosophy as a dispute between Plato (read: idealism/rationalism) and Aristotle (read: materialism/empiricism). This is of course extremely reductionist, since many conceptual and empirical issues were not addressed in Ancient Greece, but there is a non-trivial interpretation of the history of thought according to which controversies often involve these two positions. In moral philosophy and moral psychology, however, the big figures are Hume and Kant. Is morality based on passions (Hume) or on reason (Kant)? This is another simplification, but again it frames the debate. In the last issue of Trends in Cognitive Sciences (TICS), three papers discuss the reason/emotion debate and provide more nuanced models.

Recently (see this previous post), Koenigs and collaborators (2007b) explored the consequences of ventromedial prefrontal cortex (VMPC) lesions on moral reasoning and showed that these patients tend to rely a little more on a 'utilitarian' scheme (cost/benefit) and less on a deontological scheme (moral do's and don'ts), thus suggesting that emotions are involved in deontological moral judgment. These patients, however, were also more emotional in the Ultimatum game, rejecting more offers than normal subjects. So are they emotional or not? In the first TICS paper, Moll and de Oliveira-Souza review the Koenigs et al. (2007a) experiment and argue that neither somatic-marker theory nor dual-process theory explains these findings. They propose that a selective impairment of prosocial sentiments explains why the same patients are both less emotional in moral dilemmas and more emotional in economic bargaining: these patients feel less compassion but still feel anger. In a second paper, Greene (author of the research on trolley problems; see his homepage) challenges this interpretation and puts forward his dual-process view (reason-emotion interaction). Moll and de Oliveira-Souza reply in the third paper. As you can see, there is still a debate between Kant and Hume, but cognitive neuroscience provides new tools for both sides of the debate, and maybe even a blurring of these opposites.





7/19/07

Beautiful picture of brain areas involved in decision-making

Found yesterday in a paper by Sanfey (a nice review paper, by the way):




"Fig. 2. Map of brain areas commonly found to be activated in decision-making studies. The sagittal section (A) shows the location of the anterior cingulate cortex (ACC), medial prefrontal cortex (MPFC), orbitofrontal cortex (OFC), nucleus accumbens (NA), and substantia nigra (SN). The lateral view (B) shows the location of the dorsolateral prefrontal cortex (DLPFC) and lateral intraparietal area (LIP). The axial section (C; cut along the white line in A and B) shows the location of the insula (INS) and basal ganglia (BG)."
from: