Natural Rationality | decision-making in the economy of nature

6/24/08

Are farmers and fishermen more holistic than herders?

In previous work, Richard Nisbett and his collaborators convincingly suggested that Eastern and Western populations have different cognitive styles (holistic vs. analytic). In a new study published in PNAS, he and his collaborators show how "ecocultural factors" (living in a farming, fishing, or herding community) influence cognitive processes.

It has been proposed that social interdependence fosters holistic cognition, that is, a tendency to attend to the broad perceptual and cognitive field, rather than to a focal object and its properties, and a tendency to reason in terms of relationships and similarities, rather than rules and categories. This hypothesis has been supported mostly by demonstrations showing that East Asians, who are relatively interdependent, reason and perceive in a more holistic fashion than do Westerners. We examined holistic cognitive tendencies in attention, categorization, and reasoning in three types of communities that belong to the same national, geographic, ethnic, and linguistic regions and yet vary in their degree of social interdependence: farming, fishing, and herding communities in Turkey's eastern Black Sea region. As predicted, members of farming and fishing communities, which emphasize harmonious social interdependence, exhibited greater holistic tendencies than members of herding communities, which emphasize individual decision making and foster social independence. Our findings have implications for how ecocultural factors may have lasting consequences on important aspects of cognition.



3/8/08

Darwin's evolutionary social psychology

While reading chapter 5 of Darwin's The Descent of Man, I noticed that Darwin reconstructs human evolutionary history as--forgive the anachronism--a gene-culture co-evolution. Of course, there was no concept of the gene in Darwin's time, so the correct label would be "nature-culture co-evolution", but I was amazed to see how close his intuitions are to current theories. Basically, he described our evolution as an evolutionary arms race (another anachronism) between social life and intelligence. The process goes through three phases: social instinct, social intelligence, and social reasoning:

1. Social instincts: learning and sympathy

General intelligence
  • It deserves notice that, as soon as the progenitors of man became social (and this probably occurred at a very early period), the principle of imitation, and reason, and experience would have increased, and much modified the intellectual powers in a way, of which we see only traces in the lower animals.
Social instincts: sympathy, fidelity, and courage
  • In order that primeval men, or the apelike progenitors of man, should become social, they must have acquired the same instinctive feelings, which impel other animals to live in a body; and they no doubt exhibited the same general disposition. They would have felt uneasy when separated from their comrades, for whom they would have felt some degree of love; they would have warned each other of danger, and have given mutual aid in attack or defence. All this implies some degree of sympathy, fidelity, and courage.
2. Social intelligence--reciprocity and approbation

Reciprocity:
  • as the reasoning powers and foresight of the members became improved, each man would soon learn that if he aided his fellow-men, he would commonly receive aid in return. From this low motive he might acquire the habit of aiding his fellows; and the habit of performing benevolent actions certainly strengthens the feeling of sympathy which gives the first impulse to benevolent actions. Habits, moreover, followed during many generations probably tend to be inherited.
Approbation
  • [a] powerful stimulus to the development of the social virtues, is afforded by the praise and the blame of our fellow-men. primeval man, at a very remote period, was influenced by the praise and blame of his fellows. It is obvious, that the members of the same tribe would approve of conduct which appeared to them to be for the general good, and would reprobate that which appeared evil.
3. Social reasoning--norms, rules and morality
  • With increased experience and reason, man perceives the more remote consequences of his actions, and the self-regarding virtues, such as temperance, chastity, &c., which during early times are, as we have before seen, utterly disregarded, come to be highly esteemed or even held sacred.



10/24/07

Stich on Morality and Cognition

Stephen Stich, one of the most experimentally oriented philosophers working today, recently gave a series of talks in Paris entitled "Moral Theory Meets Cognitive Science: How the Cognitive Science Can Transform Traditional Debates". You can watch the videos of the four talks online:



10/4/07

A distributed conception of decision-making

In a previous post, I suggested that there is something wrong with the standard (“cogitative”) conception of decision-making in psychology. In this post, I would like to outline an alternative conception, what we might call the “distributed conception”.

A close look at robotics suggests that decision-making should not be construed as a deliberative process. Deliberative control (Mataric, 1997) or sense-model-plan-act (SMPA) architectures have been unsuccessful in controlling autonomous robots (Brooks, 1999; Pfeifer & Scheier, 1999). In these architectures (e.g. Nilsson, 1984), "what to do?" was represented as a logical problem. Sensors or cameras represented the perceptible environment, while internal processors converted sensory inputs into first-order predicate calculus. From this explicit model of its environment, the robot's central planner transformed a symbolic description of the world into a sequence of actions (see Hu & Brady, 1996, for a survey). Decision-making was handled by an expert system or a similar device. Thus the flow of information was one-way only: sensors → model → planner → effectors.
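
To make the contrast concrete, here is a minimal sketch of an SMPA loop on a toy world (all names and details are mine for illustration, not Nilsson's or any real robot's code):

```python
# A toy sense-model-plan-act (SMPA) loop on a 1-D corridor.
# Everything here is illustrative; it is not code from Shakey or Nilsson (1984).

def sense(world):
    # Sensors report the robot's position and the goal position.
    return {"robot": world["robot"], "goal": world["goal"]}

def model(percepts):
    # Build an explicit symbolic model of the environment.
    return dict(percepts)

def plan(m):
    # Central planner: derive a complete action sequence from the model.
    step = 1 if m["goal"] > m["robot"] else -1
    move = "move_right" if step > 0 else "move_left"
    return [move] * abs(m["goal"] - m["robot"])

def act(world, actions):
    # Execute the whole plan open-loop: no re-sensing along the way.
    for a in actions:
        world["robot"] += 1 if a == "move_right" else -1

world = {"robot": 0, "goal": 3}
act(world, plan(model(sense(world))))   # sensors -> model -> planner -> effectors
print(world["robot"])                   # 3, but only because nothing changed mid-plan
```

The one-way flow is the weak point: if the world changes while the planner is running, the model is already stale.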

SMPA architectures could be effective, but only in environments carefully designed for the robot. The colors, lighting and disposition of objects were optimally configured to simplify perception and movement. Brooks describes how the rooms where autonomous robots operated were set up:

The walls were of a uniform color and carefully lighted, with dark rubber baseboards, making clear boundaries with the lighter colored floor. (…) The blocks and wedges were painted different colors on different planar surfaces. (….) Blocks and wedges were relatively rare in the environment, eliminating problems due to partial obscurations (Brooks, 1999, p. 62)

Thus the cogitative conception of decision-making, and its SMPA implementations, had to be abandoned. If it did not work for mobile robots, it is justified to argue that the cogitative conception also has to be abandoned for cognitive agents in general. Agents do not make decisions simply by central planning and explicit model manipulation, but by coordinating multiple sensorimotor mechanisms. In order to design robots able to imitate people, for instance, roboticists build systems that control their behavior through multiple partial models. Mataric's (2002) robots, for instance, learn to imitate by coordinating the following modules (see the sketch after this list):

  1. a selective attentional mechanism that extracts salient visual information (another agent's face, for instance)
  2. a sensorimotor mapping system that transforms visual input into motor programs
  3. a repertoire of motor primitives
  4. a classification-based learning mechanism that learns from visuo-motor mappings
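
Here is a toy sketch of how such partial modules might be coordinated (module names and logic are illustrative, not Mataric's actual architecture):

```python
# Toy coordination of partial modules, loosely inspired by behavior-based
# imitation architectures. All names, values and logic are invented.
import random

def attend(visual_scene):
    # Selective attention: pick the most salient feature in the scene.
    return max(visual_scene, key=visual_scene.get)

MOTOR_PRIMITIVES = {"face": "orient_head", "hand": "reach", "object": "grasp"}

def map_to_motor(salient_feature):
    # Sensorimotor mapping: visual input -> motor primitive.
    return MOTOR_PRIMITIVES.get(salient_feature, "idle")

def learn(history, feature, action, success):
    # Classification-based learning over visuo-motor pairs.
    history.setdefault((feature, action), []).append(success)

history = {}
scene = {"face": 0.9, "hand": 0.4, "object": 0.2}   # saliency scores
feature = attend(scene)
action = map_to_motor(feature)
learn(history, feature, action, success=random.random() > 0.5)
print(feature, "->", action)   # face -> orient_head
```

No central planner and no unified world model: each module does one partial job, and behavior emerges from their coordination.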

Neuroeconomics also suggests another--similar--avenue: there is no brain area, circuit or mechanism specialized in decision-making, but rather a collection of neural modules. Certain areas specialize in visual-saccadic decision-making (Platt & Glimcher, 1999). Social neuroeconomics indicates that decisions in experimental games are mainly affective computations: choice behavior in these games is reliably correlated with neural activations of social emotions such as the ‘warm glow’ of cooperation (Rilling et al., 2002), the ‘sweet taste’ of revenge (de Quervain et al., 2004) or the ‘moral disgust’ of unfairness (Sanfey et al., 2003). Subjects without affective experiences or affective anticipations are unable to make rational decisions, as Damasio and his colleagues discovered. Damasio found that subjects with lesions in the ventromedial prefrontal cortex (vmPFC, a brain area above the eye sockets) had severe problems coping with everyday tasks (Damasio, 1994). They were unable to plan meetings; they lost their money, family or social status. They were, however, completely functional in reasoning or problem-solving tasks. Moreover, Damasio and his collaborators found that these subjects had weaker affective reactions. They did not feel sad about their situation, even though they perfectly understood what “sad” means, and they seemed unable to learn from bad experiences. The researchers concluded that these subjects were unable to use emotions to aid decision-making, a hypothesis that also implies that in normal subjects, emotions do aid decision-making.

Consequently, the “Distributed Conception of Decision-Making” suggests that decision-making is:

Sensorimotor: the mechanisms of decision-making are not only, and not necessarily, intellectual, high-level and explicit. Decision-making is the whole organism’s sensorimotor control.
Situated: a decision is not only a step-by-step internal computation, but also a continuous and dynamic adjustment between the agent and its environment that develops across the whole lifespan. Decision-making is always physically and (most of the time) socially situated: ecological situatedness is both a constraint on decision-making and a set of informational resources that helps agents cope with it.
Psychology should do more than document our inability to follow Bayesian reasoning in paper-and-pencil experiments; it should study our situated sensorimotor control capacities. Decision-making should not be a secondary topic for psychology but, following Gintis, “the central organizing principle of psychology” (Gintis, 2007, p. 1). Decision-making is more than an activity we consciously engage in occasionally: it is the very condition of existence (as Herrnstein said, “all behaviour is choice”; Herrnstein, 1961).

Therefore, deciding should not be studied as a separate topic (e.g. perception), an occasional activity (e.g. chess-playing) or a high-level competence (e.g. logical inference), but on the model of robotic control. A complete, explicit model of the environment, manipulated by a central planner, is not useful for robots. New Robotics (Brooks, 1999) revealed that effective and efficient decision-making is achieved through multiple partial models updated in real time. There is no need to integrate these models into a unified representation or a common code: distributed architectures, where many processes run in parallel, achieve better results. As Barsalou et al. (2007) argue, cognition is coordinated non-cognition; similarly, decision-making is coordinated non-decision-making.

If decision-making is the central organizing principle of psychology, all the branches of psychology can be understood as research fields that investigate different aspects of decision-making. Abnormal psychology explains how deficient mechanisms impair decision-making. Behavioral psychology focuses on choice behavior and behavioral regularities. Cognitive psychology describes the mechanisms of valuation, goal representation and preference, and how they contribute to decision-making. Comparative psychology analyzes the variations in neural, behavioral and cognitive processes among different clades. Developmental psychology traces the development of decision-making mechanisms across the lifespan. Neuropsychology identifies the neural substrates of these mechanisms. Personality psychology explains interindividual variations in decision-making, our various decision-making “profiles”. Social psychology can shed light on social decision-making, that is, either collective decision-making (when groups or institutions make decisions) or individual decision-making in social contexts. Finally, we could also add environmental psychology (how agents use their environment to simplify their decisions) and evolutionary psychology (how decision-making mechanisms are, or are not, adaptations).



References
  • Barsalou, Breazeal, & Smith. (2007). Cognition as coordinated non-cognition. Cognitive Processing, 8(2), 79-91.
  • Brooks, R. A. (1999). Cambrian Intelligence: The Early History of the New AI. Cambridge, Mass.: MIT Press.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary: Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Damasio, A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Putnam.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., & Fehr, E. (2004). The Neural Basis of Altruistic Punishment. Science, 305(5688), 1254-1258.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Herrnstein, R. J. (1961). Relative and Absolute Strength of Response as a Function of Frequency of Reinforcement. J Exp Anal Behav., 4(4), 267–272.
  • Hu, H., & Brady, M. (1996). A Parallel Processing Architecture for Sensor-Based Control of Intelligent Mobile Robots. Robotics and Autonomous Systems, 17(4), 235-257.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Mataric, M. J. (1997). Behaviour-Based Control: Examples from Navigation, Learning, and Group Behaviour. Journal of Experimental & Theoretical Artificial Intelligence, 9(2 - 3), 323-336.
  • Mataric, M. J. (2002). Sensory-Motor Primitives as a Basis for Imitation: Linking Perception to Action and Biology to Robotics. Imitation in Animals and Artifacts, 391–422.
  • Nilsson, N. J. (1984). Shakey the Robot. SRI International.
  • Pfeifer, R., & Scheier, C. (1999). Understanding Intelligence. Cambridge, Mass.: MIT Press.
  • Platt, M. L., & Glimcher, P. W. (1999). Neural Correlates of Decision Variables in Parietal Cortex. Nature, 400(6741), 233-238.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300(5626), 1755-1758.



9/29/07

How is (internal) irrationality possible?

Much unhappiness (...) has less to do with not getting what we want, and more to do with not wanting what we like. (Gilbert & Wilson, 2000)

Yes, we should make choices by multiplying probabilities and utilities, but how can we possibly do this if we can’t estimate those utilities beforehand? (Gilbert, 2006)

When we preview the future and prefeel its consequences, we are soliciting advice from our ancestors. This method is ingenious but imperfect. (Gilbert, et al. 2007)


Although we easily and intuitively assess each other’s behavior, speech, or character as irrational, providing a non-trivial account of irrationality is tricky (something we philosophers like to deal with!). Let’s distinguish internal and external assessments of rationality: an internal (or subjective) assessment of rationality is an evaluation of the coherence of intentions, actions and plans. An external (or objective) assessment of rationality is an evaluation of the effectiveness of a rule or procedure: it assesses the optimality of the rule for achieving a certain goal. An action can be rational from the first perspective but not from the second one, and vice versa. Hence subjects’ poor performance in probabilistic reasoning can be internally rational without being externally rational: the Gambler’s Fallacy is and will always be a fallacy, but it is possible that fallacious reasoners follow coherent rules, maximizing an unorthodox utility function. Consequently, it is easy to understand how one can be externally irrational, but less easy to make sense of internal irrationality.
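
A quick simulation makes the external point vivid (my own illustration, not from the papers cited here): even after a streak of heads, a fair coin's next flip is still 50/50.

```python
# Why the gambler's fallacy is, externally, a fallacy: after a streak of
# heads, the chance of another head is unchanged.
import random

def p_head_after_streak(n_flips=100_000, streak=3):
    flips = [random.random() < 0.5 for _ in range(n_flips)]   # True = heads
    followers = [flips[i] for i in range(streak, n_flips)
                 if all(flips[i - streak:i])]   # flips preceded by a full streak
    return sum(followers) / len(followers)

print(round(p_head_after_streak(), 2))   # ~0.5: the coin has no memory
```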

An interesting suggestion comes from hedonic psychology, especially Dan Gilbert’s research: irrationality is possible if agents fail to want the things they like. Gilbert’s research focuses on affective forecasting, i.e., forecasting one’s affect (emotional state) in the future (Gilbert, 2006; Wilson & Gilbert, 2003): anticipating the valence, intensity, duration and nature of specific emotions. Just as Tversky and Kahneman studied biases in probabilistic reasoning, he and his collaborators study biases in hedonic reasoning.

In many cases, for instance, people do not like or dislike an event as much as they thought they would. They want things that do not promote their welfare, and fail to want things that would. This is what Gilbert calls “miswanting”. We miswant, Gilbert explains, because of affective forecasting biases.

Take for instance impact biases: subjects overestimate the length (durability bias) or intensity (intensity bias) of future emotional states (Gilbert et al., 1998):

“Research suggests that people routinely overestimate the emotional impact of negative events ranging from professional failures and romantic breakups to electoral losses, sports defeats, and medical setbacks”. (Gilbert et al., 2004).

They also underestimate the emotional impact of positive events such as winning a lottery (Brickman et al., 1978): newly rich lottery winners rated their happiness at that stage of their life as only 4.0 (on a 6-point scale, from 0 to 5), which does not differ significantly from the rating of the control subjects. Also surprising to many people is the fact that paraplegics and quadriplegics rated their lives at 3.0, which is above the midpoint of the scale (2.5). In another study, Boyd et al. (1990) solicited the utility of life with a colostomy from several different groups: patients who had rectal cancer and had been treated by radiation, patients who had rectal cancer and had been treated by a colostomy, physicians with experience treating patients with gastrointestinal malignancies, and two groups of healthy individuals. The patients with a colostomy and the physicians rated life with a colostomy significantly higher than did the other three groups. Another bias is the empathy gap: humans fail to predict correctly how they will feel in the future, i.e., what kind of emotional state they will be in. Sometimes we fail to take into account how much our psychological “immune system” will ameliorate our reactions to negative events: people do not realize how they will rationalize negative outcomes once they occur (immune neglect). People also often mispredict regret (Gilbert et al., 2004b):
the top six biggest regrets in life center on (in descending order) education, career, romance, parenting, the self, and leisure. (…) people's biggest regrets are a reflection of where in life they see their largest opportunities; that is, where they see tangible prospects for change, growth, and renewal. (Roese & Summerville, 2005).
So a perfectly rational agent, at time t, would choose to do X at t+1 given what she expects her future valuation of X to be. As these studies show, however, we are bad predictors of our own future subjective appreciation. The person we are at t+1 may not totally agree with the person we were at t. So, in one sense, this gives a non-trivial meaning to internal irrationality: since our affective forecasting competence is biased, we may not always choose what we like or like what we choose. Hedonic psychology might have identified an incoherence between intentions, actions and plans, an internal failure in our practical rationality.




References

  • Berns, G. S., Chappelow, J., Cekic, M., Zink, C. F., Pagnoni, G., & Martin-Skurski, M. E. (2006). Neurobiological Substrates of Dread. Science, 312(5774), 754-758.
  • Boyd, N. F., Sutherland, H. J., Heasman, K. Z., Tritchler, D. L., & Cummings, B. J. (1990). Whose Utilities for Decision Analysis? Med Decis Making, 10(1), 58-67.
  • Brickman, P., Coates, D., & Janoff-Bulman, R. (1978). Lottery Winners and Accident Victims: Is Happiness Relative? J Pers Soc Psychol, 36(8), 917-927.
  • Gilbert, D. T. (2006). Stumbling on Happiness (1st ed.). New York: A.A. Knopf.
  • Gilbert, D. T., & Ebert, J. E. J. (2002). Decisions and Revisions: The Affective Forecasting of Changeable Outcomes. Journal of Personality and Social Psychology, 82(4), 503–514.
  • Gilbert, D. T., Lieberman, M. D., Morewedge, C. K., & Wilson, T. D. (2004a). The Peculiar Longevity of Things Not So Bad. Psychological Science, 15(1), 14-19.
  • Gilbert, D. T., Morewedge, C. K., Risen, J. L., & Wilson, T. D. (2004b). Looking Forward to Looking Backward. The Misprediction of Regret. Psychological Science, 15(5), 346-350.
  • Gilbert, D. T., Pinel, E. C., Wilson, T. D., Blumberg, S. J., & Wheatley, T. P. (1998). Immune Neglect: A Source of Durability Bias in Affective Forecasting. J Pers Soc Psychol, 75(3), 617-638.
  • Gilbert, D. T., & Wilson, T. D. (2000). Miswanting: Some Problems in the Forecasting of Future Affective States. Feeling and thinking: The role of affect in social cognition, 178–197.
  • Kermer, D. A., Driver-Linn, E., Wilson, T. D., & Gilbert, D. T. (2006). Loss Aversion Is an Affective Forecasting Error. Psychological Science, 17(8), 649-653.
  • Loomes, G., & Sugden, R. (1982). Regret Theory: An Alternative Theory of Rational Choice under Uncertainty. The Economic Journal, 92(368), 805-824.
  • Roese, N. J., & Summerville, A. (2005). What We Regret Most... And Why. Personality and Social Psychology Bulletin, 31(9), 1273.
  • Seidl, C. (2002). Preference Reversal. Journal of Economic Surveys, 16(5), 621-655.
  • Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics. Journal of Socio-Economics, 31(4), 329-342.
  • Srivastava, A., Locke, E. A., & Bartol, K. M. (2001). Money and Subjective Well-Being: It's Not the Money, It's the Motives. J Pers Soc Psychol, 80(6), 959-971.
  • Tversky, A., & Thaler, R. H. (1990). Anomalies: Preference Reversals. Journal of Economic Perspectives, 4, 201-211.
  • Wilson, T. D., & Gilbert, D. T. (2003). Affective Forecasting. Advances in experimental social psychology, 35, 345-411.



9/25/07

My brain has a politics of its own: neuropolitical musings on values and signal detection

Political psychology (just like politicians and voters) identifies two species of political values: left/right, or liberalism/conservatism. Reviewing many studies, Thornhill and Fincher (2007) summarize the cognitive styles of both ideologies:

Liberals tend to be: against, skeptical of, or cynical about familiar and traditional ideology; open to new experiences; individualistic and uncompromising, pursuing a place in the world on personal terms; private; disobedient, even rebellious rulebreakers; sensation seekers and pleasure seekers, including in the frequency and diversity of sexual experiences; socially and economically egalitarian; and risk prone; furthermore, they value diversity, imagination, intellectualism, logic, and scientific progress. Conservatives exhibit the reverse in all these domains. Moreover, the felt need for order, structure, closure, family and national security, salvation, sexual restraint, and self-control, in general, as well as the effort devoted to avoidance of change, novelty, unpredictability, ambiguity, and complexity, is a well-established characteristic of conservatives. (Thornhill & Fincher, 2007).
In their paper, Thornhill and Fincher present an evolutionary hypothesis to explain the liberalism/conservatism divide: both ideologies originate in an innate adaptation for attachment, parameterized by early childhood experiences. In another but related domain, Lakoff (2002) argued that liberals and conservatives differ in their metaphors: both view the nation or the State as a child, but they hold different perspectives on how to raise it: the Strict Father model (conservatives) or the Nurturant Parent model (liberals) (see an extensive description here). The first one

posits a traditional nuclear family, with the father having primary responsibility for supporting and protecting the family as well as the authority to set overall policy, to set strict rules for the behavior of children, and to enforce the rules [where] [s]elf-discipline, self-reliance, and respect for legitimate authority are the crucial things that children must learn.


while in the second:

Love, empathy, and nurturance are primary, and children become responsible, self-disciplined and self-reliant through being cared for, respected, and caring for others, both in their family and in their community.
In the October issue of Nature Neuroscience, a new research paper by Amodio et al. studies the "neurocognitive correlates of liberalism and conservatism". The study is more modest than the title suggests. Subjects performed the same test, a Go/No-Go task (click when you see a "W", don't click when it's an "M"). The experimenters habituated the subjects to the Go stimuli; on a few occasions, they were presented with the No-Go stimulus. Since subjects got used to the Go stimuli, the presentation of a No-Go creates a cognitive conflict between fast/automatic and slow/deliberative processing: you have to inhibit a habit in order to stay on goal when the habit goes in the wrong direction. The idea was to study the correlation between political values and conflict monitoring. The latter is partly mediated by the anterior cingulate cortex, a brain area widely studied in neuroeconomics and decision neuroscience (see this post). EEG recordings indicated that liberals' neural response to conflict was stronger when response inhibition was required. Hence liberalism is associated with a greater sensitivity to response conflict, while conservatism is associated with a greater persistence in the habitual pattern. These results, say the authors, are

consistent with the view that political orientation, in part, reflects individual differences in the functioning of a general mechanism related to cognitive control and self-regulation
Thus valuing tradition over novelty, or security over change, might have sensorimotor counterparts, or symptoms. Of course, this does not mean that the neural basis of conservatism has been identified, or the "liberal area", etc., but the study suggests how micro-tasks may help to elucidate, as the authors say in their closing sentence, "how abstract, seemingly ineffable constructs, such as ideology, are reflected in the human brain."
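
For readers unfamiliar with the paradigm, here is a toy version of the trial logic (entirely illustrative; not Amodio et al.'s actual design or parameters):

```python
# Toy Go/No-Go trial sequence: frequent Go ("W") trials build a prepotent
# "press" habit; rare No-Go ("M") trials require inhibiting it.
import random

def respond(stimulus, habit_strength=0.8):
    if stimulus == "W":
        return "press"                       # the trained, automatic response
    # On No-Go trials, a habitual agent sometimes presses anyway.
    return "press" if random.random() < habit_strength else "withhold"

trials = ["M" if random.random() < 0.2 else "W" for _ in range(100)]
false_alarms = sum(1 for s in trials if s == "M" and respond(s) == "press")
no_go = sum(1 for s in trials if s == "M")
print(f"failed inhibitions on No-Go trials: {false_alarms}/{no_go}")
```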

What this study--together with other data on conservatives and liberals--might justify is the following hypothesis: what if conservatives and liberals are natural kinds? That is, "homeostatic property clusters" (see Boyd 1991, 1999): categories of "things" formed by nature (like water, mammals, etc.), not by definition (like supralunar objects, non-cats, grue emeralds, etc.)? Things that share surface properties (political beliefs and behavior) whose co-occurrence can be explained by underlying mechanisms (neural processing of conflict monitoring)? Maybe our evolution as social animals required the interplay of tradition-oriented and novelty-oriented individuals, risk-prone and risk-averse agents. But why, in the first place, did evolution not select one type over the other? Here is another completely armchair hypothesis: in order to distribute the signal detection problem across the social body.

What kind of error would you rather make: a false positive (you identify a signal, but it's only noise) or a false negative (you think it's noise, but it's a signal)? A miss or a false alarm? That is the kind of problem modeled by signal detection theory (SDT): since there is always some noise and you are trying to detect a signal, you cannot know in advance, under radical uncertainty, what kind of policy you should stick to (risk-averse or risk-prone). "Signal" and "noise" are generic information-theoretic terms that may be applied to any situation where an agent tries to determine whether a stimulus is present:

[Figure: the four possible outcomes of signal detection: hits, misses (false negatives), false alarms (false positives), and correct rejections]

It is rather ironic that signal detection theorists employ the terms liberal* and conservative* (the "*" means that I am talking about SDT, not politics) to refer to different biases, or criteria, in signal detection. A liberal* criterion is more likely to set off a positive response (increasing the probability of a false positive), whereas a conservative* criterion is more likely to set off a negative response (increasing the probability of a false negative). The big problem in life is that in certain domains conservatism* pays, while in others liberalism* does (see Proust 2006): when identifying danger, a false negative is more expensive (better safe than sorry), whereas when looking for food a false positive can be more expensive (better satiated than exhausted). So a fixed criterion is not adaptive; but how to adjust the criterion properly? If you are an individual agent, you must alternate between liberal* and conservative* criteria based on your knowledge. But if you are part of a group, liberal* and conservative* biases may be distributed: certain individuals might be more liberal* (let's send them to stand and keep watch) and others more conservative* (let's send them foraging). Collectively, it could be a good solution (if it is enforced by norms of cooperation) to perpetual uncertainty and danger. So if our species evolved with a distribution of signal detection criteria, then we should have evolved different cognitive styles and personality traits that deal differently with uncertainty: those who favor habits, traditions and security, and the others. If liberal* and conservative* criteria are applied to other domains such as the family (an institution that existed before the State), you may end up with the Strict Father and Nurturant Parent models; when these models are applied to political decision-making, you may end up with liberals/conservatives (no "*"). That would give a new meaning to the idea that we are, by nature, political animals.
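
Here is a minimal sketch of how moving the criterion trades the two error types against each other, using the standard equal-variance Gaussian SDT model (the d' and criterion values are made up):

```python
# Liberal* vs. conservative* criteria in signal detection: lowering the
# criterion buys hits at the price of false alarms, and vice versa.
from statistics import NormalDist

N = NormalDist()    # standard normal distribution
d_prime = 1.0       # separation between the noise and signal distributions

def rates(criterion):
    hit = 1 - N.cdf(criterion - d_prime)   # P(respond "signal" | signal present)
    fa = 1 - N.cdf(criterion)              # P(respond "signal" | noise only)
    return hit, fa

for label, c in [("liberal*", -0.5), ("neutral", 0.5), ("conservative*", 1.5)]:
    hit, fa = rates(c)
    print(f"{label:14s} hits={hit:.2f}  false alarms={fa:.2f}")
```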






9/22/07

The Stuff of Thought

Unless you live on a desert island, you have probably heard of Steven Pinker's new book, The Stuff of Thought: Language as a Window into Human Nature.

If you are interested in knowing more about it, here are an online excerpt, a book review and an interview.

Last but not least, two video lectures by Pinker at TED Talks (an amazing collection of lectures by great scholars):

" In an exclusive preview of his new book, The Stuff of Thought, Steven Pinker looks at language, and the way it expresses the workings of our minds. By analyzing common sentences and words, he shows us how, in what we say and how we say it, we're communicating much more than we realize."

"In a preview of his next book, Steven Pinker takes on violence. We live in violent times, an era of heightened warfare, genocide and senseless crime. Or so we've come to believe. Pinker charts a history of violence from Biblical times through the present, and says modern society has a little less to feel guilty about."


9/21/07

Neuroeconomics, folk-psychology, and eliminativism



conventional wisdom has long modeled our internal cognitive processes, quite wrongly, as just an inner version of the public arguments and justifications that we learn, as children, to construct and evaluate in the social space of the dinner table and the marketplace. Those social activities are of vital importance to our collective commerce, both social and intellectual, but they are an evolutionary novelty, unreflected in the brain’s basic modes of decision-making
(Churchland, 2006, p. 31).


The folk-psychological model of rationality construes rational decision-making as the product of practical reasoning by which an agent infers, from her beliefs and desires, the right action to perform. True, when we are asked to explain or predict actions, our intuitions lead us to describe them as the product of intentional states. In a series of studies, Malle and Knobe (1997, 2001) showed that folk psychology is a language game in which beliefs, desires and intentions are the main players. But the fact that we use the intentional idiom does not mean that it picks out the real causes of action. This is where realist, instrumentalist and eliminativist accounts conflict. A realist account of beliefs and desires takes them to be real causal entities; an instrumentalist account treats them as useful fictions; an eliminativist account suggests that they are embedded in a faulty theory of mental functioning that should be eliminated (see Churchland & Churchland, 1998; Dennett, 1987; Fodor, 1981). Can neuroeconomics shed light on this traditional debate in philosophy and cognitive science?

Neuroeconomics, I suggest, supports an eliminativist approach to cognition. Just as contemporary chemistry does not explain combustion by the release of phlogiston (a substance once supposed to exist in combustible bodies), cognitive science should stop explaining actions as the product of beliefs and desires. Behavioral regularities and neural mechanisms are sufficient to explain decisions. When subjects evaluate whether or not they would buy a product, and whether or not the price seems justified, how informative is it to cite propositional attitudes as causes? The real entities involved in decision-making are neural mechanisms involved in hedonic feelings, cognitive control, emotional modulation, conflict monitoring, planning, etc. Preferences, utility functions or practical reasoning can explain purchasing, but they do not posit entities that can enter the “causal nexus” (Salmon, 1984). Neuroeconomics explains purchasing behavior not as an inference from beliefs and desires to action, but as a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of paying (elicited in the insula). Prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase (Knutson et al., 2007). Hence the explanation of purchasing cites causes (brain areas) that explain the behavior as the product of a higher activation in prefrontal areas, and that justify the decision to purchase: the agent had a stronger incentive to buy. A fully mechanistic account would, of course, detail the algorithmic processes performed by each area.
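
As a toy illustration of this explanatory style (the weights and activation values are invented, not Knutson et al.'s actual model), purchase can be predicted from regional activations without any mention of beliefs or desires:

```python
# A toy version of the idea behind Knutson et al. (2007): predict purchases
# from regional activations rather than from propositional attitudes.
from math import exp

def purchase_probability(nacc, insula, mpfc, w=(2.0, -1.5, 1.0)):
    """Logistic tradeoff: anticipated pleasure (NAcc, MPFC) vs. pain (insula)."""
    score = w[0] * nacc + w[1] * insula + w[2] * mpfc
    return 1 / (1 + exp(-score))

# Strong accumbens response to the product, weak insular response to the price:
print(round(purchase_probability(nacc=0.8, insula=0.1, mpfc=0.3), 2))  # likely buy
# Weak accumbens response, strong insular response (price seems excessive):
print(round(purchase_probability(nacc=0.2, insula=0.9, mpfc=0.1), 2))  # likely pass
```
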
The belief-desire framework implicitly supposes that the causes of an action are those that an agent would verbally express when asked to justify her action. But on what grounds can this be justified?

Psychological and neural studies suggest rather a dissociation between the mechanisms that lead to actions and the mechanisms by which we explain them. Since Nisbett and Wilson's (1977) seminal studies, research in psychology has shown that the very act of explaining the intentional causes of our actions is a reconstructive process that can be faulty. Subjects give numerous reasons as to why they prefer one pair of socks (or other objects) to another, but they all prefer the last one on the right. The real explanation of their preference is a position effect, or right-hand bias. For some reason, subjects pick the right-hand pair and, post hoc, generate an explanation for this preference, a widely observed phenomenon. For instance, when subjects tasted samples of Pepsi and Coke with and without the brand’s label, they reported different preferences (McClure et al., 2004). Without labels, subjects evaluated both drinks similarly. When the drinks were labeled, subjects reported a stronger preference for Coke, and neuroimaging data mirrored this branding effect. Sensory information (taste) and cultural information (brand) are associated with different areas that interact so as to bias preferences. Without the label, the evaluation of the drink relies solely on sensory information. Subjects may motivate their preference for one beverage over another with many diverse arguments, but the real influence on their preference is the brand’s label. The conscious narratives we produce when rationalizing our actions are not “direct pipeline[s] to nonconscious mental processes” (Wilson & Dunn, 2004, p. 507) but approximate reconstructions. When our thoughts occur before an action, are consistent with the action and appear to be its only cause, we infer that these thoughts are the causes of the action, and rule out other internal or external causes (Wegner, 2002). But the fact that we rely on the belief-desire framework to explain our own and others’ actions as the product of intentional states does not constitute an argument for considering these states satisfying causal explanations of action.

The belief-desire framework might be a useful conceptual scheme for fast and frugal explanations, but that does not make folk-psychological constructs suitable for scientific explanation. In the same vein, if folkbiology were the sole foundation of biology, whales would still be categorized as fish. The nature of the biological world is not explained by our (faulty and biased) folkbiology, but by making explicit the mechanisms of natural selection, reproduction, cellular growth, etc. There is no reason to believe that our folk psychology is a better description of mental mechanisms. Beliefs, desires and intentions are folk-psychological constructs that have no counterpart in neuroscience. Motor control and action planning, for instance, are explained by different kinds of representations, such as forward and inverse models, not propositional attitudes (Kawato & Wolpert, 1998; Wolpert & Kawato, 1998). Consequently, the fact that we rely on folk psychology to explain actions does not constitute an argument for considering this naïve theory a reliable explanation of action. Saying that the sun rises every morning yields good predictions, and it could even explain why there is more heat and light at noon, but the predictive effectiveness of the sun-rising framework does not justify its use as a scientific theory.

As many philosophers of science have suggested, a genuine explanation is mechanistic: it consists in breaking a system into parts and processes, and explaining how these parts and processes cause the system to behave the way it does (Bechtel & Abrahamsen, 2005; Craver, 2001; Machamer et al., 2000). Folk psychology may save the phenomena; it still does not propose causal parts and processes. More generally, the problem with the belief-desire framework is that it is a description of our attitude toward the things we call "agents", not a description of what constitutes the true nature of agents. It thus conflates the map and the territory. Moreover, conceptual advances are made when objects are described and classified according to their objective properties. A chemical theory that classified elements according to their propensity to quench thirst would be nonsense (although it could be useful in other contexts). At best, the belief-desire framework could be considered an Everyday Handbook of Intentional Language.

References

  • Bechtel, W., & Abrahamsen, A. (2005). Explanation: A Mechanist Alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421-441.
  • Churchland, P. M. (2006). Into the Brain: Where Philosophy Should Go from Here. Topoi, 25(1), 29-32.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary : Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Craver, C. F. (2001). Role Functions, Mechanisms, and Hierarchy. Philosophy of Science, 68, 53-74.
  • Dennett, D. C. (1987). The Intentional Stance. Cambridge, Mass.: MIT Press.
  • Fodor, J. A. (1981). Representations : Philosophical Essays on the Foundations of Cognitive Science (1st MIT Press ed.). Cambridge, Mass.: MIT Press.
  • Kawato, M., & Wolpert, D. M. (1998). Internal Models for Motor Control. Novartis Found Symp, 218, 291-304; discussion 304-297.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
  • Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking About Mechanisms. Philosophy of Science, 67, 1-24.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Malle, B. F., & Knobe, J. (2001). The Distinction between Desire and Intention: A Folk-Conceptual Analysis. In B. F. M. L. J. Moses & D. A. Baldwin (Eds.), Intentions and Intentionality: Foundations of Social Cognition (pp. 45-67). Cambridge, MA: MIT Press.
  • McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural Correlates of Behavioral Preference for Culturally Familiar Drinks. Neuron, 44(2), 379-387.
  • Nisbett, R. E., & Wilson, T. D. (1977). Telling More Than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84, 231-259.
  • Salmon, W. C. (1984). Scientific Explanation and the Causal Structure of the World. Princeton, N.J.: Princeton University Press.
  • Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, Mass.: MIT Press.
  • Wilson, T. D., & Dunn, E. W. (2004). Self-Knowledge: Its Limits, Value, and Potential for Improvement. Annual Review of Psychology, 55(1), 493-518.
  • Wolpert, D. M., & Kawato, M. (1998). Multiple Paired Forward and Inverse Models for Motor Control. Neural Networks, 11(7-8), 1317.



9/14/07

New paper (in French): Des lois de la pensée au cerveau-machine.

Comments welcome ;-) !

Hardy-Vallée, B. (forthcoming) Des lois de la pensée au cerveau-machine. In Informatique et Sciences Cognitives : Influences ou Confluences?, Paris, Ophrys/MSH, D. Kayser et C. Garbay (eds).



8/28/07

The Political Brain

A book review in the NYT of Drew Westen's new book, "The Political Brain".

Stop Making Sense

Published: August 26, 2007

Between 2000 and 2006, a specter haunted the community of fundamentalist Democrats. Members of this community looked around and observed their moral and intellectual superiority. They observed that their policies were better for the middle classes. And yet the middle classes did not support Democrats. They tended to vote, in large numbers, for the morally and intellectually inferior party, the one, moreover, that catered to the interests of the rich.

How could this be?

Serious thinkers set to work, and produced a long shelf of books answering this question. Their answers tended to rely on similar themes. First, Democrats lose because they are too intelligent. Their arguments are too complicated for American voters. Second, Democrats lose because they are too tolerant. They refuse to cater to racism and hatred. Finally, Democrats lose because they are not good at the dark art of politics. Republicans, though they are knuckle-dragging simpletons when it comes to policy, are devilishly clever when it comes to electioneering. They have brilliant political consultants like Lee Atwater and Karl Rove, who frame issues so fiendishly, they can fool the American people into voting against their own best interests. (READ MORE)




8/22/07

Just (don't) do it: The neural correlate of the veto process

A study published in the latest issue of the Journal of Neuroscience proposes that the dorsal fronto-median cortex (dFMC) is primarily involved in the inhibition of intentional action. Subjects had to inhibit a simple decision: choosing when to execute a simple key press while observing a rotating clock hand (the design is hence analogous to Benjamin Libet's famous experiment on free will, where he found that "subjects perceived the intention to press as occurring before a conscious experience of actually moving"). The difference is that this time the researchers have fMRI data (and not just EEG recordings), and that subjects must first choose and then inhibit. So here is the small piece of gray matter that inhibits your behavior:

[Figure: location of the dorsal fronto-median cortex (dFMC)]
(from Brass & Haggard, 2007)

Interestingly, their findings also suggest a top-down mechanism for action inhibition:

Cognitive models of inhibition have focused on inhibition of prepotent responses to external stimuli (Logan et al., 1984; Cohen et al., 1990). An important distinction is made between "lateral" competitive interaction between alternative representations at a single level (Rumelhart and McClelland, 1986) and inhibitory top-down control signals from hierarchically higher brain areas (Norman and Shallice, 1986). The first idea would be consistent with a general decision process being involved. If the dFMC decides between action and inhibition by a competitive interaction process, then representations corresponding to the possibilities of action and to non-action should initially both be active, leading to activation in both action trials and inhibition trials. Our finding of minimal dFMC activation in action trials (...) argues against a view of endogenous inhibition based on competitive interaction between alternatives and thus is also not consistent with the idea of the dFMC being involved in a general decision process. In contrast, our result is consistent with a specific top-down control signal gating the neural pathways linking intention to action. This view is supported by the negative correlation between dFMC activation and primary motor cortex activation.
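
The two hypotheses can be caricatured in a few lines of code (a conceptual sketch with invented values, not the authors' model):

```python
# Two toy models of inhibition, echoing the distinction in the quote above.

def lateral_competition(action_drive, inhibition_drive):
    # Both representations are active and compete at the same level;
    # on this view, inhibition trials should activate both alternatives.
    return "act" if action_drive > inhibition_drive else "withhold"

def top_down_gate(action_drive, gate_open=True):
    # A higher-level control signal gates the intention-to-action pathway;
    # when the gate closes, the drive never reaches the motor system.
    return "act" if gate_open and action_drive > 0 else "withhold"

print(lateral_competition(0.7, 0.9))        # withhold (decided by competition)
print(top_down_gate(0.7, gate_open=False))  # withhold (gated upstream)
```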






8/18/07

The logic of investment and the logic of obligation in Maasai culture

In the last post, I reported a study showing how a single sentence can promote fairness in the Dictator game. Now another study shows how a single sentence reduces trust and fairness in the Trust game. In this game, DM1s (first decision-makers) decide how much, if any, of a certain amount of money they would like to transfer to DM2s. The transferred money is tripled, and DM2s then decide how much, if any, they give back to DM1s. People tend to transfer and reciprocate considerable amounts of money, although orthodox game theory predicts that they would not.
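
The payoff structure of the game is easy to state in code (standard setup; the endowment and example numbers are mine):

```python
# The trust game: DM1 sends a transfer, it is tripled, and DM2 returns
# some fraction of the resulting pot.
ENDOWMENT = 10
MULTIPLIER = 3

def trust_game(transfer, return_fraction):
    pot = transfer * MULTIPLIER
    returned = pot * return_fraction
    dm1 = ENDOWMENT - transfer + returned
    dm2 = pot - returned
    return dm1, dm2

# Orthodox game theory: DM2 returns nothing, so DM1 should send nothing.
print(trust_game(0, 0.0))   # (10, 0)
# Observed behavior: substantial transfers and reciprocation, e.g.:
print(trust_game(5, 0.5))   # (12.5, 7.5): both better off than (10, 0)
```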

Lee Cronk had Kenyan Maa-speaking individuals play the trust game. Half of them were given no frame, while the others were told "This is an osotua game." Osotua (literally, "umbilical cord"), in Maasai, refers to certain gift-giving relationships:


Osotua relationships are started in many ways, but they usually begin with a request for a gift or favor. Such requests arise from genuine need and are limited to the amount actually needed. Gifts given in response to such requests are given freely (pesho) and from the heart (ltau) but, like the requests, are limited to what is actually needed. Because the economy is based on livestock, many osotua gifts take that form, but virtually any good or service may serve as an osotua gift. Once osotua is established, it is pervasive in the sense that one cannot get away from it. Osotua is also eternal. Once established, it cannot be destroyed, even if individuals who established the relationship die.

So what happened when the trust game was presented as osotua? Players transferred less money than in the regular, unframed situation. Also, there was a correlation between the amount DM1s gave and how much they expected to receive in the unframed condition, but not in the framed one. The framing, according to Cronk,

shifts game play away from the logic of investment and towards the mutual obligation of isotuatin to respond to one another's genuine needs, but only with what is genuinely needed.
Hence a simple cue can trigger a complete shift in perspective and alter players' behavior.


8/9/07

“Note that he relies on you”: how a single sentence enhances altruism in a Dictator game

In a recent study, experimental economist Pablo Branas-Garza showed that a single sentence is enough to promote fairness. He conducted two experiments, one in a classroom, the other a regular economic experiment, in which subjects had to play a Dictator game. In each experiment, there was a baseline condition (subjects were presented with a description of the game) and a framing condition: at the end of the text, a sentence read “Note that your opponent relies on you”. Result: adding that sentence increased donations. As fig. 1 shows, the framing boosts altruism and reduces selfishness: low offers are much rarer.

[Figure: distribution of donations in the baseline and framed conditions]
fig. 1, from Branas-Garza, 2007.



What is surprising is not that subjects are sensitive to certain moral-social cues, but that such a simple cue (seven words) is sufficient. The more we know about each other, the less selfish we are.




8/1/07

A basic mode of behavior: a review of reinforcement learning, from a computational and biological point of view.

The journal Frontiers of Interdisciplinary Research in the Life Sciences (HFSP Publishing) has made its first issue freely available online. The journal specializes in "innovative interdisciplinary research at the interface between biology and the physical sciences." An excellent paper (complete, clear, exhaustive) by Kenji Doya presents a state-of-the-art review of reinforcement learning, both as a computational theory (the procedures) and as a biological mechanism (neural activity). Exactly what the title announces: Reinforcement learning: Computational theory and biological mechanisms. The paper covers research in neuroscience, AI, computer science, robotics, neuroeconomics and psychology. See this nice schema of reinforcement learning in the brain:

(From the paper:) A schematic model of implementation of reinforcement learning in the cortico-basal ganglia circuit (Doya, 1999, 2000). Based on the state representation in the cortex, the striatum learns state and action value functions. The state value coding striatal neurons project to dopamine neurons, which sends the TD signal back to the striatum. The outputs of action value coding striatal neurons channel through the pallidum and the thalamus, where stochastic action selection may be realized
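
For the computational core of the review, here is a minimal temporal-difference (TD) learning update, the textbook algorithm whose error signal plays the role attributed to dopamine in the schema above (a generic sketch, not Doya's code):

```python
# Minimal TD(0) value learning on a two-state chain: the TD error delta
# plays the role attributed to dopamine neurons in the schema above.
alpha, gamma = 0.1, 0.9          # learning rate, discount factor
V = {"s0": 0.0, "s1": 0.0}       # state value function

# Episodes: from s0 we reach s1 (no reward); from s1 we get reward 1.
for _ in range(200):
    for s, s_next, r in [("s0", "s1", 0.0), ("s1", None, 1.0)]:
        v_next = V[s_next] if s_next else 0.0
        delta = r + gamma * v_next - V[s]   # TD error (the "dopamine signal")
        V[s] += alpha * delta               # update the value estimate

print({s: round(v, 2) for s, v in V.items()})   # V(s1) ~ 1.0, V(s0) ~ 0.9
```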

This stuff is exactly what a theory of natural rationality (and economics tout court) needs: plausible, tractable and real computational mechanisms grounded in neurobiology. As Selten once said, speaking of reinforcement learning:

a theory of bounded rationality cannot avoid this basic mode of behavior (Selten, 2001, p. 16)





7/31/07

Understanding two models of fairness: outcome-based inequity aversion vs. intention-based reciprocity

Why are people fair? Theoretical economics provides two generic models that fit the data. According to the first, inequity aversion, people are inequity-averse: they don't like situations where one agent is disadvantaged relative to another. This model is based on consequences. The other model is based on intentions: although the consequences of an action are important, what matters here is the intention that motivates the action. I won't discuss which approach is better (it is an ongoing debate in economics); I just want to share an extremely clear presentation of the two positions, found on pages 38-39 of van Winden, F. (2007), Affect and Fairness in Economics, Social Justice Research, 20(1), 35-52:

In inequity aversion models (Bolton and Ockenfels, 2000; Fehr and Schmidt, 1999), which focus on the outcomes or payoffs of social interactions, any deviation between an individual's payoff and the equitable payoff (e.g., the mean payoff or the opponent's payoff) is supposed to be negatively valued by that individual. More formally, the crucial difference between an outcome-based inequity aversion model and the homo economicus model is that, in addition to the argument representing the individual's own payoff, a new argument is inserted in the utility function showing the individual's inequity aversion (social preferences), as in the social utility model (see, e.g., Handgraaf et al., 2003; Loewenstein et al., 1989; Messick and Sentis, 1985). The individual is then assumed to maximize this adapted utility function.

In intention-based reciprocity models it is not the outcomes of the interaction as such that matter, but the intentions of the players (Rabin, 1993; see also Falk and Fischbacher, 2006). The idea is that people want to reciprocate perceived (un)kindness with (un)kindness, because this increases their utility. Obviously, beliefs play a crucial role here. More formally, in this case, in addition to an individual's own payoff a new argument is inserted in the utility function incorporating the assumed reciprocity motive. As a consequence, if someone is perceived as being kind it increases the individual's utility to reciprocate with being kind to this other person. Similarly, if the other is believed to be unkind, the individual is better off by being unkind as well, because this adds to her or his utility. Again, this adapted utility function is assumed to be maximized by the individual.
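
To make the outcome-based model concrete, here is the two-player Fehr and Schmidt (1999) utility function (the standard formulation; the parameter values below are illustrative):

```python
# Fehr-Schmidt (1999) inequity-aversion utility for player i in a two-player
# interaction. alpha: aversion to disadvantageous inequity ("envy");
# beta: aversion to advantageous inequity ("guilt"). Values are illustrative.

def fehr_schmidt_utility(own, other, alpha=2.0, beta=0.6):
    envy = max(other - own, 0)    # disadvantageous inequity
    guilt = max(own - other, 0)   # advantageous inequity
    return own - alpha * envy - beta * guilt

# A dictator splitting 10: pure self-interest says keep everything, but
# with these parameters the equal split yields the highest utility.
for keep in (10, 8, 5):
    print(keep, round(fehr_schmidt_utility(keep, 10 - keep), 2))
```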





7/30/07

A glimpse at the evolution of the fearing and trusting brain

Together with other mechanisms, the amygdala is involved in a complex neural circuitry that transforms photons hitting your eyes into the feeling that "Mom is mad at me because I broke her favorite vase". Often referred to as the fear center, the amygdala is more like an online supervisory system that sets levels of alert. Many of its activities are of a social nature. Explicit and implicit distrust of faces elicits amygdala activation (Winston et al., 2002), while trust is increased by amygdala impairment (Adolphs et al., 1998). Moreover, the trust-enhancing effect of oxytocin is mediated by amygdalar modulation: oxytocin reduces fear and hence allows trusting. In a nutshell, the emotional memorization, learning and modulation performed by the amygdala obey the following flowchart:


(from Schumann 1998)

A subpart of the amygdala, the lateral nucleus, processes information about social stimuli (such as facial expressions). Autistic individuals tend to have an impaired lateral nucleus, which makes sense if this nucleus is an important social-cognitive device (autistic subjects perform poorly in tasks that involve mental-state attribution or other social inferences). According to Emery and Amaral (2000), inputs from the visual neocortex enter the amygdala through the lateral nucleus, where their "emotional meaning" is attributed (I know, it is a simplification...); the basal nucleus adds information about the social context. Hence this nucleus acts as a sensory integrator (LeDoux, 2000).

In a new paper in the American Journal of Physical Anthropology, Barger et al. studied the relative size of different nuclei of the amygdala in different primates (humans, chimpanzees, bonobos, gorillas, etc.). The study revealed that the human lateral nucleus represents a larger fraction of the amygdala:

[Figure: relative volumes of amygdaloid nuclei across human and ape brains]

The authors conclude:

The large size of the human L [lateral nuclei] may reflect the proliferation of the temporal lobe over the course of hominid evolution, while the inverse may be true of the gorilla. The smaller size of the orangutan AC [amygdaloid complex] and BLD [Baso-lateral division] may be related to the diminished importance of interconnected limbic structures in this species. Further, there is some evidence that the orangutan, which exhibits one of the smallest group sizes on the continuum of primate sociality, may also be distinguished neuroanatomically from the other great apes, suggesting that social pressures may play a role in the development of the AC in association with other limbic regions.
Living in large groups may thus have shaped the evolution of our brains' emotional processing capacities. In the economy of nature, negotiating our way through a complex social world requires acute and specialized cognitive capacities in order to cooperate, trust, reciprocate, etc. This research shows the potential of evolutionary cognitive neuroscience (see this post).


References


  • Adolphs, R., Tranel, D., & Damasio, A. R. (1998). The Human Amygdala in Social Judgment. Nature, 393, 470-474.
  • Barger, N., Stefanacci, L., & Semendeferi, K. (2007). A Comparative Volumetric Analysis of the Amygdaloid Complex and Basolateral Division in the Human and Ape Brain. American Journal of Physical Anthropology.
  • Emery, N. J., & Amaral, D. G. (2000). The Role of the Amygdala in Primate Social Cognition. In R. D. Lane & L. Nadel (Eds.), Cognitive Neuroscience of Emotion (pp. 156-191). New York: Oxford University Press.
  • LeDoux, J. E. (2000). Emotion Circuits in the Brain. Annual Review of Neuroscience, 23, 155-184.
  • Schumann, J. A. (1998). Language Learning, 48(s1), ix-326.
  • Winslow, J. T., & Insel, T. R. (2004). Neuroendocrine Basis of Social Recognition. Current Opinion in Neurobiology, 14, 248-253.



7/27/07

The moral stance: a brief introduction to the Knobe effect and similar phenomena

An important discovery in the new field of Experimental Philosophy (or "x-phi", i.e., "using experimental methods to figure out what people really think about particular hypothetical cases" -Online dictionary of philosophy of mind) is the importance of moral beliefs in intentional action attribution. Contrast these two cases:

[A] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’ The chairman of the board answered, ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was harmed.
[B] The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.’ The chairman of the board answered, ‘I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was helped.
A and B are identical, except that in one case the program harms the environment and in the other it helps it. Subjects were asked whether the chairman of the board intentionally harmed (A) or helped (B) the environment. Since the two cases have the same belief-desire structure, both actions should be judged equally intentional, whether the side effect is good or bad. It turns out that in the "harm" version, most people (82%) say that the chairman intentionally harmed the environment; in the "help" version, only 23% say that the chairman intentionally helped it. This effect is called the "Knobe effect", because it was discovered by philosopher Joshua Knobe. In a nutshell, it means that

people’s beliefs about the moral status of a behavior have some influence on their intuitions about whether or not the behavior was performed intentionally (Knobe, 2006, p. 207)
Knobe replicated the experiment with different populations and methodologies, and the result is robust (see his 2006 paper for a review). It seems that humans are more prone to think that someone is responsible for an action if its outcome is morally wrong. Contrary to common wisdom, the folk-psychological concept of intentional action does not--or not primarily--aim at explaining and predicting action, but at attributing praise and blame. There is something morally normative in saying that "A does X".
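
To see why the asymmetry is puzzling, here is a toy Python sketch of a pure belief-desire analysis of intentionality; the predicate and its inputs are hypothetical simplifications for illustration, not Knobe's own analysis.

```python
# Toy model: on a bare belief-desire analysis, a side effect is
# intentional if the agent foresaw it and acted anyway. Moral valence
# plays no role, so both vignettes get the same verdict.

def intentional_side_effect(foreseen: bool, acted_anyway: bool) -> bool:
    return foreseen and acted_anyway

for vignette in ("harm", "help"):
    # In both cases the chairman foresees the side effect and proceeds.
    print(vignette, intentional_side_effect(True, True))
# Prints True for both -- yet folk judgments split 82% vs. 23%, so the
# folk concept must track something beyond beliefs and desires, namely
# the moral status of the outcome.
```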

A related post on the x-phi blog by Sven Nyholm describes a similar experiment. The focus was not intention, but happiness. The two versions were:

[A] Richard is a doctor working in a Red Cross field hospital, overseeing and carrying out medical treatment of victims of an ongoing war. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or wounded in the war don’t deserve to die, and that their well-being is of great importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
[B] Richard is a doctor working in a Nazi death camp, overseeing and carrying out executions and nonconsensual, painful medical experiments on human beings. He sometimes gets pleasure from this, but equally often the deaths and human suffering get to him and upset him. However, Richard is convinced that this is an important and crucial thing that he has to do. Richard therefore feels a strong sense of satisfaction and fulfillment when he thinks about what he is doing. He thinks that the people who are being killed or experimented on don’t deserve to live, and that their well-being is of no importance. And so he wants to continue what he is doing even though he sometimes finds it very upsetting.
Subjects were asked whether they agreed or disagreed with the sentence "Richard is happy" (on a scale from 1 = disagree to 7 = agree). Subjects slightly agreed (4.6/7) in the morally good condition (A), whereas they slightly disagreed (3.5/7) in the morally bad condition (B); the difference is statistically significant. Again, the concept of "being happy" is partly moral-normative.

A related phenomenon has been observed in a recently published experimental study of generosity: generous behavior is also influenced by moral-normative beliefs (Fong, 2007). In this experiment, donors had to decide how much of a $10 "pie" they wanted to transfer to a real-life welfare recipient, keeping the rest (thus it is a Dictator game). They read information about the recipients, who had filled out a questionnaire beforehand asking about their age, race, gender, etc. The three recipients had similar profiles, except for their motivation to work, probed by the last three questions:

  • If you don't work full-time, are you looking for more work? ______Yes, I am looking for more work. ______No, I am not looking for more work.
  • If it were up to you, would you like to work full-time? ______Yes, I would like to work full-time. ______No, I would not like to work full-time.
  • During the last five years, have you held one job for more than a one-year period? Yes_____ No_____
One recipient replied Yes to all three ("industrious"), one replied No ("lazy"), and the third did not reply ("low-information"). Donors made their decisions and the money was transferred for real (by the way, that is one thing I like about experimental economics: there is no deception, no as-if: real people receive real money). Results:

The lazy, low-information, and industrious recipients received an average of $1.84, $3.21, and $2.79, respectively. The ant and the grasshopper! ("You sang! I'm at ease / For it's plain at a glance / Now, ma'am, you must dance.") As the author says:

Attitudinal measures of beliefs that the recipient in the experiment was poor because of bad luck rather than laziness – instrumented by both prior beliefs that bad luck rather than laziness causes poverty and randomly provided direct information about the recipient's attachment to the labour force – have large and very robust positive effects on offers

[In another research paper, Prof. Fong also found different biases in giving to Katrina victims.]
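
For readers unfamiliar with the game, here is a minimal Python sketch of the dictator-game structure described above, seeded with the mean transfers reported in this post; variable names are illustrative, not Fong's code or data files.

```python
# Dictator game: the donor unilaterally splits a $10 pie; the
# recipient has no move. Mean transfers are those reported above.

PIE = 10.00

def payoffs(transfer):
    """Donor keeps the remainder; recipient gets the transfer."""
    assert 0 <= transfer <= PIE
    return PIE - transfer, transfer

mean_transfer = {"lazy": 1.84, "low-information": 3.21, "industrious": 2.79}

for profile, t in mean_transfer.items():
    donor_keeps, recipient_gets = payoffs(t)
    print(f"{profile:>15}: donor keeps ${donor_keeps:.2f}, "
          f"recipient gets ${recipient_gets:.2f}")
```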

An interesting--and surprising--finding of this study is that this "ant effect" ("you should deserve help") was stronger in people who scored higher on humanitarianism beliefs: they do not give more than others when recipients are deemed poor because of laziness (another reason not to trust what people say about themselves, and to look at their behavior instead). Again, a strong moral-normative effect on beliefs and behavior. Since oxytocin increases generosity (see this post), and since this effect is due to greater oxytocin-induced empathy, I am curious to see whether people in the lazy vs. industrious experiment would, after oxytocin inhalation, become more sensitive to the origin of poverty (bad luck or laziness). If bad luck inspires more empathy, then I guess yes.

Man the moral animal?

Morality seems to be deeply entrenched in our social-cognitive mechanisms. One way to understand all these results is to posit that we routinely interpret each other from the "moral stance", not the intentional one. The "intentional stance", as every Philosophy of Mind 101 course teaches, is the perspective we adopt when we deal with intentional agents (agents who entertain beliefs and desires): we explain and predict action based on agents' rationality and the mental representations they should have, given the circumstances. In other words, it is the basic toolkit for being a game-theoretic agent. Philosophers (Dennett in particular) contrast this stance with the physical and the design stances (the ones we use when we talk about an apple that falls, or about the function of the "Ctrl" key on a computer).

I think we should introduce a related stance, the moral stance. Maybe--but research will tell--this stance is more basic. We switch to the purely intentional stance when, for instance, we interact with computers in experimental games. Remember that subjects do not care about being cheated by a computer in the Ultimatum Game: they have no aversive feeling (i.e., no insular activation) when the computer makes an unfair offer (see a discussion in this paper by Sardjevéladzé and Machery). Hence they do not use the moral stance, but they still use the intentional stance.

Another possibility is that the moral stance explains why people deviate from standard game-theoretic predictions: all these predictions are based on intentional-stance functionalism. This stance applies more to animals, psychopaths, or machines than to normal human beings. And also to groups: in many games, such as the Ultimatum or the Centipede, groups behave more "rationally" than individuals (Bornstein et al., 2004; Bornstein & Yaniv, 1998; Cox & Hayne, 2006); that is, they are closer to game-theoretic behavior (a point radically developed in the movie The Corporation: firms lack moral qualities). Hence the moral stance may have particular requirements (individuality, emotions, empathy, etc.).





7/26/07

Special issue of the NYAS on biological decision-making

The May issue of the Annals of the New York Academy of Sciences is devoted to Reward and Decision Making in Corticobasal Ganglia Networks. Many big names in decision neuroscience (Berns, Knutson, Delgado, etc.) contributed.


Introduction: Current Trends in Decision Making
Bernard W Balleine, Kenji Doya, John O'Doherty, Masamichi Sakagami

Learning about Multiple Attributes of Reward in Pavlovian Conditioning
Andrew R Delamater, Stephen Oakeshott

Should I Stay or Should I Go? Transformation of Time-Discounted Rewards in Orbitofrontal Cortex and Associated Brain Circuits
Matthew R Roesch, Donna J Calu, Kathryn A Burke, Geoffrey Schoenbaum

Model-Based fMRI and Its Application to Reward Learning and Decision Making
John P O'Doherty, Alan Hampton, Hackjin Kim

Splitting the Difference: How Does the Brain Code Reward Episodes?
Brian Knutson, G. Elliott Wimmer

Reward-Related Responses in the Human Striatum
Mauricio R Delgado

Integration of Cognitive and Motivational Information in the Primate Lateral Prefrontal Cortex
Masamichi Sakagami, Masataka Watanabe

Mechanisms of Reinforcement Learning and Decision Making in the Primate Dorsolateral Prefrontal Cortex
Daeyeol Lee, Hyojung Seo

Resisting the Power of Temptations: The Right Prefrontal Cortex and Self-Control
Daria Knoch, Ernst Fehr

Adding Prediction Risk to the Theory of Reward Learning
Kerstin Preuschoff, Peter Bossaerts

Still at the Choice-Point: Action Selection and Initiation in Instrumental Conditioning
Bernard W Balleine, Sean B Ostlund

Plastic Corticostriatal Circuits for Action Learning: What's Dopamine Got to Do with It?
Rui M Costa

Striatal Contributions to Reward and Decision Making: Making Sense of Regional Variations in a Reiterated Processing Matrix
Jeffery R Wickens, Christopher S Budd, Brian I Hyland, Gordon W Arbuthnott

Multiple Representations of Belief States and Action Values in Corticobasal Ganglia Loops
Kazuyuki Samejima, Kenji Doya

Basal Ganglia Mechanisms of Reward-Oriented Eye Movement
Okihide Hikosaka

Contextual Control of Choice Performance: Behavioral, Neurobiological, and Neurochemical Influences
Josephine E Haddon, Simon Killcross

A "Good Parent" Function of Dopamine: Transient Modulation of Learning and Performance during Early Stages of Training
Jon C Horvitz, Won Yung Choi, Cecile Morvan, Yaniv Eyny, Peter D Balsam

Serotonin and the Evaluation of Future Rewards: Theory, Experiments, and Possible Neural Mechanisms
Nicolas Schweighofer, Saori C Tanaka, Kenji Doya

Receptor Theory and Biological Constraints on Value
Gregory S Berns, C. Monica Capra, Charles Noussair

Reward Prediction Error Computation in the Pedunculopontine Tegmental Nucleus Neurons
Yasushi Kobayashi, Ken-ichi Okada

A Computational Model of Craving and Obsession
A. David Redish, Adam Johnson

Calculating the Cost of Acting in Frontal Cortex
Mark E Walton, Peter H Rudebeck, David M Bannerman, Matthew F. S. Rushworth

Cost, Benefit, Tonic, Phasic: What Do Response Rates Tell Us about Dopamine and Motivation?
Yael Niv