Natural Rationality | decision-making in the economy of nature
Showing posts with label psychology. Show all posts

2/20/08

The Psychology of Moral Reasoning

In the last issue of Judgment and Decision Making (Volume 3, Number 2, February 2008), Monica Bucciarelli, Sangeet Khemlani, and P. N. Johnson-Laird (well known for his research on mental models in reasoning) present an account of the psychology of moral reasoning (pdf - html). It is based on many experiments, gives a large place to deontic reasoning (often neglected in current moral psychology), and contrasts sharply with many sentimentalist accounts of morality (according to which moral judgment is mainly emotional). Their main findings are:

  1. Indefinability of moral propositions: No simple criterion exists to tell from a proposition alone whether or not it concerns morals as opposed to some other deontic matter, such as a convention, a game, or good manners.
  2. Independent systems: Emotions and deontic evaluations are based on independent systems operating in parallel.
  3. Deontic reasoning: All deontic evaluations, including those concerning morality, depend on inferences, either unconscious intuitions or conscious reasoning.
  4. Moral inconsistency: The beliefs that are the basis of moral intuitions and conscious moral reasoning are neither complete nor consistent.



12/12/07

Two New Papers on Natural Rationality

Hardy-Vallée, B. (forthcoming). Decision-Making in the Economy of Nature: Information as Value. In G. Terzis & R. Arp (Eds.), Information and Living Systems: Essays in Philosophy of Biology. Cambridge, MA: MIT Press.

This chapter analyzes and discusses one of the most important uses of information in the biological world: decision-making. I will first present a fundamental principle introduced by Darwin, the idea of an “economy of nature,” by which decision-making can be understood. Following this principle, I then argue that biological decision-making should be construed as goal-oriented, value-based information processing. I propose a value-based account of neural information, where information is primarily economic and relative to goal achievement. If living beings (I focus here on animals) are biological decision-makers, we may expect that their behavior would be consistent with the pursuit of certain goals (either ultimate or instrumental) and that their behavioral control mechanisms would be endowed with goal-directed and valuation mechanisms. These expectations, I argue, are supported by behavioral ecology and decision neuroscience. Together, they provide a rich, biological account of decision-making that should be integrated into a wider concept of ‘natural rationality’.


Hardy-Vallée, B. (submitted). Natural Rationality and the Psychology of Decision: Beyond Bounded and Ecological Rationality

Decision-making is usually a secondary topic in psychology, relegated to the last chapters of textbooks. It pictures decision-making mostly as a deliberative task and rationality as a matter of idealization. This conception also suggests that psychology should either document human failures to comply with rational-choice standards (bounded rationality) or detail how mental mechanisms are ecologically rational (ecological rationality). This conception, I argue, runs into many problems: descriptive (section 2), conceptual (section 3) and normative (section 4). I suggest that psychology and philosophy need another, wider, conception of rationality that goes beyond bounded and ecological rationality (section 5).



11/13/07

Decision-Making in Robotics and Psychology: A Distributed Account

Forthcoming in a special issue of New Ideas in Psychology on Cognitive Robotics & Theoretical Psychology, edited by Tom Ziemke & Mark Bickhard:

Hardy-Vallée, B. (in press). Decision-Making in Robotics and Psychology: A Distributed Account. New Ideas in Psychology.

Decision-making is usually a secondary topic in psychology, relegated to the last chapters of textbooks. The psychological study of decision-making assumes a certain conception of its nature and mechanisms that has been shown wrong by research in robotics. Robotics indicates that decision-making is not—or at least not only—an intellectual task, but also a process of dynamic behavioral control, mediated by embodied and situated sensorimotor interaction. The implications of this conception for psychology are discussed.
[PDF]



10/23/07

Kinds of Philosophical Moral Psychologies

Usually, moral philosophy oscillates between Hume and Kant: emotional utilitarianism and rational deontologism. Hauser, in Moral Minds, adds another perspective, a "Rawlsian" one. I found a nice graphical depiction of these models:




"event perception triggers an analysis of the causal and intentional properties underlying the relevant actions and their consequences. This analysis triggers, in turn, a moral judgment that will, most likely, trigger the systems of emotion and conscious reasoning. The single most important difference between the Rawlsian model and the other two is that emotions and conscious reasoning follow from the moral judgment as opposed to being causally responsible for them."

- Hauser, M. D. (2006). The liver and the moral organ. Soc Cogn Affect Neurosci, 1(3), 214-220.


Also, in "The Case for Nietzschean Moral Psychology", Knobe and Leiter contrast Aristotle, Kant and Nietzsche on moral psychology. It turns out that humans are more Nietzschean than we thought!

Does anyone know of another great philosopher who gave his or her name to a moral psychology?



10/20/07

Gains and Losses in the Brain

A new lesion study suggests a dissociation between the neural processing of gains and losses:

We found that individuals with lesions to the amygdala, an area responsible for processing emotional responses, displayed impaired decision making when considering potential gains, but not when considering potential losses. In contrast, patients with damage to the ventromedial prefrontal cortex, an area responsible for integrating cognitive and emotional information, showed deficits in both domains. We argue that this dissociation provides evidence that adaptive decision making for risks involving potential losses may be more difficult to disrupt than adaptive decision making for risks involving potential gains.


Weller, J. A., Levin, I. P., Shiv, B., & Bechara, A. (2007). Neural Correlates of Adaptive Decision Making for Risky Gains and Losses. Psychological Science, 18(11), 958-964.



10/16/07

[PNAS] Why Sex is Good, and The Evolutionary Psychology of Animate Perception

In this week's PNAS:

A glimpse at the evolutionary psychology of animate perception, by New, Cosmides and Tooby (famous evolutionary psychologists):
Visual attention mechanisms are known to select information to process based on current goals, personal relevance, and lower-level features. Here we present evidence that human visual attention also includes a high-level category-specialized system that monitors animals in an ongoing manner. Exposed to alternations between complex natural scenes and duplicates with a single change (a change-detection paradigm), subjects are substantially faster and more accurate at detecting changes in animals relative to changes in all tested categories of inanimate objects, even vehicles, which they have been trained for years to monitor for sudden life-or-death changes in trajectory. This animate monitoring bias could not be accounted for by differences in lower-level visual characteristics, how interesting the target objects were, experience, or expertise, implicating mechanisms that evolved to direct attention differentially to objects by virtue of their membership in ancestrally important categories, regardless of their current utility.

And the reason why sex makes people feel good: it's all oxytocin! Waldherr and Neumann showed that "sexual activity and mating with a receptive female reduce the level of anxiety and increase risk-taking behavior in male rats for several hours" (!) because "oxytocin is released within the brain of male rats during mating with a receptive female".



10/12/07

A roundup of the most popular posts

According to the stats, the 5 most popular posts on Natural Rationality are:

  1. Strong reciprocity, altruism and egoism
  2. What is Wrong with the Psychology of Decision-Making?
  3. My brain has a politics of its own: neuropolitic musing on values and signal detection
  4. Rational performance and behavioral ecology
  5. Natural Rationality for Newbies

Enjoy!



10/4/07

A distributed conception of decision-making

In a previous post, I suggested that there is something wrong with the standard (“cogitative”) conception of decision-making in psychology. In this post, I would like to outline an alternative conception, what we might call the “distributed conception”.

A close look at robotics suggests that decision-making should not be construed as a deliberative process. Deliberative control (Mataric, 1997), or sense-model-plan-act (SMPA), architectures have been unsuccessful in controlling autonomous robots (Brooks, 1999; Pfeifer & Scheier, 1999). In these architectures (e.g., Nilsson, 1984), “what to do?” was represented as a logical problem. Sensors or cameras represented the perceptible environment, while internal processors converted sensory inputs into first-order predicate calculus. From this explicit model of its environment, the robot’s central planner transformed a symbolic description of the world into a sequence of actions (see Hu & Brady, 1996, for a survey). Decision-making was handled by an expert system or a similar device. Thus the flow of information was one-way only: sensors → model → planner → effectors.
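The one-way SMPA pipeline can be sketched in a few lines of toy Python. Every function and value here is a hypothetical illustration, not any actual robot architecture; the point is only the rigid sensors → model → planner → effectors flow, with a central planner operating on an explicit symbolic model:

```python
# Toy sketch of a sense-model-plan-act (SMPA) control loop.
# All names and values are illustrative, not from a real system.

def sense(world):
    # Sensors produce raw readings of the environment.
    return {"obstacle_ahead": world["obstacle_distance"] < 1.0}

def model(percepts):
    # Build an explicit symbolic world model (a stand-in for
    # first-order predicate calculus facts).
    return [("ObstacleAhead",)] if percepts["obstacle_ahead"] else []

def plan(world_model):
    # A central planner maps the symbolic model to a sequence of actions.
    return ["turn_left", "move_forward"] if world_model else ["move_forward"]

def act(actions):
    # Effectors execute the plan; information flows one way only.
    return actions[0]

world = {"obstacle_distance": 0.5}
print(act(plan(model(sense(world)))))  # prints "turn_left"
```

Nothing flows back from action to sensing here, which is exactly why such architectures break down outside carefully engineered environments.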

SMPA architectures could be effective, but only in environments carefully designed for the robot. The colors, lighting and arrangement of objects were optimally configured to simplify perception and movement. Brooks describes how the rooms where autonomous robots operated were optimally configured:

The walls were of a uniform color and carefully lighted, with dark rubber baseboards, making clear boundaries with the lighter colored floor. (…) The blocks and wedges were painted different colors on different planar surfaces. (….) Blocks and wedges were relatively rare in the environment, eliminating problems due to partial obscurations (Brooks, 1999, p. 62)

Thus the cogitative conception of decision-making, and its SMPA implementations, had to be abandoned. If it did not work for mobile robots, there is good reason to think that the cogitative conception fails for cognitive agents in general. Agents do not make decisions simply by central planning and the manipulation of explicit models, but by coordinating multiple sensorimotor mechanisms. In order to design robots able to imitate people, roboticists build systems that control their behavior through multiple partial models. Mataric's (2002) robots, for instance, learn to imitate by coordinating the following modules:

  1. a selective attention mechanism that extracts salient visual information (another agent's face, for instance)
  2. a sensorimotor mapping system that transforms visual input into motor programs
  3. a repertoire of motor primitives
  4. a classification-based learning mechanism that learns from visuomotor mappings
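How such partial modules coordinate without any central world model can be sketched as follows. The module names, the scene representation and the numeric "motor primitives" are all illustrative assumptions, not Mataric's actual implementation:

```python
# Toy sketch of distributed, behavior-based control: several partial
# modules operate on the same input and their outputs are chained,
# with no central symbolic model. All names and numbers are hypothetical.

def attention(visual_input):
    # Module 1: extract the most salient region (e.g. another agent's face).
    return max(visual_input, key=lambda region: region["salience"])

def sensorimotor_map(region):
    # Module 2: map a visual feature to a motor-primitive label.
    return "reach" if region["kind"] == "hand" else "orient"

# Module 3: a repertoire of motor primitives (toy joint-velocity vectors).
MOTOR_PRIMITIVES = {
    "reach": [0.4, 0.1],
    "orient": [0.0, 0.3],
}

def imitate(visual_input):
    # Coordination: attention -> mapping -> primitive lookup,
    # each module solving only part of the problem.
    salient = attention(visual_input)
    return MOTOR_PRIMITIVES[sensorimotor_map(salient)]

scene = [{"kind": "hand", "salience": 0.9}, {"kind": "cup", "salience": 0.4}]
print(imitate(scene))  # prints [0.4, 0.1]
```

The learning module (4) would adjust the mapping from experience; the essential point is that "the decision" to reach is nowhere computed by a central planner, it emerges from the chain of partial modules.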

Neuroeconomics also suggests another, similar, avenue: there is no brain area, circuit or mechanism specialized in decision-making, but rather a collection of neural modules. Certain areas specialize in visual-saccadic decision-making (Platt & Glimcher, 1999). Social neuroeconomics indicates that decisions in experimental games are mainly affective computations: choice behavior in these games is reliably correlated with neural activations of social emotions such as the ‘warm glow’ of cooperation (Rilling et al., 2002), the ‘sweet taste’ of revenge (de Quervain et al., 2004) or the ‘moral disgust’ of unfairness (Sanfey et al., 2003). Subjects without affective experiences or affective anticipations are unable to make rational decisions, as Damasio and his colleagues discovered. Damasio found that subjects with lesions in the ventromedial prefrontal cortex (vmPFC, a brain area above the eye sockets) had huge problems coping with everyday tasks (Damasio, 1994). They were unable to plan meetings; they lost their money, family or social status. They were, however, completely functional in reasoning or problem-solving tasks. Moreover, Damasio and his collaborators found that these subjects had lower affective reactions. They did not feel sad about their situation, even if they perfectly understood what “sad” means, and seemed unable to learn from bad experiences. The researchers concluded that these subjects were unable to use emotions to aid decision-making, a hypothesis that also implies that in normal subjects, emotions do aid decision-making.

Consequently, the “Distributed Conception of Decision-Making” suggests that decision-making is:

Sensorimotor: the mechanisms for decision-making are not only, and not necessarily, intellectual, high-level and explicit. Decision-making is the whole organism’s sensorimotor control.
Situated: a decision is not only a step-by-step internal computation, but also a continuous and dynamic adjustment between the agent and its environment that develops over the whole lifespan. Decision-making is always physically and (most of the time) socially situated: ecological situatedness is both a constraint on decision-making and a set of informational resources that helps agents cope with it.
Psychology should do more than document our inability to follow Bayesian reasoning in paper-and-pen experiments; it should study our situated sensorimotor control capacities. Decision-making should not be a secondary topic for psychology but, following Gintis, “the central organizing principle of psychology” (Gintis, 2007, p. 1). Decision-making is more than an activity we consciously engage in occasionally: it is the very condition of existence (as Herrnstein said, “all behaviour is choice” (Herrnstein, 1961)).

Therefore, deciding should not be studied as a separate topic (e.g., perception), an occasional activity (e.g., chess-playing) or a high-level competence (e.g., logical inference), but as a problem of robotic control. A complete, explicit model of the environment, manipulated by a central planner, is not useful for robots. New Robotics (Brooks, 1999) revealed that effective and efficient decision-making is achieved through multiple partial models updated in real time. There is no need to integrate models into a unified representation or a common code: distributed architectures, where many processes run in parallel, achieve better results. As Barsalou et al. (2007) argue, cognition is coordinated non-cognition; similarly, decision-making is coordinated non-decision-making.

If decision-making is the central organizing principle of psychology, all the branches of psychology could be understood as research fields that investigate different aspects of decision-making. Abnormal psychology explains how deficient mechanisms impair decision-making. Behavioral psychology focuses on choice behavior and behavioral regularities. Cognitive psychology describes the mechanisms of valuation, goal representation and preferences, and how they contribute to decision-making. Comparative psychology analyzes the variations in neural, behavioral and cognitive processes among different clades. Developmental psychology establishes the evolution of decision-making mechanisms over the lifespan. Neuropsychology identifies the neural substrates of these mechanisms. Personality psychology explains interindividual variations in decision-making, our various decision-making “profiles”. Social psychology can shed light on social decision-making, that is, either collective decision-making (when groups or institutions make decisions) or individual decision-making in social contexts. Finally, we could also add environmental psychology (how agents use their environment to simplify their decisions) and evolutionary psychology (how decision-making mechanisms are, or are not, adaptations).



References
  • Barsalou, Breazeal, & Smith. (2007). Cognition as coordinated non-cognition. Cognitive Processing, 8(2), 79-91.
  • Brooks, R. A. (1999). Cambrian Intelligence : The Early History of the New Ai. Cambridge, Mass.: MIT Press.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary : Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Damasio, A. R. (1994). Descartes' Error : Emotion, Reason, and the Human Brain. New York: Putnam.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., & Fehr, E. (2004). The Neural Basis of Altruistic Punishment. Science, 305(5688), 1254-1258.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Herrnstein, R. J. (1961). Relative and Absolute Strength of Response as a Function of Frequency of Reinforcement. J Exp Anal Behav., 4(4), 267–272.
  • Hu, H., & Brady, M. (1996). A Parallel Processing Architecture for Sensor-Based Control of Intelligent Mobile Robots. Robotics and Autonomous Systems, 17(4), 235-257.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Mataric, M. J. (1997). Behaviour-Based Control: Examples from Navigation, Learning, and Group Behaviour. Journal of Experimental & Theoretical Artificial Intelligence, 9(2 - 3), 323-336.
  • Mataric, M. J. (2002). Sensory-Motor Primitives as a Basis for Imitation: Linking Perception to Action and Biology to Robotics. Imitation in Animals and Artifacts, 391–422.
  • Nilsson, N. J. (1984). Shakey the Robot: SRI International.
  • Pfeifer, R., & Scheier, C. (1999). Understanding Intelligence. Cambridge, Mass.: MIT Press.
  • Platt, M. L., & Glimcher, P. W. (1999). Neural Correlates of Decision Variables in Parietal Cortex. Nature, 400(6741), 238.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300(5626), 1755-1758.



10/3/07

What is Wrong with the Psychology of Decision-Making?

In psychology textbooks, decision-making figures most of the time in the last chapters. These chapters usually acknowledge the failure of the Homo economicus model and propose to understand human irrationality as the product of heuristics and biases, that may be rational under certain environmental conditions. In a recent article, H.A. Gintis documents this neglect:

(…) a widely used text of graduate-level readings in cognitive psychology, (Sternberg & Wagner, 1999) devotes the ninth of eleven chapters to "Reasoning, Judgment, and Decision Making," offering two papers, the first of which shows that human subjects generally fail simple logical inference tasks, and the second shows that human subjects are irrationally swayed by the way a problem is verbally "framed" by the experimenter. A leading undergraduate cognitive psychology text (Goldstein, 2005) placed "Reasoning and Decision Making" the last of twelve chapters. This includes one paragraph describing the rational actor model, followed by many pages purporting to explain why it is wrong. (…) in a leading behavioral psychology text (Mazur, 2002), choice is covered in the last of fourteen chapters, and is limited to a review of the literature on choice between concurrent reinforcement schedules and the capacity to defer gratification (Gintis, 2007, pp. 1-2)
Why? The standard conception of decision-making in psychology can be summarized by two claims, one conceptual, one empirical. Conceptually, the standard conception holds that decision-making is a separate topic: it is one of the subjects that psychologists may study, together with categorization, inference, perception, emotion, personality, etc. As Gintis showed, decision-making has its own chapters (usually the last ones) in psychology textbooks. On the empirical side, the standard conception construes decision-making as an explicit deliberative process, such as reasoning. For instance, in a special issue of Cognition on decision-making (volume 49, issues 1-2, pages 1-187), one finds the following claims:

Reasoning and decision making are high-level cognitive skills […]
(Johnson-Laird & Shafir, 1993, p. 1)

Decisions . . . are often reached by focusing on reasons that justify the selection of one option over another

(Shafir et al., 1993, p. 34)

Hence decision-making is studied mostly through multiple-choice tests using the traditional paper-and-pen method, which clearly suggests that deciding is considered an explicit process. Psychological research thus assumes that subjects’ competence in probabilistic reasoning, as revealed by these tests, is a good description of their decision-making capacities.

These two claims, however, are not unrelated: if decision-making is a central, high-level faculty that stands between perception and action, then it can be studied in isolation. They constitute a coherent whole, something philosophers of science would call a paradigm. This paradigm is built around a particular view of decision-making (and, more generally, cognition) that could be called “cogitative”:

Perception is commonly cast as a process by which we receive information from the world. Cognition then comprises intelligent processes defined over some inner rendition of such information. Intentional action is glossed as the carrying out of commands that constitute the output of a cogitative, central system. (Clark, 1997, p. 51)


In another post, I'll present an alternative to the Cogitative conception, based on research in neuroeconomics, robotics and biology.

You can find Gintis's article on his page, together with other great papers.

References

  • Clark, A. (1997). Being There : Putting Brain, Body, and World Together Again. Cambridge, Mass.: MIT Press.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Goldstein, E. B. (2005). Cognitive Psychology : Connecting Mind, Research, and Everyday Experience. Australia Belmont, CA: Thomson/Wadsworth.
  • Johnson-Laird, P. N., & Shafir, E. (1993). The Interaction between Reasoning and Decision Making: An Introduction. Cognition, 49(1-2), 1-9.
  • Mazur, J. E. (2002). Learning and Behavior (5th ed.). Upper Saddle River, N.J.: Prentice Hall.
  • Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-Based Choice. Cognition, 49(1-2), 11-36.
  • Sternberg, R. J., & Wagner, R. K. (1999). Readings in Cognitive Psychology. Fort Worth, TX: Harcourt Brace College Publishers.



9/24/07

Neuroeconomics in the Annual Review of Psychology

A great team of (neuro)economists and psychologists, George Loewenstein, Scott Rick and Jonathan Cohen, wrote an extensive review paper about neuroeconomics for the 2008 Annual Review of Psychology. It presents all the important research papers, discusses how neuroeconomics sheds light, from a psychological and economic point of view, on decision-making under risk and uncertainty, intertemporal choice, and social decision-making, and, finally, shows how this research can contribute to psychology. It is a highly recommended paper. One important missing topic, however, is the neuroeconomics literature on hormones and behavior, such as the research from Paul Zak's lab, for instance the study showing that oxytocin increases trust (and generosity; see this post): players transfer more money in the trust game after inhaling oxytocin (a hormone involved in social cognition, fear reduction, bonding, love, etc.). Anyway, here is a short summary (from the paper):

  1. Neuroeconomics has further bridged the once disparate fields of economics and psychology, largely due to movement within economics. Change has occurred within economics because the most important findings in neuroeconomics have posed a challenge to the standard economic perspective.
  2. Neuroeconomics has primarily challenged the standard economic assumption that decision making is a unitary process—a simple matter of integrated and coherent utility maximization—suggesting instead that it is driven by the interaction between automatic and controlled processes.
  3. Neuroeconomic research has focused most intensely on decision making under risk and uncertainty, but this line of research provides only mixed support for a dual systems perspective.
  4. The extent to which intertemporal choice is generated by multiple systems with conflicting priorities is perhaps the most hotly debated issue within neuroeconomics. However, a majority of the evidence favors a multiple systems perspective.
  5. Neuroeconomic research on social preferences is highly supportive of a dual systems account, although the most prominent studies come to conflicting conclusions regarding how self-interest and fairness concerns interact to influence behavior.
  6. Neuroeconomics may ultimately influence psychology indirectly, via its influence on economics (e.g., by inspiring economic models increasingly grounded in psychological reality), and directly, by addressing debates of interest within psychology (e.g., whether multiple systems operate sequentially or in parallel to influence behavior).

References
  • Loewenstein, G., Rick, S., & Cohen, J. (2008). Neuroeconomics. Annual Review of Psychology, 59(1). (published online as a Review in Advance on September 17, 2007)



9/21/07

Neuroeconomics, folk-psychology, and eliminativism



conventional wisdom has long modeled our internal cognitive processes, quite wrongly, as just an inner version of the public arguments and justifications that we learn, as children, to construct and evaluate in the social space of the dinner table and the marketplace. Those social activities are of vital importance to our collective commerce, both social and intellectual, but they are an evolutionary novelty, unreflected in the brain’s basic modes of decision-making
(Churchland, 2006, p. 31).


The folk-psychological model of rationality construes rational decision-making as the product of practical reasoning by which an agent infers, from her beliefs and desires, the right action to perform. True, when we are asked to explain or predict actions, our intuitions lead us to describe them as the product of intentional states. In a series of studies, Malle and Knobe (1997, 2001) showed that folk psychology is a language game where beliefs, desires and intentions are the main players. But using the intentional idiom does not mean that it picks out the real causes of action. This is where realist, instrumentalist and eliminativist accounts conflict. A realist account of beliefs and desires takes them to be real causal entities, an instrumentalist account treats them as useful fictions, while an eliminativist account suggests that they are embedded in a faulty theory of mental functioning that should be eliminated (see Churchland & Churchland, 1998; Dennett, 1987; Fodor, 1981). Can neuroeconomics shed light on this traditional debate in philosophy and cognitive science?

Neuroeconomics, I suggest, supports an eliminativist approach to cognition. Just as contemporary chemistry does not explain combustion by the release of phlogiston (a substance once supposed to exist in combustible bodies), cognitive science should stop explaining actions as the product of beliefs and desires. Behavioral regularities and neural mechanisms are sufficient to explain decisions. When subjects evaluate whether or not they would buy a product, and whether or not the price seems justified, how informative is it to cite propositional attitudes as causes? The real entities involved in decision-making are neural mechanisms involved in hedonic feelings, cognitive control, emotional modulation, conflict monitoring, planning, etc. Preferences, utility functions or practical reasoning, for instance, can explain purchasing, but they do not posit entities that can enter the “causal nexus” (Salmon, 1984). Neuroeconomics explains purchasing behavior not as an inference from beliefs and desires to action, but as a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of purchasing (elicited in the insula). Prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase (Knutson et al., 2007). Hence the explanation of purchasing cites causes (brain areas) that explain the purchasing behavior as the product of a higher activation in prefrontal areas and that justify the decision to purchase: the agents had a stronger incentive to buy. A fully mechanistic account would, of course, detail the algorithmic process performed by each area.
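The tradeoff reported by Knutson et al. (2007) can be caricatured as a simple comparison of two signals. This is a toy numerical sketch under loud assumptions: the linear rule, the variable names and the numbers are mine, not the authors' fitted model; the code only illustrates the logic of "buy when appetitive activation outweighs the pain of paying":

```python
# Toy sketch of the purchase tradeoff described in the text:
# appetitive activation (nucleus accumbens) vs. aversive activation
# (insula), integrated by a simple comparison standing in for the
# prefrontal stage. All values and the linear rule are illustrative.

def purchase_decision(product_appeal, price_pain, bias=0.0):
    # Buy when net anticipated value is positive.
    net_value = product_appeal - price_pain + bias
    return net_value > 0

# A desirable product at a fair price: appeal dominates.
print(purchase_decision(product_appeal=0.8, price_pain=0.3))  # True

# The same product at an excessive price: the "pain of paying" wins.
print(purchase_decision(product_appeal=0.8, price_pain=1.2))  # False
```

Note that nothing in this sketch mentions beliefs or desires: the explanation runs entirely on the relative strength of two valuation signals, which is the eliminativist point being made.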
The belief-desire framework implicitly supposes that the causes of an action are those that an agent would verbally express when asked to justify her action. But on what grounds can this be justified?

Psychological and neural studies suggest rather a dissociation between the mechanisms that lead to actions and the mechanisms by which we explain them. Since Nisbett and Wilson's (1977) seminal studies, research in psychology has shown that the very act of explaining the intentional causes of our actions is a re-constructive process that can be faulty. Subjects give numerous reasons as to why they prefer one pair of socks (or other objects) to another, but they all prefer the last one on the right. The real explanation of their preferences is a position effect, or right-hand bias. For some reason, subjects pick the right-hand pair and, post hoc, generate an explanation for this preference, a phenomenon widely observed. For instance, when subjects tasted samples of Pepsi and Coke with and without the brand's label, they reported different preferences (McClure et al., 2004). Without labels, subjects evaluated both drinks similarly. When drinks were labeled, subjects reported a stronger preference for Coke, and neuroimaging mirrored this branding effect. Sensory information (taste) and cultural information (brand) are associated with different areas that interact so as to bias preferences. Without the label, the drink's evaluation relies solely on sensory information. Subjects may motivate their preference for one beverage over another with many diverse arguments, but the real influence on their preference is the brand's label. The conscious narratives we produce when rationalizing our actions are not "direct pipeline[s] to nonconscious mental processes" (Wilson & Dunn, 2004, p. 507) but approximate reconstructions. When our thoughts occur before the action, are consistent with the action and appear to be its only cause, we infer that these thoughts are the causes of the action, and rule out other internal or external causes (Wegner, 2002).
But the fact that we rely on the belief-desire framework to explain our own and others' actions as the product of intentional states does not constitute an argument for considering these states satisfying causal explanations of action.

The belief-desire framework might be a useful conceptual scheme for fast and frugal explanations, but that does not make folk-psychological constructs suitable for scientific explanation. In the same vein, if folk biology were the sole foundation of biology, whales would still be categorized as fish. The nature of the biological world is not explained by our (faulty and biased) folk biology, but by making explicit the mechanisms of natural selection, reproduction, cellular growth, etc. There is no reason to believe that our folk psychology is a better description of mental mechanisms. Beliefs, desires and intentions are folk-psychological constructs that have no counterpart in neuroscience. Motor control and action planning, for instance, are explained by different kinds of representations, such as forward and inverse models, not propositional attitudes (Kawato & Wolpert, 1998; Wolpert & Kawato, 1998). Consequently, the fact that we rely on folk psychology to explain actions does not constitute an argument for considering that this naïve theory provides reliable explanations of actions. Saying that the sun rises every morning is a good prediction, and it could explain why there is more heat and light at noon, but the effectiveness of the sun-rising framework does not justify its use as a scientific theory.

As many philosophers of science have suggested, a genuine explanation is mechanistic: it consists in breaking a system into parts and processes, and explaining how these parts and processes cause the system to behave the way it does (Bechtel & Abrahamsen, 2005; Craver, 2001; Machamer et al., 2000). Folk psychology may save the phenomena, but it does not propose causal parts and processes. More generally, the problem with the belief-desire framework is that it is a description of our attitude toward the things we call "agents," not a description of what constitutes the true nature of agents. It thus conflates the map and the territory. Moreover, conceptual advances are made when objects are described and classified according to their objective properties. A chemical theory that classified elements according to their propensity to quench thirst would be nonsense (although it could be useful in other contexts). At best, the belief-desire framework could be considered an Everyday Handbook of Intentional Language.

References

  • Bechtel, W., & Abrahamsen, A. (2005). Explanation: A Mechanist Alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421-441.
  • Churchland, P. M. (2006). Into the Brain: Where Philosophy Should Go from Here. Topoi, 25(1), 29-32.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary : Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Craver, C. F. (2001). Role Functions, Mechanisms, and Hierarchy. Philosophy of Science, 68, 53-74.
  • Dennett, D. C. (1987). The Intentional Stance. Cambridge, Mass.: MIT Press.
  • Fodor, J. A. (1981). Representations : Philosophical Essays on the Foundations of Cognitive Science (1st MIT Press ed.). Cambridge, Mass.: MIT Press.
  • Kawato, M., & Wolpert, D. M. (1998). Internal Models for Motor Control. Novartis Found Symp, 218, 291-304; discussion 304-297.
  • Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
  • Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking About Mechanisms. Philosophy of Science, 67, 1-24.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Malle, B. F., & Knobe, J. (2001). The Distinction between Desire and Intention: A Folk-Conceptual Analysis. In B. F. M. L. J. Moses & D. A. Baldwin (Eds.), Intentions and Intentionality: Foundations of Social Cognition (pp. 45-67). Cambridge, MA: MIT Press.
  • McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural Correlates of Behavioral Preference for Culturally Familiar Drinks. Neuron, 44(2), 379-387.
  • Nisbett, R. E., & Wilson, T. D. (1977). Telling More Than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84, 231-259.
  • Salmon, W. C. (1984). Scientific Explanation and the Causal Structure of the World. Princeton, N.J.: Princeton University Press.
  • Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, Mass.: MIT Press.
  • Wilson, T. D., & Dunn, E. W. (2004). Self-Knowledge: Its Limits, Value, and Potential for Improvement. Annual Review of Psychology, 55(1), 493-518.
  • Wolpert, D. M., & Kawato, M. (1998). Multiple Paired Forward and Inverse Models for Motor Control. Neural Networks, 11(7-8), 1317.



8/9/07

“Note that he relies on you”; how a single sentence enhances altruism in a Dictator game

In a recent study, experimental economist Pablo Branas-Garza showed that a single sentence is enough to promote fairness. He conducted two experiments, one in a classroom, the other a standard laboratory economic experiment, in which subjects had to play a Dictator Game. In each experiment, there was a baseline condition (subjects were presented with a description of the game) and a framing condition, in which a sentence was added at the end of the text: "Note that your opponent relies on you". Results: adding that sentence increased donations. As fig. 1 shows, the framing boosts altruism and reduces selfishness: low offers are much rarer.




fig. 1, from Branas-Garza, 2007.



What is surprising is not that subjects are sensitive to certain moral-social cues, but that such a simple cue (seven words) is sufficient. The more we know about each other, the less selfish we are.




7/23/07

The selective impairment of prosocial sentiments and the moral brain


Philosophers often describe the history of philosophy as a dispute between Plato (read: idealism/rationalism) and Aristotle (read: materialism/empiricism). This is of course extremely reductionist, since many conceptual and empirical issues were not addressed in Ancient Greece, but there is a non-trivial interpretation of the history of thought according to which controversies often involve these two positions. In moral philosophy and moral psychology, however, the big figures are Hume and Kant. Is morality based on passions (Hume) or reasons (Kant)? This is another simplification, but again it frames the debate. In the last issue of Trends in Cognitive Sciences (TICS), three papers discuss the reason/emotion debate and provide more acute models.

Recently (see this previous post), Koenigs and collaborators (2007b) explored the consequences of ventromedial prefrontal cortex (VMPC) lesions for moral reasoning and showed that these patients tend to rely a little more on a 'utilitarian' scheme (cost/benefit) and less on a deontological scheme (moral do's and don'ts), thus suggesting that emotions are involved in deontological moral judgment. These patients, however, were also more emotional in the Ultimatum game, and rejected more offers than normal subjects. So are they emotional or not? In the first TICS paper, Moll and de Oliveira-Souza review the Koenigs et al. (2007a) experiment and argue that neither somatic markers nor dual-process theory explains these findings. They propose that a selective impairment of prosocial sentiments explains why the same patients are both less emotional in moral dilemmas and more emotional in economic bargaining: they feel less compassion but still feel anger. In a second paper, Greene (author of the research on the trolley problems; see his homepage) challenges this interpretation and puts forward his dual-process view (reason-emotion interaction). Moll and de Oliveira-Souza reply in the third paper. As you can see, there is still a debate between Kant and Hume, but cognitive neuroscience provides new tools for both sides of the debate, and maybe even a blurring of these opposites.





7/18/07

Altruism: a research program

Phoebe: I just found a selfless good deed; I went to the park and let a bee sting me.
Joey: How is that a good deed?
Phoebe: Because now the bee gets to look tough in front of his bee friends. The bee is happy and I am not.
Joey: Now you know the bee probably died when he stung you?
Phoebe: Dammit!
- [From Friends, episode 101]
Altruism is a lively research topic. The evolutionary foundations, neural substrates, psychological mechanisms, behavioral manifestations, formal modeling and philosophical analyses of cooperation constitute a coherent—although not unified—field of inquiry. See for instance how neuroscience, game theory, economics, philosophy, psychology and evolutionary theory interact in Penner et al. 2005; Hauser 2006; Fehr and Fischbacher 2002; Fehr and Fischbacher 2003. The study of prosocial behavior, from kin selection to animal cooperation to human morality, can be considered a progressive Lakatosian research program. Altruism has great conceptual "sex-appeal" because it is a mystery for two types of theoreticians: biologists and economists. Both wonder why an animal or an economic agent would help another: since these agents maximize fitness/utility, altruistic behavior is suboptimal. Altruism (help, trust, fairness, etc.) seems intuitively incoherent with economic rationality and biological adaptation, with markets and natural selection. Or is it?

In the 1960s, biologists challenged the idea that natural selection is incompatible with altruism. Hamilton (1964a, 1964b) and Trivers (1971) showed that biological altruism makes sense. An animal X might behave altruistically toward another animal Y because they are genetically related: in doing so, X maximizes the copying of its genes, since many of them are hosted in Y. Thus the more X and Y are genetically related, the more X will be ready to help Y. This is kin altruism. Altruism can also be reciprocal: scratch my back and I'll scratch yours. Tit-for-tat, or reciprocal altruism, also makes sense because by being altruistic one may augment one's payoff: X helps Y, but next time Y will help X; thus it is better to help than not to help. In both cases, the idea is that altruism is a means, not an end. Others argue that more complex types of altruism exist. For instance, X can help Y because Y already helped Z (indirect reciprocity). In this case, the tit-for-tat logic is extended to agents that the helper did not meet in the past. Generalized reciprocity (see this previous post) is another type of altruism: helping someone because someone helped you in the past. This altruism does not require memory or personal identification: X helps someone because someone else helped X. Finally, strong reciprocity is the idea that humans display genuine altruism: strong reciprocators cooperate with cooperators, do not cooperate with cheaters, and are ready to punish cheaters even at a cost to themselves. Its proponents argue that it evolved through group selection.
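The logic of kin altruism is often summarized by Hamilton's rule: helping is favored when r × b > c, where r is the genetic relatedness between helper and recipient, b the fitness benefit to the recipient, and c the fitness cost to the helper. A minimal sketch (the function name and the numbers are illustrative, not from any of the cited papers):

```python
def hamilton_favors_helping(r, b, c):
    """Hamilton's rule: altruism toward kin is favored by selection
    when r * b > c, where r is genetic relatedness, b the benefit to
    the recipient, and c the cost to the helper (in fitness units)."""
    return r * b > c

# Full siblings share r = 0.5: helping pays only if the benefit
# is more than twice the cost.
print(hamilton_favors_helping(0.5, 3.0, 1.0))    # True
# First cousins share r = 0.125: the same act is no longer favored.
print(hamilton_favors_helping(0.125, 3.0, 1.0))  # False
```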

Experimental economics and neuroeconomics also challenged the idea of the rational, greedy, selfish actor (the Ayn Rand hero). Experimental game theory showed that, contrary to orthodox game theory, subjects cooperate massively in the prisoner's dilemma (Ledyard, 1995; Sally, 1995). Rilling et al. showed that players enjoy cooperating: players who initiate and players who experience mutual cooperation display activation in the nucleus accumbens and other reward-related areas such as the caudate nucleus, ventromedial frontal/orbitofrontal cortex, and rostral anterior cingulate cortex (Rilling et al., 2002). In another experiment, the presentation of faces of intentional cooperators caused increased activity in reward-related areas (Singer et al., 2004). In the ultimatum game, proposers make 'fair' offers of about 50% of the amount; responders tend to accept these offers and reject most 'unfair' offers (less than 20%; Oosterbeek et al., 2004). Brain scans of people playing the ultimatum game indicate that unfair offers trigger, in the responder's brain, a 'moral disgust': the anterior insula (associated with negative emotional states like disgust or anger) is more active when unfair offers are proposed (Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003). Subjects experience this affective reaction to unfairness only when the proposer is a human being: the activation is significantly lower when the proposer is a computer. Moreover, anterior insula activation is proportional to the degree of unfairness and correlated with the decision to reject unfair offers (Sanfey et al., 2003: 1756). Fehr and Fischbacher (2002) suggested that economic agents are inequity-averse and have prosocial preferences; they thus modified the standard utility functions to account for behavioral (and now neural) data. In Moral Markets: The Critical Role of Values in the Economy, Paul Zak proposes a radically different conception of morality in economics:

The research reported in this book revealed that most economic exchange, whether with a stranger or a known individual, relies on character values such as honesty, trust, reliability, and fairness. Such values, we argue, arise in the normal course of human interactions, without overt enforcement—lawyers, judges or the police are present in a paucity of economic transactions (...). Markets are moral in two senses. Moral behavior is necessary for exchange in moderately regulated markets, for example, to reduce cheating without exorbitant transactions costs. In addition, market exchange itself can lead to an understanding of fair-play that can build social capital in nonmarket settings. (Zak, forthcoming)

See how this claim is similar to :

The two fundamental principles of evolution are mutation and natural selection. But evolution is constructive because of cooperation. New levels of organization evolve when the competing units on the lower level begin to cooperate. Cooperation allows specialization and thereby promotes biological diversity. Cooperation is the secret behind the open-endedness of the evolutionary process. Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add "natural cooperation" as a third fundamental principle of evolution beside mutation and natural selection.
(Nowak, 2006)

Hence, biological and economic theorizing followed a similar path: both started with the assumption that agents value only their own payoff; evidence then suggested that agents behave altruistically; and, finally, theoretical models were amended and now incorporate different kinds of reciprocity.
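One well-known amendment of this kind is Fehr and Schmidt's (1999) inequity-aversion utility function, a close relative of the social-preference models alluded to above: an agent's utility is her own payoff minus a penalty for earning less than the other player (envy) and a smaller penalty for earning more (guilt). A minimal two-player sketch (the parameter values are purely illustrative):

```python
def fehr_schmidt_utility(own, other, alpha=0.5, beta=0.25):
    """Two-player inequity-averse utility in the style of Fehr & Schmidt
    (1999): own payoff, minus alpha times disadvantageous inequality
    (envy) and beta times advantageous inequality (guilt)."""
    envy = alpha * max(other - own, 0)
    guilt = beta * max(own - other, 0)
    return own - envy - guilt

# A responder facing an 8/2 split of $10: accepting yields
# 2 - 0.5 * (8 - 2) = -1, while rejecting yields 0, so a
# sufficiently inequity-averse responder rejects the 'unfair' offer.
print(fehr_schmidt_utility(2, 8))   # -1.0
print(fehr_schmidt_utility(5, 5))   # 5.0 (equal split, no penalty)
```

Such a function reproduces the ultimatum-game rejections described above while leaving behavior in fair splits untouched.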

So is it good news? Are we genuinely altruistic? First, a clarification: there is a difference between biological and psychological altruism, and the former does not entail the latter; biological altruism is about fitness consequences (survival and reproduction), while psychological altruism is about motivations and intentions:

Where human behaviour is concerned, the distinction between biological altruism, defined in terms of fitness consequences, and ‘real’ altruism, defined in terms of the agent's conscious intentions to help others, does make sense. (Sometimes the label ‘psychological altruism’ is used instead of ‘real’ altruism.) What is the relationship between these two concepts? They appear to be independent in both directions (...). An action performed with the conscious intention of helping another human being may not affect their biological fitness at all, so would not count as altruistic in the biological sense. Conversely, an action undertaken for purely self-interested reasons, i.e. without the conscious intention of helping another, may boost their biological fitness tremendously (Biological Altruism, Stanford Encyclopedia of Philosophy; see also a forthcoming paper by Stephen Stich and the classic Sober & Wilson 1998).

The interesting question, for many researchers, is then: what is the link between biological and psychological altruism? A common view suggests that non-human animals are biological altruists, while humans are also psychological altruists. I would like to argue against this sharp divide and briefly suggest three things:
  1. Non-humans also display psychological altruism
  2. Human altruism is strongly influenced by biological motives
  3. Prosocial behavior in human and non-human animals should be understood as a single phenomenon: cooperation in the economy of nature

1. Non-humans also display psychological altruism


As discussed in a previous post, a recent research paper showed that rats exhibit generalized reciprocity: rats who had previously been helped were more likely (by 20%) to help an unknown partner than rats who had not been helped. Although the authors of the paper take a more prudent stance, I consider generalized reciprocity a form of psychological altruism (remember, it can be both): rats cooperate because they "feel good," and that feeling is induced by cooperation, not by a particular agent. Hence their brains value cooperation in itself (probably thanks to hormonal mechanisms similar to ours), even if there is no direct tit-for-tat. In the same edition of PLoS Biology, primatologist Frans de Waal (2007) also argues that animals show signs of psychological altruism; this is particularly clear in an experiment (Warneken et al., again in the same journal) showing that chimpanzees are ready to help unknown humans and conspecifics (hence ruling out kin and tit-for-tat altruism), even at a cost to themselves. Here is the description of the experiments:

In the first experiment, the chimpanzee saw a person unsuccessfully reach through the bars for a stick on the other side, too far away for the person, but within reach of the ape. The chimpanzees spontaneously helped the reaching person regardless of whether this yielded a reward, or not. A similar experiment with 18-month-old children gave exactly the same outcome. Obviously, both apes and young children are willing to help, especially when they see someone struggling to reach a goal. The second experiment increased the cost of helping. The chimpanzees were still willing to help, however, even though now they had to climb up a couple of meters, and the children still helped even after obstacles had been put in their way. Rewards had been eliminated altogether this time, but this hardly seemed to matter. One could, of course, argue that chimpanzees living in a sanctuary help humans because they depend on them for food and shelter. How familiar they are with the person in question may be secondary if they simply have learned to be nice to the bipedal species that takes care of them. The third and final experiment therefore tested the apes' willingness to help each other, which, from an evolutionary perspective, is also the only situation that matters. The set-up was slightly more complex. One chimpanzee, the Observer, would watch another, its Partner, try to enter a closed room with food. The only way for the Partner to enter this room would be if a chain blocking the door were removed. This chain was beyond the Partner's control—only the Observer could untie it. Admittedly, the outcome of this particular experiment surprised even me—and I am probably the biggest believer in primate empathy and altruism. I would not have been sure what to predict given that all of the food would go to the Partner, thus creating potential envy in the Observer. 
Yet, the results were unequivocal: Observers removed the peg holding the chain, thus yielding their Partner access to the room with food (de Waal)
(image from Warneken et al video)

2. Human altruism is strongly influenced by biological motives

In many cases, human altruism appears as a complex version of biological altruism (see Burnham & Johnson, 2005, The Biological and Evolutionary Logic of Human Cooperation, for a review). For instance, Madsen et al. (2007) showed that humans behave more altruistically toward their own kin when there is a significant genuine cost (such as muscular pain), an attitude also mirrored in questionnaire studies (Stewart-Williams, 2007): when the cost of helping increases, subjects are more willing to help siblings than friends. Other studies showed that facial similarity enhances trust (DeBruine, 2002). In each case, there is a mechanism whose function is to negotiate personal investments in relationships so as to promote the copying of genes housed in people who are—or seem to be—our kin.

Many of these so-called altruistic behaviors can be explained by the operation of hyperactive agency detectors and a bias toward fearing other people's judgment. When people are not being watched, or do not feel watched, they behave less altruistically. Many studies show that in the dictator game, a version of the ultimatum game in which the responder has to accept the offer, subjects always make lower offers than in the ultimatum game (Bolton, Katok, and Zwick 1998). Offers are even lower in the dictator game when donation is fully anonymous (Hoffman et al. 1994). When subjects feel watched, or think of agents, even supernatural ones, they tend to be much more altruistic. When a pair of eyes is displayed on a computer screen, almost twice as many participants transfer money in the dictator game (Haley and Fessler 2005), and people contribute three times more to an honesty box for coffee when a pair of eyes is displayed than when a picture of flowers is (Bateson, Nettle, and Roberts 2006). The mere mention of ghosts enhances honest behavior in a competitive task (Bering, McLeod, and Shackelford 2005), and priming subjects with the God concept increases offers in the anonymous dictator game (Shariff and Norenzayan, in press).

These reflections also apply to altruistic punishment. First, it is enhanced by an audience: Kurzban, DeScioli, and O'Brien (2007) showed that with an audience of about a dozen participants, punishment expenditure tripled. Again, apparent altruism is instrumental to personal satisfaction. Other research suggests that altruism is also an advantage in sexual selection: "people preferentially direct cooperative behavior towards more attractive members of the opposite sex. Furthermore, cooperative behavior increases the perceived attractiveness of the cooperator" (Farrelly et al., 2007).

An interesting framework for understanding altruism is Hardy (no relation) and Van Vugt's (2006) theory of competitive altruism: "individuals attempt to outcompete each other in terms of generosity. It emerges because altruism enhances the status and reputation of the giver. Status, in turn, yields benefits that would be otherwise unattainable." We need, however, a more general perspective.


3. Prosocial behavior in human and non-human animals should be understood as a single phenomenon: cooperation in the economy of nature

All organic beings are striving to seize on each place in the economy of nature - (Darwin, [1859] 2003, p. 90)

With Darwin, natural economy began to be understood with the conceptual tools of political economy. The division of labor, competition (“struggle” in Darwin’s words), trading, cost, the accumulation of innovations, the emergence of complex order from unintentional individual actions, the scarcity of resources and the geometric growth of populations are ideas borrowed from Adam Smith, Thomas Malthus, David Hume and other founders of modern economics. Thus, the economy of nature ceased to be an abstract representation of the universe and became a depiction of the complex web of interactions between biological individuals, species and their environment—the subject matter of ecology. Consequently, Darwin’s main contributions are his transforming biology into a historical science—like geology—and into an economic science.

I take the economy-of-nature principle to be a refinement of the natural selection principle: while it describes general features of the biosphere, it puts emphasis on the intersection between individual biographies and natural selection, and especially on decision-making. On the one hand, the decisions biological individuals make increase or decrease their fitness, and thus good decision-makers are more likely to propagate their genes. On the other hand, natural selection is likely to favor good decision-makers and to get rid of bad decision-makers. Thus, if our best descriptive theories of animal and human economic behavior indicate that all these agents have prosocial preferences and make altruistic decisions, then these preferences and decisions are neither maladaptive nor irrational; they must have an evolutionary and an economic payoff. Markets and natural selection require cooperation, even if the deep motivations are partly selfish. Fairness, equity and honesty are social goods in the economy of nature, human and non-human.


  • Bateson, M., D. Nettle, and G. Roberts. 2006. Cues of being watched enhance cooperation in a real-world setting. Biology Letters 12:412-414.
  • Bering, J. M., K. McLeod, and T. K. Shackelford. 2005. Reasoning about Dead Agents Reveals Possible Adaptive Trends. Human Nature 16 (4):360-381.
  • Bolton, G. E., E. Katok, and R. Zwick. 1998. Dictator Game Giving: Rules of Fairness versus Acts of Kindness. International Journal of Game Theory 27:269-299.
  • Burnham, T. C., and D. D. P. Johnson. 2005. The Biological and Evolutionary Logic of Human Cooperation. Analyse & Kritik 27:113-135.
  • DeBruine, L. M. 2002. Facial resemblance enhances trust. Proc Biol Sci 269 (1498):1307-12.
  • de Waal FBM (2007) With a Little Help from a Friend. PLoS Biol 5(7): e190 doi:10.1371/journal.pbio.0050190
  • Farrelly, D., J. Lazarus, and G. Roberts. 2007. Altruists attract. Evolutionary Psychology 5 (2):313-329.
  • Fehr, E., and U. Fischbacher. 2002. Why social preferences matter: The impact of non-selfish motives on competition, cooperation and incentives. Economic Journal 112:C1-C33.
  • Fehr, Ernst, and Urs Fischbacher. 2003. The nature of human altruism. Nature 425 (6960):785-791.
  • Hamilton, W. D. 1964a. The genetical evolution of social behaviour. I. J Theor Biol 7 (1):1-16.
  • ———. 1964b. The genetical evolution of social behaviour. II. J Theor Biol 7 (1):17-52.
  • Hauser, Marc D. 2006. Moral minds : how nature designed our universal sense of right and wrong. New York: Ecco.
  • Ledyard, J. O. 1995. Public goods: A survey of experimental research. In Handbook of experimental economics, edited by J. H. Kagel and A. E. Roth: Princeton University Press.
  • Haley, K., and D. Fessler. 2005. Nobody’s watching? Subtle cues affect generosity in an anonymous economic game. Evolution and Human Behavior 26 (3):245-56.
  • Hoffman, E., K. McCabe, K. Shachat, and V. Smith. 1994. Preferences, Property Rights, and Anonymity in Bargaining Experiments. Games and Economic Behavior 7:346-380.
  • Kurzban, Robert, Peter DeScioli, and Erin O'Brien. 2007. Audience effects on moralistic punishment. Evolution and Human Behavior 28 (2):75-84.
  • Madsen, Elainie A., Richard J. Tunney, George Fieldman, Henry C. Plotkin, Robin I. M. Dunbar, Jean-Marie Richardson, and David McFarland. 2007. Kinship and altruism: A cross-cultural experimental study. British Journal of Psychology 98:339-359.
  • Penner, Louis A., John F. Dovidio, Jane A. Piliavin, and David A. Schroeder. 2005. Prosocial behavior: Multilevel Perspectives. Annual Review of Psychology 56 (1):365-392.
  • Okasha, Samir, "Biological Altruism", The Stanford Encyclopedia of Philosophy (Summer 2005 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/sum2005/entries/altruism-biological/.
  • Stewart-Williams, Steve. 2007. Altruism among kin vs. nonkin: effects of cost of help and reciprocal exchange. Evolution and Human Behavior 28 (3):193-198.
  • Nowak, M. A. 2006. Five Rules for the Evolution of Cooperation. Science 314 (5805):1560-1563.
  • Oosterbeek, H., R. Sloof, and G. van de Kuilen. 2004. Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics 7:171-188.
  • Rilling, J., D. Gutman, T. Zeh, G. Pagnoni, G. Berns, and C. Kilts. 2002. A neural basis for social cooperation. Neuron 35 (2):395-405.
  • Rutte C, Taborsky M (2007) Generalized Reciprocity in Rats. PLoS Biol 5(7): e196 doi:10.1371/journal.pbio.0050196
  • Sally, D. 1995. Conversations and cooperation in social dilemmas: a meta-analysis of experiments from 1958 to 1992. Rationality and Society 7:58 – 92
  • Sanfey, A. G., J. K. Rilling, J. A. Aronson, L. E. Nystrom, and J. D. Cohen. 2003. The neural basis of economic decision-making in the Ultimatum Game. Science 300 (5626):1755-8.
  • Shariff, A.F. , and A. Norenzayan. in press. God is watching you: Supernatural agent concepts increase prosocial behavior in an anonymous economic game. Psychological Science.
  • Singer, T., S. J. Kiebel, J. S. Winston, R. J. Dolan, and C. D. Frith. 2004. Brain responses to the acquired moral status of faces. Neuron 41 (4):653-62.
  • Sober, Elliott, and David Sloan Wilson. 1998. Unto others : the evolution and psychology of unselfish behavior. Cambridge, Mass.: Harvard University Press.
  • Stich, S. (forthcoming). Evolution, Altruism and Cognitive Architecture: A Critique of Sober and Wilson's Argument for Psychological Altruism, to appear in Biology and Philosophy.
  • Trivers, R. L. 1971. The Evolution of Reciprocal Altruism. Quarterly Review of Biology 46 (1):35.
  • Warneken F, Hare B, Melis AP, Hanus D, Tomasello M (2007) Spontaneous Altruism by Chimpanzees and Young Children. PLoS Biol 5(7): e184 doi:10.1371/journal.pbio.0050184
  • Zak, P. J., ed. forthcoming. Moral Markets: The Critical Role of Values in the Economy. Princeton, N.J.: Princeton University Press.



7/11/07

Decision-Making: A Neuroeconomic Perspective

I put a new paper on my homepage :

Decision-Making: A Neuroeconomic Perspective

Here is the abstract:

This article introduces and discusses from a philosophical point of view the nascent field of neuroeconomics, which is the study of neural mechanisms involved in decision-making and their economic significance. Following a survey of the ways in which decision-making is usually construed in philosophy, economics and psychology, I review many important findings in neuroeconomics to show that they suggest a revised picture of decision-making and ourselves as choosing agents. Finally, I outline a neuroeconomic account of irrationality.

Hardy-Vallée, B. (forthcoming). Decision-making: a neuroeconomic perspective. Philosophy Compass. [PDF]

This paper is the first in my philosophical exploration of neuroeconomics, and I would gladly welcome your comments and suggestions for subsequent research. Email me at benoithv@gmail.com.



7/10/07

Neuropsychopharmacology textbook: this is your brain on drugs

I discovered (thanks to Mind Hacks) that the American College of Neuropsychopharmacology has put online a textbook on, well, neuropsychopharmacology. An enormous source of information on how drugs affect the brain. Here is (from p. 120) a schema of dopaminergic systems:







4/18/07

Decision-Making in Philosophy, Economics and Psychology

(An overview of different conceptions of decision-making in philosophy, economics and psychology.)

Rational agents display their rationality mainly in making decisions. Certain decisions are basic (turn left or turn right), others are crucial issues ("to be or not to be"). In any case, being an agent entails making choices. Even abstention is a decision, as thinkers like William James and Jean-Paul Sartre once pointed out. In our ordinary use of the word, our folk psychology inclines us to believe that making a decision implies a deliberation: a weighing of beliefs, desires and intentions (Malle et al., 2001). In philosophy of mind, the standard conception of decision-making equates deciding with forming an intention before an action (Davidson, 1980, 2004; Hall, 1978; Searle, 2001). According to different analyses, this intention can be equivalent to, inferred from, or accompanied by desires and beliefs. Thus, the decisions rational agents make are motivated by reasons. Rational actions are explained by these reasons, the purported causes of the actions. Beliefs and desires are also constitutive of rationality because they justify rational action: there is a logical coherence between beliefs, desires and actions. Actions are irrational when their causes do not justify them. Beliefs and desires are embedded in our interpretations of rational agents as rational agents: "[a]nyone who superimposes the longitudes of desire and the latitudes of belief is already attributing rationality" (Sorensen, 2004, p. 291). Hence, on this account, X is a rational agent if X can be interpreted as an agent whose actions are justified by the beliefs and desires that caused her to make a particular choice. The attribution of rational agency is then based on the success of applying an interpretation scheme that presupposes the rationality of the agent, such as the Dennettian "intentional stance," the Davidsonian "principle of charity," or the Popperian "principle of rationality" (Davidson, 1980; Dennett, 1987; Popper, 1994).
The abstract structure of this interpretation scheme has been formalized by theoretical economics and rational-choice theory. Economics, according to a standard definition by Lionel Robbins, is the "science which studies human behavior as a relationship between ends and scarce means which have alternative uses" (Robbins, 1932, p. 15). This definition shows the centrality of decision-making in economic science: since means are scarce, behavior should use them efficiently. The two branches of rational-choice theory, decision theory and game theory, specify the formal constraints on optimal decision-making in individual and interactive contexts. An individual agent facing a choice between two actions can make a rational decision if she takes into account two parameters: the probability and the utility of the consequences of each action. By multiplying the subjective probability of an action's outcomes by their subjective utility, she can select the action that has the highest subjective expected utility (see Baron, 2000, for an introduction). Game theory models agents making decisions in a strategic context, where the preferences of at least one other agent must be taken into account. Decision-making is represented as the selection of a strategy in a game, that is, a set of rules that dictates the range of possible actions and the payoffs of any combination of actions. Thus, economic decision-making is mainly about computing probabilities and utilities (Weirich, 2004). The philosopher's belief-desire model is hence reflected in the economist's probability-utility model: probabilities represent beliefs while utilities represent desires.
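The probability-utility computation just described can be sketched in a few lines; a minimal illustration of expected-utility maximization (the action names and payoffs here are hypothetical):

```python
def expected_utility(outcomes):
    """Subjective expected utility of an action: the sum, over its
    possible outcomes, of subjective probability times subjective utility."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Select the action with the highest expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# A toy choice between a sure $40 and a 50/50 gamble on $100:
actions = {
    "sure_thing": [(1.0, 40)],             # $40 for certain
    "gamble":     [(0.5, 100), (0.5, 0)],  # half a chance at $100
}
print(best_action(actions))  # "gamble" (expected utility 50 beats 40)
```

This is exactly the normative benchmark against which Tversky and Kahneman's subjects are measured below.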
Rational-choice theory can be construed as a normative theory (what agents should do) or as a descriptive one (what agents actually do). On its descriptive construal, rational-choice theory is a framework for building predictive models of choice behavior: which lottery an agent will select, whether an agent will cooperate in a prisoner’s dilemma, and so on. Experimental economics, behavioral economics, cognitive science and psychology (I will refer to these empirical approaches to rationality collectively as ‘psychology’) use this model to study how subjects make decisions and which mechanisms they rely on for choosing. The resulting patterns of inference and behavior can then be compared with rational-choice theory. In numerous studies, Amos Tversky and Daniel Kahneman showed that decision-makers’ judgments deviate markedly from normative theories (Kahneman, 2003; Kahneman et al., 1982; Tversky, 1975). Subjects tend to make decisions according to their ‘framing’ of a situation (the way they represent it, e.g. as a gain or as a loss), and exhibit loss-, risk- and ambiguity-aversion (Camerer, 2000; Kahneman & Tversky, 1979, 1991, 2000; Thaler, 1980). In most of their experiments, Tversky and Kahneman asked subjects to choose among different options in fictitious situations in order to assess the similarity between natural ways of thinking and normative decision theory. For instance, subjects were presented with the following situation (Tversky & Kahneman, 1981):

Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:
- If Program A is adopted, 200 people will be saved
- If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
Which of the two programs would you favor?

Most of the respondents opted for Program A, the risk-averse option. Preferences reversed, however, when respondents were offered the following version:
- If Program A is adopted, 400 people will die
- If Program B is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die

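A quick calculation confirms that the two framings describe numerically identical prospects; all the figures below come straight from the scenario itself.

```python
# The two framings of the Asian disease problem are extensionally identical:
# only the description (lives saved vs. lives lost) changes.

# Gain frame (lives saved out of 600)
saved_A = 200
saved_B = (1/3) * 600 + (2/3) * 0    # expected lives saved: 200

# Loss frame (lives lost out of 600)
dead_A = 400
dead_B = (1/3) * 0 + (2/3) * 600     # expected deaths: 400
```

In expectation, Program A and Program B are equivalent within each frame, so the preference reversal cannot be explained by the options themselves.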
Although Program A has exactly the same outcome in both versions (400 people die, 200 are saved), in the second version Program B is the most popular. Thus, not only are subjects risk-averse, but their risk attitude depends on the framing of the situation: subjects respond differently depending on whether the situation is presented as a gain or as a loss. The study of decision-making is thus the study of the heuristics and biases that impinge upon human judgment. The explanatory target is the discrepancy between rational-choice theory and human psychology. Just as the psychology of perception tries to explain visual illusions (e.g. the Müller-Lyer illusion), the psychology of decision tries to explain cognitive illusions: why agents systematically prefer one kind of prospect when rational-choice theory recommends another. Loss aversion, for instance, can be explained by the shape of the value function: it is concave for gains, convex for losses, and steeper for losses than for gains. Thus losing $100 hurts more than winning $100 pleases.
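That asymmetry can be made concrete with a value function of the qualitative shape just described. The exponent and loss-aversion coefficient below are the estimates from Tversky and Kahneman's later (1992) cumulative prospect theory paper, used here purely as an illustrative sketch, not as the definitive parameterization.

```python
# A prospect-theory-style value function: concave for gains, convex and
# steeper for losses. Parameters (alpha, lam) are illustrative estimates.
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha            # concave over gains
    return -lam * ((-x) ** alpha)    # convex, and steeper, over losses

# Losing $100 hurts more than winning $100 pleases:
loss_pain = abs(value(-100))   # ~129.5
gain_joy  = value(100)         # ~57.5
```

Because `lam > 1`, the loss branch is steeper than the gain branch, which is exactly what loss aversion amounts to in this framework.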
Proponents of the ecological rationality approach have nonetheless suggested that these heuristics and biases might be adaptive in certain contexts, and that failures of human rationality can be lessened under the proper ecological conditions. For instance, when probabilities are presented as frequencies (6 out of 10) instead of percentages (60%), performance tends to improve markedly, partly because we encounter sequences of events far more often than degrees of belief. These heuristics might be ‘fast and frugal’ procedures tailored for certain tasks, leading to suboptimal outcomes only when applied outside them (Gigerenzer, 1991; Gigerenzer et al., 1999). Or they could be vestigial adaptations to the ecological and social environments in which our hunter-gatherer ancestors lived. Thus heuristics may not be completely ineffective after all.
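To see why format matters, compare the same Bayesian inference in the two formats. The numbers below are Gigerenzer's well-known mammography illustration (1% base rate, 80% hit rate, 9.6% false-positive rate), not figures from the text above; both routes give the same answer, but the frequency route mirrors the way evidence is actually encountered.

```python
# One Bayesian inference, two formats (illustrative numbers).
base_rate, hit_rate, false_alarm = 0.01, 0.80, 0.096

# Probability format: Bayes' theorem.
p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm
p_disease_given_positive = base_rate * hit_rate / p_positive

# Frequency format: imagine 1000 patients.
sick      = 1000 * base_rate                 # 10 patients have the disease
true_pos  = sick * hit_rate                  # 8 of them test positive
false_pos = (1000 - sick) * false_alarm      # ~95 healthy patients also do
freq_answer = true_pos / (true_pos + false_pos)
```

Both computations yield roughly 7.8%, but "8 out of about 103 positives are really sick" is far easier to grasp than the corresponding application of Bayes' theorem.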


References

Baron, J. (2000). Thinking and deciding (3rd ed.). Cambridge: Cambridge University Press.
Camerer, C. (2000). Prospect theory in the wild. In D. Kahneman & A. Tversky (Eds.), Choice, values, and frames (pp. 288-300). New York: Cambridge University Press.
Davidson, D. (1980). Essays on actions and events. Oxford: Oxford University Press.
Davidson, D. (2004). Problems of rationality. Oxford: Oxford University Press.
Dennett, D. C. (1987). The intentional stance. Cambridge, Mass.: MIT Press.
Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond heuristics and biases. European Review of Social Psychology, 2(1), 83-115.
Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999). Simple heuristics that make us smart. New York: Oxford University Press.
Hall, J. W. (1978). Deciding as a way of intending. The Journal of Philosophy, 75(10), 553-564.
Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. Am Psychol, 58(9), 697-720.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291.
Kahneman, D., & Tversky, A. (1991). Loss aversion in riskless choice: A reference-dependent model. The Quarterly Journal of Economics, 106(4), 1039-1061.
Kahneman, D., & Tversky, A. (2000). Choices, values, and frames. Cambridge, UK: Cambridge University Press.
Malle, B. F., Moses, L. J., & Baldwin, D. A. (2001). Intentions and intentionality: Foundations of social cognition. Cambridge, Mass.: MIT Press.
Popper, K. R. (1994). Models, instruments, and truth: The status of the rationality principle in the social sciences. In The myth of the framework. In defence of science and rationality (pp. 154-184). London: Routledge.
Robbins, L. (1932). An essay on the nature and significance of economic science. London: Macmillan.
Searle, J. (2001). Rationality in action. Cambridge, Mass.: MIT Press.
Sorensen, R. (2004). Charity implies meta-charity. Philosophy and Phenomenological Research, 26, 290-315.
Thaler, R. H. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior & Organization, 1(1), 39-60.
Tversky, A. (1975). A critique of expected utility theory: Descriptive and normative considerations. Erkenntnis, 9(2), 163-173.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453-458.
Weirich, P. (2004). Economic rationality. In A. R. Mele & P. Rawling (Eds.), The Oxford handbook of rationality (pp. 380-398). Oxford: Oxford University Press.