conventional wisdom has long modeled our internal cognitive processes, quite wrongly, as just an inner version of the public arguments and justifications that we learn, as children, to construct and evaluate in the social space of the dinner table and the marketplace. Those social activities are of vital importance to our collective commerce, both social and intellectual, but they are an evolutionary novelty, unreflected in the brain’s basic modes of decision-making
(Churchland, 2006, p. 31).
The folk-psychological model of rationality construes rational decision-making as the product of practical reasoning by which an agent infers, from her beliefs and desires, the right action to perform. True, when we are asked to explain or predict actions, our intuitions lead us to describe them as the product of intentional states. In a series of studies, Malle and Knobe (Malle & Knobe, 1997, 2001) showed that folk psychology is a language game in which beliefs, desires, and intentions are the main players. But using the intentional idiom does not mean that it picks out the real causes of action. This is where realist, instrumentalist, and eliminativist accounts conflict. A realist account of beliefs and desires takes them to be real causal entities; an instrumentalist account treats them as useful fictions; an eliminativist account holds that they are embedded in a faulty theory of mental functioning that should be eliminated (see Churchland & Churchland, 1998; Dennett, 1987; Fodor, 1981). Can neuroeconomics shed light on this traditional debate in philosophy and cognitive science?
Neuroeconomics, I suggest, supports an eliminativist approach to cognition. Just as contemporary chemistry does not explain combustion by the release of phlogiston (a substance once supposed to exist in combustible bodies), cognitive science should stop explaining actions as the product of beliefs and desires. Behavioral regularities and neural mechanisms are sufficient to explain decision. When subjects evaluate whether or not they would buy a product, and whether or not the price seems justified, how informative is it to cite propositional attitudes as causes? The real entities involved in decision-making are the neural mechanisms underlying hedonic feelings, cognitive control, emotional modulation, conflict monitoring, planning, and so on. Preferences, utility functions, or practical reasoning can explain purchasing, but they do not posit entities that can enter the “causal nexus” (Salmon, 1984). Neuroeconomics explains purchasing behavior not as an inference from beliefs and desires to action, but as a tradeoff, mediated by prefrontal areas, between the pleasure of acquiring (elicited in the nucleus accumbens) and the pain of purchasing (elicited in the insula): prefrontal activation predicted purchasing, while insular activation predicted the decision not to purchase (Knutson et al., 2007). Hence the explanation of purchasing cites causes (brain areas) that explain the behavior as the product of stronger prefrontal activation, and that justify the decision to purchase: the agents had a stronger incentive to buy. A fully mechanistic account would, of course, detail the algorithmic process performed by each area.
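The tradeoff just described can be sketched schematically. The following is a toy illustration only, not the model fitted by Knutson et al.: it assumes a simple linear comparison, and the function names, weights, and threshold are all hypothetical.

```python
# Toy sketch (assumption, not the published model): purchase decision as a
# tradeoff between an appetitive signal ("pleasure of acquiring", associated
# with the nucleus accumbens) and an aversive signal ("pain of purchasing",
# associated with the insula), integrated by a prefrontal comparator.

def purchase_decision(accumbens_signal: float, insula_signal: float,
                      threshold: float = 0.0) -> bool:
    """Return True (buy) when the integrated value exceeds the threshold."""
    integrated_value = accumbens_signal - insula_signal  # prefrontal tradeoff
    return integrated_value > threshold

# A strong anticipated reward with little price pain predicts purchasing:
assert purchase_decision(0.8, 0.3) is True
# A salient price pain predicts declining:
assert purchase_decision(0.4, 0.9) is False
```

The point of the sketch is that nothing in it mentions beliefs or desires: the decision is explained by the relative strength of two signals and a comparator, which is the form a mechanistic explanation takes.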
The belief-desire framework implicitly supposes that the causes of an action are those that an agent would verbally express when asked to justify her action. But on what grounds can this assumption be justified?
Psychological and neural studies suggest, rather, a dissociation between the mechanisms that lead to actions and the mechanisms by which we explain them. Since Nisbett and Wilson's (1977) seminal studies, research in psychology has shown that the very act of explaining the intentional causes of our actions is a reconstructive process that might be faulty. Subjects give numerous reasons why they prefer one pair of socks (or other objects) to another, but they all prefer the last one on the right. The real explanation of their preferences is a position effect, or right-hand bias. For some reason, subjects pick the right-hand pair and, post hoc, generate an explanation for this preference, a phenomenon widely observed. For instance, when subjects tasted samples of Pepsi and Coke with and without the brand label, they reported different preferences (McClure et al., 2004). Without labels, subjects evaluated both drinks similarly; when the drinks were labeled, they reported a stronger preference for Coke, and neuroimaging results mirrored this branding effect. Sensory information (taste) and cultural information (brand) are associated with different areas that interact so as to bias preferences. Without the label, the evaluation of the drink relies solely on sensory information. Subjects may motivate their preference for one beverage over another with many diverse arguments, but what really shapes their preference is the brand label. The conscious narratives we produce when rationalizing our actions are not “direct pipeline[s] to nonconscious mental processes” (Wilson & Dunn, 2004, p. 507) but approximate reconstructions. When our thoughts occur before the action, are consistent with it, and appear as its only cause, we infer that these thoughts are the causes of the action, and rule out other internal or external causes (Wegner, 2002).
But the fact that we rely on the belief-desire framework to explain our own and others' actions as the product of intentional states does not constitute an argument for considering these states to be satisfying causal explanations of action.
The belief-desire framework might be a useful conceptual scheme for fast and frugal explanations, but that does not make folk-psychological constructs suitable for scientific explanation. In the same vein, if folk biology were the sole foundation of biology, whales would still be categorized as fish. The nature of the biological world is not explained by our (faulty and biased) folk biology, but by making explicit the mechanisms of natural selection, reproduction, cellular growth, and so on. There is no reason to believe that our folk psychology is a better description of mental mechanisms. Beliefs, desires, and intentions are folk-psychological constructs that have no counterpart in neuroscience. Motor control and action planning, for instance, are explained by different kinds of representations, such as forward and inverse models, not by propositional attitudes (Kawato & Wolpert, 1998; Wolpert & Kawato, 1998). Consequently, the fact that we rely on folk psychology to explain actions does not constitute an argument for considering that this naïve theory provides reliable explanations of them. Saying that the sun rises every morning yields good predictions, and it could even explain why there is more heat and light at noon, but the effectiveness of the sun-rising framework does not justify its use as a scientific theory.
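The forward/inverse-model style of explanation can be illustrated with a deliberately minimal sketch. This is an assumption-laden toy, not Kawato and Wolpert's actual architecture: the one-dimensional linear plant, the `GAIN` parameter, and the function names are all hypothetical.

```python
# Illustrative sketch (hypothetical names and dynamics): a forward model
# predicts the sensory consequence of a motor command; an inverse model
# computes the command expected to reach a goal state. The "plant" here
# is an assumed linear system: next_state = state + GAIN * command.

GAIN = 2.0  # assumed, arbitrary plant gain

def forward_model(state: float, command: float) -> float:
    """Predict the next state resulting from a motor command."""
    return state + GAIN * command

def inverse_model(state: float, goal: float) -> float:
    """Compute the motor command expected to bring the state to the goal."""
    return (goal - state) / GAIN

# The inverse model selects a command; the forward model predicts its
# outcome. Under the assumed dynamics, the prediction matches the goal.
command = inverse_model(state=0.0, goal=1.0)
predicted = forward_model(state=0.0, command=command)
```

Note that the explanatory vocabulary here is states, commands, and prediction errors, not beliefs and desires, which is precisely the contrast the paragraph above draws.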
As many philosophers of science have suggested, a genuine explanation is mechanistic: it consists in breaking a system into parts and processes, and explaining how these parts and processes cause the system to behave the way it does (Bechtel & Abrahamsen, 2005; Craver, 2001; Machamer et al., 2000). Folk psychology may save the phenomena, but it still does not propose causal parts and processes. More generally, the problem with the belief-desire framework is that it is a description of our attitude toward the things we call "agents", not a description of what constitutes the true nature of agents. It thus conflates the map and the territory. Moreover, conceptual advances are made when objects are described and classified according to their objective properties. A chemical theory that classified elements according to their propensity to quench thirst would be nonsense (although it could be useful in other contexts). At best, the belief-desire framework could be considered an Everyday Handbook of Intentional Language.
- Bechtel, W., & Abrahamsen, A. (2005). Explanation: A Mechanist Alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421-441.
- Churchland, P. M. (2006). Into the Brain: Where Philosophy Should Go from Here. Topoi, 25(1), 29-32.
- Churchland, P. M., & Churchland, P. S. (1998). On the Contrary: Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
- Craver, C. F. (2001). Role Functions, Mechanisms, and Hierarchy. Philosophy of Science, 68, 53-74.
- Dennett, D. C. (1987). The Intentional Stance. Cambridge, Mass.: MIT Press.
- Fodor, J. A. (1981). Representations: Philosophical Essays on the Foundations of Cognitive Science (1st MIT Press ed.). Cambridge, Mass.: MIT Press.
- Kawato, M., & Wolpert, D. M. (1998). Internal Models for Motor Control. Novartis Foundation Symposium, 218, 291-304; discussion 304-307.
- Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156.
- Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking About Mechanisms. Philosophy of Science, 67, 1-24.
- Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
- Malle, B. F., & Knobe, J. (2001). The Distinction between Desire and Intention: A Folk-Conceptual Analysis. In B. F. Malle, L. J. Moses, & D. A. Baldwin (Eds.), Intentions and Intentionality: Foundations of Social Cognition (pp. 45-67). Cambridge, MA: MIT Press.
- McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural Correlates of Behavioral Preference for Culturally Familiar Drinks. Neuron, 44(2), 379-387.
- Nisbett, R. E., & Wilson, T. D. (1977). Telling More Than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84, 231-259.
- Salmon, W. C. (1984). Scientific Explanation and the Causal Structure of the World. Princeton, N.J.: Princeton University Press.
- Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, Mass.: MIT Press.
- Wilson, T. D., & Dunn, E. W. (2004). Self-Knowledge: Its Limits, Value, and Potential for Improvement. Annual Review of Psychology, 55(1), 493-518.
- Wolpert, D. M., & Kawato, M. (1998). Multiple Paired Forward and Inverse Models for Motor Control. Neural Networks, 11(7-8), 1317-1329.