Natural Rationality | decision-making in the economy of nature

1/14/08

On The Folk Concept of Robotic Pain

Bryce Huebner reports, on the X-Phi Blog, the results of an experiment in which he asked volunteers about someone (Dave) who was either a (normal) human, a human with a CPU in his head, a normal robot, or a robot with a brain instead of a CPU. Can Dave feel pain? Can Dave be happy? Interestingly, he finds, among other things, that

(...) volunteers thought that it was pretty unlikely that something with a hard metallic body could count as a locus of pain. But, they were not particularly bothered by the ascription of pain to a human with a CPU instead of a brain.
(...) there was a significant difference in volunteers' ascriptions of belief and happiness when Dave had a human body and a CPU instead of a brain;


Read his post, his description of the experiments, or the paper he wrote about it.



12/5/07

The Present and Future of Artificial Rationality

A little digression from topics usually discussed here, but I thought it might be interesting.

First, what can robots do today? Quite a lot of things. The winner of a recent contest sponsored by iRobot can:

- retrieve water
- water plants
- fill a dog water bowl
- control a VCR and TV
- turn lights and other appliances on/off
- play music
- dance and entertain
- provide mobile video security for the home
- remind the elderly to take medicine

the perfect roommate!


Look at how credible humanoid robots are becoming. Here are Sony's dancing robots:
(YouTube video)



an "actroid" robot: (youtube source)



Did you feel weird watching that? This is what roboticists call the uncanny valley:

as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong repulsion. However, as the appearance and motion continue to become less distinguishable from a human being, the emotional response becomes positive once more and approaches human-to-human empathy levels.


It may have to do with our folk psychology. Gray et al. (2007) conducted a series of experiments aimed at understanding our intuitive conception of "mindedness" (see this post). Subjects answered a web survey making pairwise comparisons, on a five-point scale, of different human and non-human characters on many mental capacities. For instance, they had to decide who, between a baby and a chimpanzee, could feel more pain. The experimenters then identified the factors able to account for the results. The statistical analysis (a principal component factor analysis) revealed that the similarity space of the "agent" concept is made of an Agency dimension (self-control, planning, communication, etc.) and an Experience dimension (pain, pleasure, consciousness, etc.). Babies, dogs and chimpanzees score high on Experience but low on Agency; adult humans score high on both dimensions, while God scores high on Agency only. Robots have a middling score on Agency (between dead people and God) and a null score on Experience. So the uncanny feeling might have something to do with the fact that we usually feel revulsion for dead bodies; a humanoid robot might trigger the same mechanisms.
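
For the curious, here is a minimal sketch of the kind of analysis involved: a two-component principal component analysis over a characters-by-capacities ratings matrix. The characters, items and numbers below are invented for illustration; this is not Gray et al.'s data or code.

import numpy as np
from sklearn.decomposition import PCA

characters = ["baby", "chimpanzee", "adult human", "dead person", "God", "robot"]

# ratings[i, j]: mean rating of character i on capacity j; columns are
# three Experience-type items (pain, pleasure, consciousness) followed by
# three Agency-type items (self-control, planning, communication).
ratings = np.array([
    [0.9, 0.9, 0.8, 0.1, 0.1, 0.2],  # baby: high Experience, low Agency
    [0.8, 0.8, 0.7, 0.3, 0.3, 0.3],  # chimpanzee
    [0.9, 0.9, 0.9, 0.9, 0.9, 0.9],  # adult human: high on both
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # dead person: low on both
    [0.1, 0.2, 0.8, 1.0, 1.0, 1.0],  # God: Agency only
    [0.0, 0.0, 0.1, 0.5, 0.6, 0.6],  # robot: middling Agency, null Experience
])

pca = PCA(n_components=2)
scores = pca.fit_transform(ratings)
for name, (d1, d2) in zip(characters, scores):
    print(f"{name:12s} component1={d1:+.2f} component2={d2:+.2f}")

With data shaped like this, the two recovered components line up with an Experience-like and an Agency-like dimension, which is the structure Gray et al. report.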



Honda's Asimo (less uncanny) is also getting smarter: see how (he? she? it?) navigates its environment and interacts with humans.

(YouTube source)



What about the future?

According to David Levy, in his new book Love and Sex with Robots: The Evolution of Human-Robot Relationships, people will one day have sex with robots (remember Jude Law in Artificial Intelligence?).

"There will be different personalities and different likes and dislikes," he said. "When you buy your robot, you'll be able to select what kind of personality it will have. It'll be like ordering something on the Internet. What kind of emotional makeup will it have? How it should look. The size and hair color. The sound of its voice. Whether it's funny, emotional, conservative.

(source)


When? In about 30 years. And that is where I start laughing (loudly). Since the beginning of AI, researchers have always promised that within the next 30 or 50 years, machines will think. See what Turing said in 1950:

I believe that in about fifty years' time it will be possible to programme computers (...) to make them play the imitation game [the Turing Test] so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning


Find an old Scientific American or any other science magazine, and you will see that computer scientists routinely make the same prediction. Always between 30 and 50 years: enough time to be forgotten if you're wrong, but just enough to still be around and say "I told you so!" if you're right.

Not that I don't believe that machines can be conscious or intelligent. I do. We are all conscious evolved machines, so a priori, there is no reason why AI (or, more probably, A-Life) could not succeed. People should just remember that futurology is not an exact science...

One thing, however, that I am ready to put my money on is the future of robotic warfare. Trust the military on that: if robots can be an advantage in war, they will be developed. A recent article in the Armed Forces Journal envisioned futuristic scenarios of robotic warfare (not as futurology, however, but for the sake of ethics). I liked the conclusion, though:

There are no new ethical dilemmas created by robots being used today. Every system is either under continuous, direct control by humans, or performing limited semiautonomous actions that are dictated by humans. These robots represent nothing more than highly sophisticated weapons, and the user retains full responsibility for their implementation.

The potential for a new ethical dilemma to emerge comes from the approaching capability to create completely autonomous robots. As we advance across the field of possibilities from advanced weapons to semiautonomous weapons to completely autonomous weapons, we need to understand the ethical implications involved in building robots that can make independent decisions. We must develop a distinction between weapons that augment our soldiers and those that can become soldiers. Determining where to place responsibility can begin only with a clear definition of who is making the decisions.

It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework. Without the moral framework, its creator and operator will always be the focus of responsibility for the robot’s actions. With or without a moral framework, a fully autonomous decision-maker will be responsible for its actions. For it to make the best moral decisions, it must be equipped with guilt parameters that guide its decision-making cycle while inhibiting its ability to make wrong decisions. Robots must represent our best philosophy or remain in the category of our greatest tools.










11/13/07

Decision-Making in Robotics and Psychology: A Distributed Account

Forthcoming in a special issue of New Ideas in Psychology on Cognitive Robotics & Theoretical Psychology, edited by Tom Ziemke & Mark Bickhard:

Hardy-Vallée, B. (in press). Decision-Making in Robotics and Psychology: A Distributed Account. New Ideas in Psychology

Decision-making is usually a secondary topic in psychology, relegated to the last chapters of textbooks. The psychological study of decision-making assumes a certain conception of its nature and mechanisms that has been shown wrong by research in robotics. Robotics indicates that decision-making is not—or at least not only—an intellectual task, but also a process of dynamic behavioral control, mediated by embodied and situated sensorimotor interaction. The implications of this conception for psychology are discussed.
[PDF]



10/4/07

A distributed conception of decision-making

In a previous post, I suggested that there is something wrong with the standard (“cogitative”) conception of decision-making in psychology. In this post, I would like to outline an alternative conception, what we might call the “distributed conception”.

A close look at robotics suggests that decision-making should not be construed as a deliberative process. Deliberative control (Mataric, 1997), or sense-model-plan-act (SMPA), architectures have been unsuccessful in controlling autonomous robots (Brooks, 1999; Pfeifer & Scheier, 1999). In these architectures (e.g. Nilsson, 1984), "what to do?" was represented as a logical problem. Sensors or cameras represented the perceptible environment, while internal processors converted sensory inputs into first-order predicate calculus. From this explicit model of its environment, the robot's central planner transformed a symbolic description of the world into a sequence of actions (see Hu & Brady, 1996, for a survey). Decision-making was handled by an expert system or a similar device. Thus the flow of information was one-way only: sensors → model → planner → effectors.
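
To make the architecture concrete, here is a toy sketch of one SMPA cycle. Every function below is an invented placeholder for what were, historically, large symbolic subsystems:

def sense():
    # Pretend the sensors deliver symbolic facts about the world.
    return {("at", "robot", "room1"), ("at", "block", "room2")}

def plan(model, goal):
    # Stand-in for a central symbolic planner: if the goal does not hold
    # in the model, return a canned action sequence that achieves it.
    if goal in model:
        return []
    return ["go(room2)", "grasp(block)", "go(room1)", "release(block)"]

def smpa_step(goal=("at", "block", "room1")):
    model = sense()              # sensors -> model
    actions = plan(model, goal)  # model -> planner
    for action in actions:       # planner -> effectors; note the open loop:
        print("executing", action)  # nothing is re-sensed during execution

smpa_step()

Contrast this one-way pipeline with the behavior-based architectures discussed below, where sensing and acting are continuously interleaved.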

SMPA architectures could be effective, but only in environments carefully designed for the robot. The colors, lighting and disposition of objects were optimally configured to simplify perception and movement. Brooks describes how the rooms where autonomous robots evolved were configured:

The walls were of a uniform color and carefully lighted, with dark rubber baseboards, making clear boundaries with the lighter colored floor. (…) The blocks and wedges were painted different colors on different planar surfaces. (….) Blocks and wedges were relatively rare in the environment, eliminating problems due to partial obscurations (Brooks, 1999, p. 62)

Thus the cogitative conception of decision-making, and its SMPA implementations, had to be abandoned. If it did not work for mobile robots, it is reasonable to argue that the cogitative conception has to be abandoned for cognitive agents in general. Agents do not make decisions simply by central planning and the manipulation of explicit models, but by coordinating multiple sensorimotor mechanisms. In order to design robots able to imitate people, for instance, roboticists build systems that control their behavior through multiple partial models. Mataric's (2002) robots learn to imitate by coordinating the following modules (a toy sketch follows the list):

  1. a selective attention mechanism that extracts salient visual information (another agent's face, for instance)
  2. a sensorimotor mapping system that transforms visual input into motor programs
  3. a repertoire of motor primitives
  4. a classification-based learning mechanism that learns from visuo-motor mappings
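
Here is that toy sketch of the coordination; the module internals are invented placeholders, not Mataric's implementation:

import random

MOTOR_PRIMITIVES = ["reach", "grasp", "wave"]  # 3. repertoire of primitives

def attend(scene):
    # 1. selective attention: keep only the most salient region
    return max(scene, key=lambda region: region["salience"])

def map_to_motor(region, mappings):
    # 2. sensorimotor mapping: visual feature -> motor primitive
    return mappings.get(region["feature"], random.choice(MOTOR_PRIMITIVES))

def learn(mappings, feature, primitive):
    # 4. classification-based learning over visuo-motor pairings
    mappings[feature] = primitive

mappings = {}
scene = [{"feature": "face", "salience": 0.9},
         {"feature": "wall", "salience": 0.1}]
region = attend(scene)
primitive = map_to_motor(region, mappings)
learn(mappings, region["feature"], primitive)
print("attended:", region["feature"], "->", primitive)

The point of the sketch is that no single module "decides": imitation emerges from the coordination of attention, mapping, primitives and learning.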

Neuroeconomics also suggests another, similar, avenue: there is no brain area, circuit or mechanism specialized in decision-making, but rather a collection of neural modules. Certain areas specialize in visual-saccadic decision-making (Platt & Glimcher, 1999). Social neuroeconomics indicates that decisions in experimental games are mainly affective computations: choice behavior in these games is reliably correlated with neural activations of social emotions such as the 'warm glow' of cooperation (Rilling et al., 2002), the 'sweet taste' of revenge (de Quervain et al., 2004) or the 'moral disgust' of unfairness (Sanfey et al., 2003). Subjects without affective experiences or affective anticipations are unable to make rational decisions, as Damasio and his colleagues discovered. Damasio found that subjects with lesions in the ventromedial prefrontal cortex (vmPFC, a brain area above the eye sockets) had huge problems coping with everyday tasks (Damasio, 1994). They were unable to plan meetings; they lost their money, family or social status. They were, however, completely functional in reasoning or problem-solving tasks. Moreover, Damasio and his collaborators found that these subjects had lower affective reactions. They did not feel sad about their situation, even if they perfectly understood what "sad" means, and seemed unable to learn from bad experiences. The researchers concluded that these subjects were unable to use emotions to aid decision-making, a hypothesis that also implies that in normal subjects, emotions do aid decision-making.

Consequently, the "Distributed Conception of Decision-Making" suggests that decision-making is:

Sensorimotor: the mechanisms of decision-making are not only, and not necessarily, intellectual, high-level and explicit. Decision-making is the whole organism's sensorimotor control.
Situated: a decision is not only a step-by-step internal computation, but also a continuous and dynamic adjustment between the agent and its environment that develops over the whole lifespan. Decision-making is always physically and (most of the time) socially situated: ecological situatedness is both a constraint on decision-making and a set of informational resources that helps agents cope with it.
Psychology should do more than document our inability to follow Bayesian reasoning in paper-and-pencil experiments; it should study our situated sensorimotor control capacities. Decision-making should not be a secondary topic for psychology but, following Gintis, "the central organizing principle of psychology" (Gintis, 2007, p. 1). Decision-making is more than an activity we consciously engage in occasionally: it is rather the very condition of existence (as Herrnstein said, "all behaviour is choice" (Herrnstein, 1961)).

Therefore, deciding should not be studied as a separate topic (e.g. perception), an occasional activity (e.g. chess-playing) or a high-level competence (e.g. logical inference), but in the way robotic control is studied. A complete, explicit model of the environment, manipulated by a central planner, is not useful for robots. New Robotics (Brooks, 1999) revealed that effective and efficient decision-making is achieved through multiple partial models updated in real time. There is no need to integrate these models into a unified representation or a common code: distributed architectures, where many processes run in parallel, achieve better results. As Barsalou et al. (2007) argue, cognition is coordinated non-cognition; similarly, decision-making is coordinated non-decision-making.
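
A minimal sketch of this style of control, assuming a subsumption-like priority ordering (the behaviors and state variables are invented for illustration):

def avoid_obstacle(state):
    return "turn_away" if state["obstacle_near"] else None

def recharge(state):
    return "seek_charger" if state["battery"] < 0.2 else None

def wander(state):
    return "move_forward"  # default behavior: always has a proposal

# Each behavior consults only its own slice of the sensory state; a fixed
# priority ordering arbitrates between them. No global world model is built.
BEHAVIORS = [avoid_obstacle, recharge, wander]

def decide(state):
    for behavior in BEHAVIORS:
        action = behavior(state)
        if action is not None:
            return action

print(decide({"obstacle_near": True, "battery": 0.8}))   # turn_away
print(decide({"obstacle_near": False, "battery": 0.1}))  # seek_charger
print(decide({"obstacle_near": False, "battery": 0.9}))  # move_forward

Notice that "the decision" is nowhere in particular: it falls out of the arbitration between partial, parallel processes, which is exactly the sense in which decision-making is coordinated non-decision-making.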

If decision-making is the central organizing principle of psychology, all the branches of psychology could be understood as research fields that investigate different aspects of decision-making. Abnormal psychology explains how deficient mechanisms impair decision-making. Behavioral psychology focuses on choice behavior and behavioral regularities. Cognitive psychology describes the mechanisms of valuation, goal representation and preferences, and how they contribute to decision-making. Comparative psychology analyzes the variations in neural, behavioral and cognitive processes among different clades. Developmental psychology establishes the evolution of decision-making mechanisms over the lifespan. Neuropsychology identifies the neural substrates of these mechanisms. Personality psychology explains interindividual variations in decision-making, our various decision-making "profiles". Social psychology can shed light on social decision-making, that is, either collective decision-making (when groups or institutions make decisions) or individual decision-making in social contexts. Finally, we could also add environmental psychology (how agents use their environment to simplify their decisions) and evolutionary psychology (how decision-making mechanisms are, or are not, adaptations).

References
  • Barsalou, Breazeal, & Smith. (2007). Cognition as coordinated non-cognition. Cognitive Processing, 8(2), 79-91.
  • Brooks, R. A. (1999). Cambrian Intelligence: The Early History of the New AI. Cambridge, Mass.: MIT Press.
  • Churchland, P. M., & Churchland, P. S. (1998). On the Contrary: Critical Essays, 1987-1997. Cambridge, Mass.: MIT Press.
  • Damasio, A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Putnam.
  • de Quervain, D. J., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., & Fehr, E. (2004). The Neural Basis of Altruistic Punishment. Science, 305(5688), 1254-1258.
  • Gintis, H. (2007). A Framework for the Integration of the Behavioral Sciences (with Open Commentaries and Author's Response). Behavioral and Brain Sciences, 30, 1-61.
  • Herrnstein, R. J. (1961). Relative and Absolute Strength of Response as a Function of Frequency of Reinforcement. Journal of the Experimental Analysis of Behavior, 4(4), 267-272.
  • Hu, H., & Brady, M. (1996). A Parallel Processing Architecture for Sensor-Based Control of Intelligent Mobile Robots. Robotics and Autonomous Systems, 17(4), 235-257.
  • Malle, B. F., & Knobe, J. (1997). The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33, 101-112.
  • Mataric, M. J. (1997). Behaviour-Based Control: Examples from Navigation, Learning, and Group Behaviour. Journal of Experimental & Theoretical Artificial Intelligence, 9(2 - 3), 323-336.
  • Mataric, M. J. (2002). Sensory-Motor Primitives as a Basis for Imitation: Linking Perception to Action and Biology to Robotics. Imitation in Animals and Artifacts, 391–422.
  • Nilsson, N. J. (1984). Shakey the Robot. SRI International.
  • Pfeifer, R., & Scheier, C. (1999). Understanding Intelligence. Cambridge, Mass.: MIT Press.
  • Platt, M. L., & Glimcher, P. W. (1999). Neural Correlates of Decision Variables in Parietal Cortex. Nature, 400(6741), 238.
  • Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A Neural Basis for Social Cooperation. Neuron, 35(2), 395-405.
  • Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300(5626), 1755-1758.