First, what can robots do today? Well, a lot of things. The winner of a recent contest sponsored by iRobot can:
- retrieve water
- water plants
- fill a dog water bowl
- control a VCR and TV
- turn lights and other appliances on/off
- play music
- dance and entertain
- provide mobile video security for the home
- remind the elderly to take medicine
The perfect roommate!
Look at how credible humanoid robots are becoming. Here are Sony's dancing robots:
And an "actroid" robot (YouTube source):
Did you feel weird when you saw her? That is what roboticists call the uncanny valley:
as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong repulsion. However, as the appearance and motion continue to become less distinguishable from a human being, the emotional response becomes positive once more and approaches human-to-human empathy levels.
It may have to do with our folk psychology. Gray et al. (2007) conducted a series of experiments aimed at understanding our intuitive conception of "mindedness" (see this post). Subjects answered a web survey in which they made pairwise comparisons, on a five-point scale, of different human and non-human characters across many mental capacities. For instance, they had to decide who, between a baby and a chimpanzee, could feel more pain. The experimenters then identified the factors that could account for the results. The statistical analysis (a principal component factor analysis) revealed that the similarity space of the "agent" concept is made of an Agency dimension (self-control, planning, communication, etc.) and an Experience dimension (pain, pleasure, consciousness, etc.). Babies, dogs and chimpanzees score high in Experience but low in Agency; adult humans score high on both dimensions, while God scores high in Agency only. Robots have a middling score in Agency (between dead people and God) and a null score in Experience. So the uncanny feeling might have something to do with the fact that we usually feel revulsion toward dead bodies; a humanoid robot might trigger the same mechanisms.
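To get an intuition for this kind of analysis, here is a minimal sketch in Python. The ratings below are entirely made up for illustration (Gray et al. actually collected pairwise survey comparisons and ran a factor analysis); this toy version just runs a plain PCA on a character-by-capacity matrix and projects each character onto the first two components, which in the real study came out as Agency and Experience.

```python
import numpy as np

# Hypothetical ratings (rows: characters, columns: mental capacities).
# These numbers are invented for illustration only.
characters = ["baby", "dog", "chimpanzee", "adult human", "God", "robot", "dead person"]
capacities = ["pain", "pleasure", "consciousness",          # Experience-type items
              "self-control", "planning", "communication"]  # Agency-type items
ratings = np.array([
    [0.9, 0.9, 0.8, 0.1, 0.1, 0.2],  # baby
    [0.8, 0.8, 0.6, 0.2, 0.1, 0.2],  # dog
    [0.9, 0.8, 0.7, 0.3, 0.3, 0.3],  # chimpanzee
    [0.9, 0.9, 0.9, 0.9, 0.9, 0.9],  # adult human
    [0.1, 0.2, 0.8, 1.0, 1.0, 0.9],  # God
    [0.0, 0.0, 0.1, 0.5, 0.6, 0.6],  # robot
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # dead person
])

# PCA via singular value decomposition of the column-centered matrix.
centered = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Each character's coordinates on the first two principal components.
scores = centered @ Vt[:2].T

# Fraction of total variance carried by each component.
explained = S**2 / np.sum(S**2)

for name, (pc1, pc2) in zip(characters, scores):
    print(f"{name:12s} PC1={pc1:+.2f}  PC2={pc2:+.2f}")
print(f"first two components explain {explained[:2].sum():.0%} of the variance")
```

With data shaped like the study's results, the two leading components separate characters much as the Agency/Experience plot does: adult humans far from dead people on both axes, robots and God high on one axis but low on the other.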
Honda's Asimo (less uncanny) is also getting smarter: see how (he? she? it?) navigates his/her/its environment and interacts with humans.
What about the future?
According to David Levy, in his new book Love and Sex with Robots: The Evolution of Human-Robot Relationships, people will one day have sex with robots (remember Jude Law in Artificial Intelligence?).
"There will be different personalities and different likes and dislikes," he said. "When you buy your robot, you'll be able to select what kind of personality it will have. It'll be like ordering something on the Internet. What kind of emotional makeup will it have? How it should look. The size and hair color. The sound of its voice. Whether it's funny, emotional, conservative."
When? In about 30 years. And that is where I start laughing (loudly). Since the beginning of AI, researchers have always promised that machines will be thinking within the next 30 or 50 years. See what Turing said in 1950:
I believe that in about fifty years' time it will be possible to programme computers (…) to make them play the imitation game [the Turing Test] so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning
Find an old Scientific American or any other science journal, and you will see that computer scientists routinely make the same predictions. Always between 30 and 50 years: enough time to be forgotten if you're wrong, but just enough to still be around to say "I told you so!" if you're right.
Not that I don't believe that machines can be conscious or intelligent. I do. We are all conscious evolved machines, so a priori, there is no reason why AI (or, more probably, A-Life) could not succeed. The thing is just that people should remember that futurology is not an exact science...
One thing, however, that I am ready to put my money on is the future of robotic warfare. Trust the army for that: if robots can provide an advantage in war, the army will develop them. A recent article in the Armed Forces Journal envisioned futuristic scenarios of robotic warfare (not as futurology, however, but as ethics). I liked the conclusion, though:
There are no new ethical dilemmas created by robots being used today. Every system is either under continuous, direct control by humans, or performing limited semiautonomous actions that are dictated by humans. These robots represent nothing more than highly sophisticated weapons, and the user retains full responsibility for their implementation.
The potential for a new ethical dilemma to emerge comes from the approaching capability to create completely autonomous robots. As we advance across the field of possibilities from advanced weapons to semiautonomous weapons to completely autonomous weapons, we need to understand the ethical implications involved in building robots that can make independent decisions. We must develop a distinction between weapons that augment our soldiers and those that can become soldiers. Determining where to place responsibility can begin only with a clear definition of who is making the decisions.
It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework. Without the moral framework, its creator and operator will always be the focus of responsibility for the robot’s actions. With or without a moral framework, a fully autonomous decision-maker will be responsible for its actions. For it to make the best moral decisions, it must be equipped with guilt parameters that guide its decision-making cycle while inhibiting its ability to make wrong decisions. Robots must represent our best philosophy or remain in the category of our greatest tools.