Justin Hart (Computer Science) and his adviser Brian Scassellati are pursuing research into human self-awareness using a humanoid robot, Nico. Their work stands at the crossroads of computer science and psychology and complements the work of psychologists, neuroscientists, sociologists, and anthropologists. They use computational models to emulate human social behavior and, in the process, to understand it in new ways. Recently, the lab (along with MIT, Stanford, and USC) won a $10 million grant from the National Science Foundation to create "socially assistive" robots that can serve as companions for children with special needs. These robots will help with everything from cognitive skills to getting the right amount of exercise.
Justin’s thesis research focuses primarily on “robots autonomously learning about their bodies and senses,” but he also explores human-robot interaction, “including projects on social presence, attributions of intentionality, and people’s perception of robots.”
In “Robotic Self-Models Inspired by Human Development,” a paper that he co-authored with Scassellati, Justin explains, “Our work is inspired by developmental psychology and neuroscience, and seeks to both improve the state-of-the-art in robotics by incorporating the ‘self’ into robotic reasoning processes, as well as further our knowledge of metacognition by modeling these forms that are found in humans.”
Recasting the problems of robotics in a self-reflective light, Justin aims to enable Nico to interact with its environment by learning about itself and, using this self-model, to reason about tasks.
“Consider, for instance, the interaction between a block and the robot’s gripper. While the gripper is always under its control, the block is temporarily under the robot’s control when in the gripper. By modeling the causal relationship between the gripper and the block, the robot will be able to learn how it is able to manipulate objects in its environment. Additionally, objects have causal relationships between each other. Blocks can be stacked on each other; if a ball in motion collides with another ball, it will cause that ball to move. By learning these object-to-object relations, the robot can learn not only what manipulations are possible on its environment but also, through this chain of causal relations, tool use.”
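The chaining idea described here can be sketched in a few lines. The snippet below is purely illustrative (not the lab's actual code): it records observed causal relations as directed edges and infers, by following chains of those edges, what the robot can ultimately influence.

```python
# A minimal sketch (hypothetical, not the lab's implementation) of chaining
# causal control relations: if the robot controls its gripper, and the
# gripper controls a held block, the robot can infer it controls the block.

def controls(agent, target, relations, seen=None):
    """Return True if 'agent' can influence 'target' through a chain
    of observed causal relations (directed edges)."""
    if seen is None:
        seen = set()
    if agent == target:
        return True
    seen.add(agent)
    return any(
        controls(mid, target, relations, seen)
        for src, mid in relations
        if src == agent and mid not in seen
    )

# Observed relations: the robot always controls its gripper; the gripper
# controls a grasped block; a moving ball can move another ball.
relations = {
    ("robot", "gripper"),
    ("gripper", "block"),
    ("ball_a", "ball_b"),
}

print(controls("robot", "block", relations))   # True: via the gripper
print(controls("robot", "ball_b", relations))  # False: no causal chain
```

Tool use fits the same pattern: once the robot learns an edge from a held stick to a distant object, the chain robot → gripper → stick → object tells it the object is reachable.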
Previous researchers have built robots that acquire knowledge of the external world through experience, but Nico is different from those that have preceded it. “Knowledge about the robot itself has generally been built in by the designer,” Justin says. “None of these representations offer the flexibility, robustness, and functionality that are present in people.”
Using the robot, Justin seeks to “emulate forms of self-awareness developed during human infancy. In particular, we are interested in the ability to reason about the robot’s embodiment and physical capabilities, with the robot building a model of itself through its experiences.” Programmed to observe its own body as it moves through space, Nico learns the relationship of its end-effectors (grippers, for example) and sensors (stereoscopic cameras) to each other and to the environment. It combines models of its perceptual and motor capabilities to learn where its body parts are with respect to each other, and will soon learn how those body parts can cause changes by interacting with objects in the environment.
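To give a flavor of what "building a model of itself through its experiences" can mean, here is a toy sketch (our illustration, with assumed numbers, not the lab's system): a planar two-link arm moves to random joint angles, observes where its gripper actually lands, and then estimates its own unknown link lengths by finding the kinematic model that best explains those observations.

```python
import math
import random

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector position of a planar 2-link arm with link lengths l1, l2."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# "True" body parameters the robot does not know a priori (assumed values).
TRUE_L1, TRUE_L2 = 0.30, 0.25

# The robot moves randomly and records (joint angles, observed gripper position).
random.seed(0)
observations = []
for _ in range(50):
    t1, t2 = random.uniform(-1.5, 1.5), random.uniform(-1.5, 1.5)
    observations.append((t1, t2, forward_kinematics(TRUE_L1, TRUE_L2, t1, t2)))

# Fit the self-model by grid search over candidate link lengths, minimizing
# squared error between predicted and observed positions.
best, best_err = None, float("inf")
for l1 in [i / 100 for i in range(10, 51)]:
    for l2 in [i / 100 for i in range(10, 51)]:
        err = sum(
            (px - qx) ** 2 + (py - qy) ** 2
            for t1, t2, (qx, qy) in observations
            for px, py in [forward_kinematics(l1, l2, t1, t2)]
        )
        if err < best_err:
            best, best_err = (l1, l2), err

print(best)  # recovers the true link lengths (0.3, 0.25)
```

The real system calibrates far richer models (stereo cameras, a full kinematic chain), but the principle is the same: self-knowledge is estimated from the robot's own movement and observation rather than hand-coded by the designer.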
One of his papers on this topic, “Mirror Perspective-Taking with a Humanoid Robot,” was recently accepted for presentation at the 26th Annual Conference on Artificial Intelligence, to be held in July in Toronto, Canada. “Mirror Perspective-Taking” describes how the robot is able to use self-knowledge regarding its body and senses to interpret what it sees when it interacts with a mirror, “allowing the interpretation of reflections in the mirror.” Nico, using knowledge that it has learned about itself, is able to use a mirror as an instrument for spatial reasoning, allowing it to accurately determine where objects are located in space based on their reflections, rather than naively believing them to exist behind the mirror.
An object reflected in a mirror is “a reflection of what actually exists in space,” Justin says. “If one were to naively reach towards these reflections, one’s hand would hit the glass of the mirror, rather than the object being reached for. By understanding this reflection, however, one is able to use the mirror as an instrument to make accurate inferences about the positions of objects in space based on their reflected appearances. When we check the rearview mirror on our car for approaching vehicles or use a bathroom mirror to aim a hairbrush, we make such instrumental use of these mirrors.”
Justin and his adviser have developed a model that allows for the robot to “determine a perspective that is consistent with its point of view when looking into a mirror. To do so, it uses self-knowledge about its body and senses in the form of kinematic and visual calibration information. To our knowledge, this is the first robotic system to attempt to use a mirror in this way, representing a significant step towards a cohesive architecture that allows robots to learn about their bodies and appearances through self-observation, and an important capability required in order to pass the Mirror Test.”
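The geometry underlying this kind of mirror use can be stated compactly. The sketch below is our own illustration (with assumed coordinates), not the paper's implementation: an object seen "in" a mirror appears at a virtual point behind the glass, and reflecting that virtual point back across the mirror plane recovers the object's true position in space.

```python
# Reflecting a virtual (behind-the-mirror) point across the mirror plane
# yields the object's actual position -- the inference a naive observer
# misses when reaching "into" the mirror.

def reflect_across_plane(point, plane_point, normal):
    """Reflect a 3D point across the plane through plane_point with unit normal."""
    # Signed distance from the point to the mirror plane.
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, normal))
    # Step back twice that distance along the plane normal.
    return tuple(p - 2 * d * n for p, n in zip(point, normal))

# Assumed setup: the mirror is the plane x = 1, with unit normal along +x.
mirror_point = (1.0, 0.0, 0.0)
mirror_normal = (1.0, 0.0, 0.0)

# The object appears (virtually) 0.5 m behind the glass, at x = 1.5.
virtual_position = (1.5, 0.2, 0.3)

# Its actual position is the reflection: 0.5 m in front of the glass.
actual = reflect_across_plane(virtual_position, mirror_point, mirror_normal)
print(actual)  # (0.5, 0.2, 0.3)
```

The hard part for the robot is not this final reflection but everything the quote describes before it: using its calibrated self-model to work out where the mirror plane is and which point of view the reflection is consistent with.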
The classic mirror test has traditionally been administered to animals to determine whether they understand that their reflections are actually images of themselves. The subject animals are first allowed to familiarize themselves with a mirror. They are then sedated, and a spot of dye is placed on their faces. When they awaken, if they notice the new spot of color in their reflection and then touch the place on their own face where the dye was applied, they “pass” the mirror test.
So far, no robot has successfully met this challenge, but the Social Robotics Lab is working on it.