
Year: 2007-
Members:
- Tomoya Suzuki
- Kim Gukleem
- Masaaki Iwaoka
- Kenji Suzuki
Partners:
- SSSA, Italy
Tags:
- Cognitive Robotics
- Cybernics

 
Active and multimodal sensing by a humanoid
Learning from Long-term and Multimodal Interaction between Human and Humanoid Robot

 

We have been developing a synthetic methodology for understanding human-humanoid interaction and for building an environment in which a humanoid robot learns from long-term, continuous interaction. Following this scenario, we built the humanoid robot Genie, which is designed to operate 24 hours a day, 365 days a year, learning from interaction with its surrounding environment, including humans.

We consider that a key issue for future cognitive machines is the internal representation of spatio-temporal patterns, in addition to their external representation. The motivation of a robot or machine is also an important point of view for understanding action-oriented perception: when a robot has to explain why a particular move was made, it must explain its motivation, so its reasoning is tightly tied to task achievement. For example, we humans cannot recognize an object without biases such as prior knowledge of the object and its environment.

We are also developing a synthetic methodology for a humanoid robot to understand the dynamics of objects within a psychological framework. The robot's action selection mechanism is implemented as a selection probability composed of several internal variables, taking inspiration from human motivation mechanisms. This lets the robot classify its action space according to the sensory feedback caused by its own actions, and the system gives priority to actions that are expected to produce distinguishing sensory patterns. The resulting action selection probability allows the robot to explore unknown spaces and to understand the dynamics of objects in a real environment. Preliminary experimental results show that, by repeatedly interacting with several different types of object, the robot builds knowledge about common object movements and can also predict the movement of unknown objects.
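As a rough illustration of this idea (a minimal sketch, not the implementation used on Genie), the following Python code shows one way a motivation-weighted selection probability could be computed: each action keeps a running estimate of how predictable its sensory feedback is, an internal curiosity variable weights this expected novelty, and a softmax turns the scores into selection probabilities. The class name, the internal variables, and the specific novelty measure are assumptions for illustration only.

import math
import random
from collections import defaultdict

class MotivationActionSelector:
    """Sketch of motivation-oriented action selection.

    Each action keeps a simple running model of the sensory feedback it
    produces. Actions whose outcomes are still poorly predicted (i.e. are
    expected to yield distinguishing sensory patterns) receive a higher
    selection probability, shaped by internal motivation variables.
    """

    def __init__(self, actions, curiosity=1.0, temperature=0.5):
        self.actions = list(actions)
        self.curiosity = curiosity              # internal variable: drive to explore
        self.temperature = temperature          # softmax sharpness
        self.counts = defaultdict(int)          # how often each action was tried
        self.error = defaultdict(lambda: 1.0)   # running prediction error per action

    def expected_novelty(self, action):
        # Untried actions are maximally novel; otherwise use the running
        # prediction error as a proxy for how distinguishing the feedback is.
        return self.error[action] if self.counts[action] > 0 else 1.0

    def selection_probabilities(self):
        # Selection probability = softmax over motivation-weighted novelty.
        scores = [self.curiosity * self.expected_novelty(a) / self.temperature
                  for a in self.actions]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def select(self):
        return random.choices(self.actions,
                              weights=self.selection_probabilities(), k=1)[0]

    def update(self, action, prediction_error):
        # Sensory feedback caused by the robot's own action updates the
        # per-action model; well-predicted actions lose priority over time.
        self.counts[action] += 1
        alpha = 1.0 / self.counts[action]
        self.error[action] += alpha * (prediction_error - self.error[action])


if __name__ == "__main__":
    selector = MotivationActionSelector(["push", "lift", "rotate"])
    for _ in range(20):
        a = selector.select()
        # In the real system the prediction error would come from comparing
        # predicted and observed object motion; here it is simulated.
        selector.update(a, random.random())
    print(dict(zip(selector.actions, selector.selection_probabilities())))

In this sketch the selection probabilities start uniform and gradually shift toward actions whose sensory consequences remain hard to predict, which is one simple way to realize the exploration behavior described above.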


 


This work is partly supported by Grants-in-Aid for Scientific Research, MEXT, Japan.


This study was supported in part by the Global COE Program "Cybernics: fusion of human, machine, and information systems".

This is a collaborative work with Scuola Superiore Sant'Anna, Italy.

     
Publications
  • Suzuki, K., Harada, A. and Suzuki, T., "Learning from Long-term and Multimodal Interaction between Human and Humanoid Robot," Proc. of Intl. Conf. of the IEEE Industrial Electronics Society (IECON2008), USA, pp. 3419-3424, 2008.
  • Jin, G. and Suzuki, K., "Learning from object motion using visual saliency and speech phonemes by humanoid robot," Proc. of IEEE Intl. Conf. on Robotics and Biomimetics (ROBIO), pp. 1495-1500, 2009.
  • Suzuki, T. and Suzuki, K., "Motivation oriented action selection for understanding dynamics of objects," Proc. of 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, pp. 846-851, 2008.
   
     

 

