Why can people use language to communicate with each other? How can we develop a robot that communicates with people in our daily lives?
We consider a symbolic system to be an “emergent property” of human society, in which people interact physically and socially. Symbols, including language, are not a static gift from some unknown being but a dynamic phenomenon. We take a constructive approach toward symbol emergence systems. We call this academic field “symbol emergence in robotics”: the attempt to develop a developmental autonomous robot that can acquire language in a bottom-up manner.
Human children acquire various behaviors by observing the daily activities of their parents. In addition, children learn many words and phrases by listening to their parents’ speech. Is it possible to build a robot that acquires various behaviors and words incrementally? Human children learn language naturally in their daily lives, yet this remains a very difficult challenge for current robotics research. We have recently been focusing on the double articulation structure that human motion and spoken language implicitly share, and we are developing a developmental robot that can learn language and behaviors through natural interaction.
1. Tadahiro Taniguchi, Keita Hamahata, and Naoto Iwahashi, Unsupervised Segmentation of Human Motion Data Using Sticky HDP-HMM and MDL-based Chunking Method for Imitation Learning, Advanced Robotics, Vol. 25 (17), pp. 2143–2172. (2011) [PDF]
2. Tadahiro Taniguchi and Shogo Nagasaka, Double Articulation Analyzer for Unsegmented Human Motion using Pitman-Yor Language Model and Infinite Hidden Markov Model, 2011 IEEE/SICE International Symposium on System Integration. (2011) [PDF]
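To give a rough flavor of the double articulation idea, the toy sketch below first quantizes a continuous signal into discrete “letters” and then greedily chunks frequent letter pairs into “words.” All function names and thresholds here are illustrative; the actual analyzer uses a sticky HDP-HMM for the first level and a nonparametric Bayesian language model for the second, not this simplistic scheme.

```python
import numpy as np
from collections import Counter

def quantize(signal, n_levels=3):
    """Level 1 (letters): map each sample to the nearest of n_levels centroids."""
    lo, hi = float(np.min(signal)), float(np.max(signal))
    centroids = np.linspace(lo, hi, n_levels)
    return [int(np.argmin(np.abs(centroids - x))) for x in signal]

def collapse_runs(letters):
    """Merge consecutive repeats: [0, 0, 1, 1, 2] -> [0, 1, 2]."""
    out = [letters[0]]
    for l in letters[1:]:
        if l != out[-1]:
            out.append(l)
    return out

def chunk_bigrams(letters, min_count=2):
    """Level 2 (words): repeatedly merge the most frequent adjacent pair."""
    seq = [(l,) for l in letters]
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), n = pairs.most_common(1)[0]
        if n < min_count:
            break
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                merged.append(a + b)  # concatenate the two chunks
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return seq

# A repeated motion pattern is recovered as two identical "words":
signal = np.array([0.0, 0.1, 0.5, 0.9, 1.0, 0.0, 0.1, 0.5, 0.9, 1.0])
words = chunk_bigrams(collapse_runs(quantize(signal)))
```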
The subjective world of an autonomous robot is totally different from that of humans. Therefore, a symbolic system suitable for an autonomous robot may also differ from a human one. Multimodal categorization and concept formation is a line of research in which we try to create a robot that can form its own concepts and categories. We hope that such a constructive approach will unveil the mystery of the language acquisition process of human infants and the brain mechanisms underlying it.
1. Takaya Araki, Tomoaki Nakamura, Takayuki Nagai, Shogo Nagasaka, Tadahiro Taniguchi, and Naoto Iwahashi, Online Learning of Concepts and Words Using Multimodal LDA and Hierarchical Pitman-Yor Language Model, IEEE/RSJ International Conference on Intelligent Robots and Systems 2012 (IROS 2012), pp. 1623–1630. (2012)
2. Tadahiro Taniguchi and Tetsuo Sawaragi, Incremental Acquisition of Behaviors and Signs based on Reinforcement Learning Schema Model and STDP, Advanced Robotics, Vol. 21 (10), pp. 1177–1199. (2007) [PDF]
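The core intuition behind multimodal categorization is that a latent category should explain features that co-occur across modalities (vision, touch, sound) for the same object. The sketch below illustrates this with a mixture of multinomials fit by EM over concatenated per-modality feature histograms; this is a deliberately simplified stand-in for the multimodal LDA used in the cited work, and all feature names and counts are made up.

```python
import numpy as np

def categorize(counts, n_cats=2, n_iter=50, seed=0):
    """counts: (n_objects, n_features) bag-of-features across all modalities.
    Returns the most probable latent category index for each object."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    n, d = counts.shape
    pi = np.full(n_cats, 1.0 / n_cats)             # category priors
    phi = rng.dirichlet(np.ones(d), size=n_cats)   # per-category feature dists
    for _ in range(n_iter):
        # E-step: responsibilities p(category | object), in the log domain
        log_r = np.log(pi) + counts @ np.log(phi).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate priors and feature distributions
        pi = r.mean(axis=0)
        phi = r.T @ counts + 1e-3                  # small smoothing
        phi /= phi.sum(axis=1, keepdims=True)
    return (np.log(pi) + counts @ np.log(phi).T).argmax(axis=1)

# Features: [visual:red, visual:green, haptic:soft, haptic:hard].
# Two red/soft objects and two green/hard objects fall into two categories:
counts = [[5, 0, 4, 0], [4, 0, 5, 0], [0, 5, 0, 4], [0, 4, 0, 5]]
categories = categorize(counts)
```

Because vision and touch are pooled into one histogram, an object's category is supported by evidence from both modalities at once, which is the same spirit as the LDA-based model even though the machinery is much simpler.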
A robot usually cannot directly and unambiguously perceive its location, and this is usually treated as a problem in robotics. Humans, however, seem to perceive their location abstractly to some extent. In other words, humans recognize their location in a more symbolic way, e.g., “in front of the table,” “in the bathroom,” and “on the sofa.” We are trying to combine the acquisition of such location words with the localization and mapping process. We will develop a simultaneous localization, mapping, and language acquisition technique that enables a robot to form a more integrated concept of location.
1. Akira Taniguchi, Haruki Yoshizaki, Tetsunari Inamura, and Tadahiro Taniguchi, Research on Simultaneous Estimation of Self-Location and Location Concepts, Transactions of the Institute of Systems, Control and Information Engineers, accepted. (in Japanese)
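As a toy illustration of tying location words to spatial position, the sketch below models each “location concept” as a Gaussian over map coordinates paired with a distribution over words, and computes a posterior over concepts given a position estimate and a heard word. The concept names, coordinates, and word probabilities are all invented for illustration; the cited system estimates the map, the robot pose, and the concepts jointly, which this sketch does not attempt.

```python
import numpy as np

# Hypothetical location concepts: a spatial Gaussian plus a word distribution.
concepts = {
    "kitchen": {"mu": np.array([0.0, 0.0]), "sigma": 0.5,
                "words": {"kitchen": 0.8, "table": 0.2}},
    "sofa":    {"mu": np.array([3.0, 1.0]), "sigma": 0.5,
                "words": {"sofa": 0.9, "table": 0.1}},
}

def gauss(x, mu, sigma):
    """Isotropic 2-D Gaussian density."""
    d = x - mu
    return np.exp(-0.5 * (d @ d) / sigma**2) / (2 * np.pi * sigma**2)

def concept_posterior(position, word):
    """p(concept | position, word), assuming a uniform prior over concepts
    and conditional independence of position and word given the concept."""
    scores = {}
    for name, c in concepts.items():
        p_word = c["words"].get(word, 1e-6)  # tiny floor for unseen words
        scores[name] = gauss(position, c["mu"], c["sigma"]) * p_word
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

# Near the origin and hearing "kitchen", the kitchen concept dominates:
posterior = concept_posterior(np.array([0.1, -0.1]), "kitchen")
```

Note how a shared word like “table” is disambiguated by position: the same utterance supports different concepts depending on where the robot believes it is, which is the motivation for estimating location and location concepts together.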