TY - GEN
T1 - A Joint Learning Framework of Visual Sensory Representation, Eye Movements and Depth Representation for Developmental Robotic Agents
AU - Prucksakorn, Tanapol
AU - Jeong, Sungmoon
AU - Chong, Nak Young
N1 - Publisher Copyright:
© 2017, Springer International Publishing AG.
PY - 2017
Y1 - 2017
N2 - In this paper, we propose a novel visual learning framework for developmental robotic agents that mimics the developmental learning process of human infants. It enables an agent to autonomously perceive depth by simultaneously developing its visual sensory representation, eye movement control, and depth representation knowledge, integrating multiple visual depth cues during self-induced lateral body movement. Based on active efficient coding (AEC) theory, sparse coding and reinforcement learning are tightly coupled through a shared, unified cost function that improves the performance of both the sensory coding model and the eye motor control. The eye motor control signals generated for the different visual depth cues are then used together as inputs to a multi-layer neural network that learns to represent the given depth through simple human-robot interaction. We show that the proposed learning framework, implemented on the HOAP-3 humanoid robot simulator, can effectively and autonomously develop visual sensory representation, eye motor control, and depth perception with self-calibrating ability at the same time.
AB - In this paper, we propose a novel visual learning framework for developmental robotic agents that mimics the developmental learning process of human infants. It enables an agent to autonomously perceive depth by simultaneously developing its visual sensory representation, eye movement control, and depth representation knowledge, integrating multiple visual depth cues during self-induced lateral body movement. Based on active efficient coding (AEC) theory, sparse coding and reinforcement learning are tightly coupled through a shared, unified cost function that improves the performance of both the sensory coding model and the eye motor control. The eye motor control signals generated for the different visual depth cues are then used together as inputs to a multi-layer neural network that learns to represent the given depth through simple human-robot interaction. We show that the proposed learning framework, implemented on the HOAP-3 humanoid robot simulator, can effectively and autonomously develop visual sensory representation, eye motor control, and depth perception with self-calibrating ability at the same time.
UR - http://www.scopus.com/inward/record.url?scp=85035231422&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-70090-8_88
DO - 10.1007/978-3-319-70090-8_88
M3 - Conference contribution
AN - SCOPUS:85035231422
SN - 9783319700892
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 867
EP - 876
BT - Neural Information Processing - 24th International Conference, ICONIP 2017, Proceedings
A2 - Liu, Derong
A2 - Xie, Shengli
A2 - El-Alfy, El-Sayed M.
A2 - Zhao, Dongbin
A2 - Li, Yuanqing
PB - Springer Verlag
T2 - 24th International Conference on Neural Information Processing, ICONIP 2017
Y2 - 14 November 2017 through 18 November 2017
ER -