TY - JOUR
T1 - A self-trainable depth perception method from eye pursuit and motion parallax
AU - Prucksakorn, Tanapol
AU - Jeong, Sungmoon
AU - Chong, Nak Young
N1 - Publisher Copyright:
© 2018 Elsevier B.V.
PY - 2018/11
Y1 - 2018/11
N2 - When humans move in a lateral direction (frontal plane), they intuitively understand the motion parallax phenomenon while jointly developing sensory neurons and pursuit eye movements through life-long learning experience. In this process, various ranges of motion parallax effects are used to extract meaningful information such as the relative depth of variously positioned objects and the spatial separation between the observer and the fixated object (absolute distance). By mimicking visual learning in mammals to realize an autonomous robot system, a visual learning framework (Prucksakorn, 2016) was proposed to concurrently develop both visual sensory coding and pursuit eye movement, with the addition of depth perception. Within that framework, an artificial neural network was used to learn the relationship between eye movements and absolute distance. Nonetheless, the framework is limited in that a predefined single lateral body movement cannot fully evoke the motion parallax effect for depth perception. Here, we extend the visual learning framework to accurately and autonomously represent various ranges of absolute distance by using pursuit eye movements from multiple lateral body movements. We show that the proposed model, implemented in a HOAP3 humanoid robot simulator, successfully enhances smooth pursuit eye movement control with a self-calibrating ability and improves distance estimation compared to the single-lateral-movement-based approach.
AB - When humans move in a lateral direction (frontal plane), they intuitively understand the motion parallax phenomenon while jointly developing sensory neurons and pursuit eye movements through life-long learning experience. In this process, various ranges of motion parallax effects are used to extract meaningful information such as the relative depth of variously positioned objects and the spatial separation between the observer and the fixated object (absolute distance). By mimicking visual learning in mammals to realize an autonomous robot system, a visual learning framework (Prucksakorn, 2016) was proposed to concurrently develop both visual sensory coding and pursuit eye movement, with the addition of depth perception. Within that framework, an artificial neural network was used to learn the relationship between eye movements and absolute distance. Nonetheless, the framework is limited in that a predefined single lateral body movement cannot fully evoke the motion parallax effect for depth perception. Here, we extend the visual learning framework to accurately and autonomously represent various ranges of absolute distance by using pursuit eye movements from multiple lateral body movements. We show that the proposed model, implemented in a HOAP3 humanoid robot simulator, successfully enhances smooth pursuit eye movement control with a self-calibrating ability and improves distance estimation compared to the single-lateral-movement-based approach.
KW - Active depth perception
KW - Developmental vision
KW - Eye pursuit
KW - Motion parallax
KW - Sensory-motor coordination
UR - http://www.scopus.com/inward/record.url?scp=85052896843&partnerID=8YFLogxK
U2 - 10.1016/j.robot.2018.08.009
DO - 10.1016/j.robot.2018.08.009
M3 - Article
AN - SCOPUS:85052896843
SN - 0921-8890
VL - 109
SP - 27
EP - 37
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
ER -