TY - GEN
T1 - An efficient learning algorithm using natural gradient and second order information of error surface
AU - Park, Hyeyoung
AU - Fukumizu, Kenji
AU - Amari, Shun-ichi
AU - Lee, Yillbyung
PY - 2000
Y1 - 2000
N2 - The natural gradient learning algorithm, which originated in information geometry, is known to provide a good solution to the problem of slow learning speed in gradient descent learning methods. Whereas the natural gradient learning algorithm is inspired by the geometric structure of the space of learning systems, there have been other approaches to accelerating learning by using the second-order information of the error surface. Although the second-order methods cannot give solutions as successful as those of the natural gradient learning method, their results showed the usefulness of the second-order information of the error surface in the learning process. In this paper, we develop a method of combining these two different approaches to propose a more efficient learning algorithm. At each learning step, we calculate a search direction by means of the natural gradient. When we apply the search direction to the parameter-updating process, the second-order information of the error surface is used to determine an efficient learning rate. Through a simple experiment on a real-world problem, we confirmed that the proposed learning algorithm shows faster convergence than the pure natural gradient learning algorithm.
AB - The natural gradient learning algorithm, which originated in information geometry, is known to provide a good solution to the problem of slow learning speed in gradient descent learning methods. Whereas the natural gradient learning algorithm is inspired by the geometric structure of the space of learning systems, there have been other approaches to accelerating learning by using the second-order information of the error surface. Although the second-order methods cannot give solutions as successful as those of the natural gradient learning method, their results showed the usefulness of the second-order information of the error surface in the learning process. In this paper, we develop a method of combining these two different approaches to propose a more efficient learning algorithm. At each learning step, we calculate a search direction by means of the natural gradient. When we apply the search direction to the parameter-updating process, the second-order information of the error surface is used to determine an efficient learning rate. Through a simple experiment on a real-world problem, we confirmed that the proposed learning algorithm shows faster convergence than the pure natural gradient learning algorithm.
UR - http://www.scopus.com/inward/record.url?scp=84867774475&partnerID=8YFLogxK
U2 - 10.1007/3-540-44533-1_23
DO - 10.1007/3-540-44533-1_23
M3 - Conference contribution
AN - SCOPUS:84867774475
SN - 3540679251
SN - 9783540679257
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 199
EP - 207
BT - PRICAI 2000, Topics in Artificial Intelligence - 6th Pacific Rim International Conference on Artificial Intelligence, Proceedings
A2 - Mizoguchi, Riichiro
A2 - Slaney, John
PB - Springer Verlag
T2 - 6th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2000
Y2 - 28 August 2000 through 1 September 2000
ER -