Adaptive natural gradient learning algorithms for various stochastic models

H. Park, S. I. Amari, K. Fukumizu

Research output: Contribution to journal › Article › peer-review

133 Scopus citations

Abstract

The natural gradient method has ideal dynamic behavior: it resolves the slow convergence of standard gradient descent caused by plateaus. However, it requires computing the Fisher information matrix and its inverse, which makes a direct implementation of the natural gradient practically infeasible. To overcome this problem, a preliminary study proposed an adaptive method for estimating the inverse of the Fisher information matrix, called the adaptive natural gradient learning method. In this paper, we show that the adaptive natural gradient method can be extended to a wide class of stochastic models: regression with an arbitrary noise model and classification with an arbitrary number of classes. We give explicit forms of the adaptive natural gradient for these models and confirm the practical advantage of the proposed algorithms through computational experiments on benchmark problems. Copyright (C) 2000 Elsevier Science Ltd.
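The core idea, tracking an estimate of the inverse Fisher information matrix recursively instead of inverting it at every step, can be sketched on a toy problem. The sketch below is not the paper's algorithm verbatim: it uses a Sherman-Morrison rank-one update of the inverse (an exact variant of the kind of recursive estimate the abstract describes), and the data, model (unit-variance Gaussian regression), and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (hypothetical data, not from the paper):
# y = w_true . x + unit-variance Gaussian noise.
n, d = 4000, 2
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0])
y = X @ w_true + rng.normal(size=n)

w = np.zeros(d)        # model parameters
F_inv = np.eye(d)      # running estimate of the inverse Fisher matrix
eps, eta = 0.01, 0.02  # estimation rate and learning rate (illustrative)

for x, target in zip(X, y):
    # Per-sample gradient of the negative log-likelihood for the
    # unit-variance Gaussian regression model: g = (w.x - y) x.
    g = (w @ x - target) * x
    # Recursive inverse-Fisher update: apply the Sherman-Morrison formula
    # to F <- (1 - eps) F + eps g g^T, so no explicit matrix inversion
    # is ever performed.
    v = F_inv @ g
    F_inv = (F_inv - np.outer(v, v) * eps / (1 - eps + eps * (g @ v))) / (1 - eps)
    # Natural-gradient step: precondition the raw gradient with F_inv.
    w -= eta * (F_inv @ g)

print(w)  # should approach w_true = [2, -1]
```

Because the rank-one correction is applied directly to the inverse, the per-step cost is O(d^2) rather than the O(d^3) of repeated inversion, which is what makes natural gradient learning practical for larger models.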

Original language: English
Pages (from-to): 755-764
Number of pages: 10
Journal: Neural Networks
Volume: 13
Issue number: 7
DOIs
State: Published - 2000

Keywords

  • Adaptive natural gradient learning
  • Feedforward neural network
  • Gradient descent learning
  • Natural gradient learning
  • Plateau problem
