Adaptive natural gradient method for learning of stochastic neural networks in mini-batch mode

Hyeyoung Park, Kwanyong Lee

Research output: Contribution to journal › Article › peer-review

Abstract

The gradient descent method is an essential algorithm for training neural networks. Among the many variations of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of information geometry on the stochastic neuromanifold and is known to have ideal convergence properties. Despite its theoretical advantages, the pure natural gradient has limitations that prevent its practical use: obtaining its explicit value requires knowing the true probability distribution of the input variables and inverting a matrix whose size is the square of the number of parameters. Although an adaptive estimation of the natural gradient has been proposed as a solution, it was originally developed for the online learning mode, which is computationally inefficient when training on large data sets. In this paper, we propose a novel adaptive natural gradient estimation for the mini-batch learning mode, which is commonly adopted for big data analysis. For two representative stochastic neural network models, we present explicit parameter update rules and a learning algorithm. Through experiments on three benchmark problems, we confirm that the proposed method has superior convergence properties compared with conventional methods.
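To make the idea concrete, the following is a minimal Python/NumPy sketch of an adaptively estimated natural gradient update in mini-batch mode. It is not the paper's exact update rule: the function name adaptive_natural_gradient_step, the per-sample gradient callback grad_fn, and the step sizes eps and lr are illustrative assumptions. The inverse Fisher estimate is refreshed with a rank-one correction averaged over the batch, in the spirit of the classical online adaptive scheme, and the parameters then move along the estimated natural gradient.

    import numpy as np

    def adaptive_natural_gradient_step(theta, grad_fn, batch, G_inv, eps=0.01, lr=0.1):
        """One mini-batch update with an adaptively estimated inverse Fisher matrix.

        A sketch under stated assumptions, not the authors' exact rule:
        the inverse Fisher estimate G_inv is updated as
            G_inv <- (1 + eps) * G_inv - eps * (G_inv g)(G_inv g)^T,
        with the rank-one term averaged over the mini-batch, and the
        parameters follow the estimated natural gradient direction.
        """
        # Per-sample loss gradients, shape (batch_size, n_params).
        per_sample_grads = np.stack([grad_fn(theta, x) for x in batch])

        # Rank-one correction to the inverse Fisher estimate, averaged over the batch.
        correction = np.zeros_like(G_inv)
        for g in per_sample_grads:
            v = G_inv @ g
            correction += np.outer(v, v)
        correction /= len(batch)
        G_inv = (1.0 + eps) * G_inv - eps * correction

        # Natural-gradient step along the mini-batch mean gradient.
        mean_grad = per_sample_grads.mean(axis=0)
        theta = theta - lr * (G_inv @ mean_grad)
        return theta, G_inv

Starting from G_inv = np.eye(len(theta)) and iterating this step over mini-batches gives the general flavor of the approach; it avoids an explicit matrix inversion at every step, which is the practical bottleneck the abstract refers to.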

Original language: English
Article number: 4568
Journal: Applied Sciences (Switzerland)
Volume: 9
Issue number: 21
DOIs
State: Published - 1 Nov 2019

Keywords

  • Gradient descent learning algorithm
  • Mini-batch learning mode
  • Natural gradient
  • Online learning mode
  • Stochastic neural networks
