Abstract

Appropriate weight initialization, together with the ReLU activation function, has become a cornerstone of modern deep learning, enabling the training and deployment of highly effective and efficient neural network models across diverse areas of artificial intelligence. The “dying ReLU” problem, in which ReLU neurons become inactive and output zero for all inputs, poses a significant challenge when training deep neural networks with the ReLU activation function. Theoretical analyses and various remedies have been proposed to address this problem; nevertheless, training remains difficult for extremely deep and narrow feedforward networks with the ReLU activation function. In this paper, we propose a novel weight initialization method to address this issue. We establish several properties of our initial weight matrix and show how these properties enable the effective propagation of signal vectors. Through a series of experiments and comparisons with existing methods, we demonstrate the effectiveness of the proposed initialization.
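The paper’s initialization itself is not reproduced here, but the failure mode it targets is easy to observe. The sketch below (NumPy; the network shape, batch size, and the use of standard He initialization are illustrative assumptions, not details from the paper) propagates a batch of probe inputs through a deep, narrow ReLU network and counts units that are “dead”, i.e. output zero for every probe input:

    import numpy as np

    # Illustrative demonstration of the dying-ReLU problem in a deep, narrow
    # feedforward network under standard He initialization (not the paper's
    # proposed method). Biases are initialized to zero and omitted.
    rng = np.random.default_rng(0)
    depth, width = 100, 4                   # extremely deep and narrow
    x = rng.standard_normal((256, width))   # batch of probe inputs

    dead_fraction = []
    h = x
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)  # He init
        h = np.maximum(h @ W, 0.0)          # ReLU activation
        # A unit is "dead" if it outputs zero for every probe input.
        dead_fraction.append(np.mean(np.all(h == 0.0, axis=0)))

    print(f"dead units in final layer: {dead_fraction[-1]:.0%}")
    print(f"layers with all units dead: {sum(f == 1.0 for f in dead_fraction)}")

Once every unit in some layer is dead, all subsequent layers output zero as well, so the signal cannot recover; at small widths this typically happens well before layer 100, which is the regime the proposed initialization is designed to address.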

Original language: English
Article number: 106362
Journal: Neural Networks
Volume: 176
DOIs
State: Published - Aug 2024

Keywords

  • Deep learning
  • Feedforward neural networks
  • Initial weight matrix
  • ReLU activation function
  • Weight initialization
