TY - JOUR
T1 - Direct Feedback Learning With Local Alignment Support
AU - Yang, Heesung
AU - Lee, Soha
AU - Park, Hyeyoung
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - While the backpropagation (BP) algorithm has been pivotal in enabling the success of modern deep learning technologies, it encounters challenges related to computational inefficiency and biological implausibility. In particular, the sequential propagation of error signals through forward weights in BP is not biologically plausible and prevents efficient parallel updates of learning parameters. To address these problems, the direct feedback alignment (DFA) method was proposed to propagate the error signal directly from the output layer to each hidden layer through random feedback weights, but the performance of DFA is still not comparable to that of BP, especially in complicated tasks with a large number of outputs and in convolutional neural network models. In this paper, we propose a method to adjust the feedback weights in DFA using additional local modules connected to the hidden layers. The local module attached to each hidden layer has a single-layer structure and learns to mimic the final output of the network. The weights of a local module thus behave like a direct path connecting each hidden layer to the network output, which has an inverse relationship to the direct feedback weights of DFA. We use this relationship to update the feedback weights of DFA. Through experimental investigation, we confirm that the proposed adaptive feedback weights improve the alignment of the error signal of DFA with that of BP. Furthermore, comparative experiments show that the proposed method significantly outperforms the original DFA on well-known benchmark datasets. The code used for the experiments is available at https://github.com/leibniz21c/direct-feedback-learning-with-local-alignment-support.
AB - While the backpropagation (BP) algorithm has been pivotal in enabling the success of modern deep learning technologies, it encounters challenges related to computational inefficiency and biological implausibility. In particular, the sequential propagation of error signals through forward weights in BP is not biologically plausible and prevents efficient parallel updates of learning parameters. To address these problems, the direct feedback alignment (DFA) method was proposed to propagate the error signal directly from the output layer to each hidden layer through random feedback weights, but the performance of DFA is still not comparable to that of BP, especially in complicated tasks with a large number of outputs and in convolutional neural network models. In this paper, we propose a method to adjust the feedback weights in DFA using additional local modules connected to the hidden layers. The local module attached to each hidden layer has a single-layer structure and learns to mimic the final output of the network. The weights of a local module thus behave like a direct path connecting each hidden layer to the network output, which has an inverse relationship to the direct feedback weights of DFA. We use this relationship to update the feedback weights of DFA. Through experimental investigation, we confirm that the proposed adaptive feedback weights improve the alignment of the error signal of DFA with that of BP. Furthermore, comparative experiments show that the proposed method significantly outperforms the original DFA on well-known benchmark datasets. The code used for the experiments is available at https://github.com/leibniz21c/direct-feedback-learning-with-local-alignment-support.
KW - Backpropagation
KW - biologically plausible learning
KW - direct feedback alignment
KW - local alignment support module
KW - random feedback weight
UR - http://www.scopus.com/inward/record.url?scp=85195421133&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3409819
DO - 10.1109/ACCESS.2024.3409819
M3 - Article
AN - SCOPUS:85195421133
SN - 2169-3536
VL - 12
SP - 81388
EP - 81397
JO - IEEE Access
JF - IEEE Access
ER -