TY - GEN
T1 - Parameter Reduction for Deep Neural Network Based Acoustic Models Using Sparsity Regularized Factorization Neurons
AU - Chung, Hoon
AU - Chung, Euisok
AU - Park, Jeon Gue
AU - Jung, Ho Young
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - In this paper, we propose a deep neural network (DNN) model parameter reduction technique for an efficient acoustic model. One of the most common DNN model parameter reduction techniques is low-rank matrix approximation. Although it can reduce a significant number of model parameters, two problems must be considered: one is performance degradation, and the other is appropriate rank selection. To solve these problems, retraining is carried out and the so-called explained variance is used. However, retraining takes additional time, and explained variance is not directly related to classification performance. Therefore, to mitigate these problems, we propose an approach that performs model parameter reduction simultaneously during model training from the perspective of minimizing classification error. The proposed method uses the product of three factorized matrices instead of a dense weight matrix, and applies a sparsity constraint to drive entries of the center diagonal matrix to zero. After training, a parameter-reduced model can be obtained by discarding the left and right vectors corresponding to zero entries of the center diagonal matrix.
AB - In this paper, we propose a deep neural network (DNN) model parameter reduction technique for an efficient acoustic model. One of the most common DNN model parameter reduction techniques is low-rank matrix approximation. Although it can reduce a significant number of model parameters, two problems must be considered: one is performance degradation, and the other is appropriate rank selection. To solve these problems, retraining is carried out and the so-called explained variance is used. However, retraining takes additional time, and explained variance is not directly related to classification performance. Therefore, to mitigate these problems, we propose an approach that performs model parameter reduction simultaneously during model training from the perspective of minimizing classification error. The proposed method uses the product of three factorized matrices instead of a dense weight matrix, and applies a sparsity constraint to drive entries of the center diagonal matrix to zero. After training, a parameter-reduced model can be obtained by discarding the left and right vectors corresponding to zero entries of the center diagonal matrix.
KW - deep neural network
KW - matrix factorization
KW - Speech recognition
UR - http://www.scopus.com/inward/record.url?scp=85073238111&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2019.8852021
DO - 10.1109/IJCNN.2019.8852021
M3 - Conference contribution
AN - SCOPUS:85073238111
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2019 International Joint Conference on Neural Networks, IJCNN 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 International Joint Conference on Neural Networks, IJCNN 2019
Y2 - 14 July 2019 through 19 July 2019
ER -