TY - JOUR
T1 - Knowledge Distillation-Aided End-to-End Learning for Linear Precoding in Multiuser MIMO Downlink Systems with Finite-Rate Feedback
AU - Kong, Kyeongbo
AU - Song, Woo Jin
AU - Min, Moonsik
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/10/1
Y1 - 2021/10/1
AB - We propose a deep learning-based channel estimation, quantization, feedback, and precoding method for downlink multiuser multiple-input multiple-output systems. In the proposed system, channel estimation and quantization for limited feedback are handled by a receiver deep neural network (DNN), and precoder selection is handled by a transmitter DNN. To emulate traditional channel quantization, a binarization layer is adopted at each receiver DNN; this layer also enables end-to-end learning. However, binarization can lead to inaccurate gradients, which can trap the receiver DNNs in a poor local minimum during training. To address this, we consider knowledge distillation, in which the existing DNNs are jointly trained with an auxiliary transmitter DNN. Using the auxiliary DNN as a teacher network allows the receiver DNNs to additionally exploit lossless gradients, which helps them avoid a poor local minimum. For the same number of feedback bits, the proposed DNN-based precoding scheme achieves a higher downlink rate than conventional linear precoding with codebook-based limited feedback.
KW - Deep learning
KW - limited feedback
KW - linear precoding
KW - multiple-input multiple-output
KW - spatial multiplexing
UR - http://www.scopus.com/inward/record.url?scp=85114733944&partnerID=8YFLogxK
U2 - 10.1109/TVT.2021.3110608
DO - 10.1109/TVT.2021.3110608
M3 - Article
AN - SCOPUS:85114733944
SN - 0018-9545
VL - 70
SP - 11095
EP - 11100
JO - IEEE Transactions on Vehicular Technology
JF - IEEE Transactions on Vehicular Technology
IS - 10
ER -