A Low-Complexity Algorithm for a Reinforcement Learning-Based Channel Estimator for MIMO Systems

Tae Kyoung Kim, Moonsik Min

Research output: Contribution to journal › Article › peer-review


Abstract

This paper proposes a low-complexity algorithm for a reinforcement learning-based channel estimator for multiple-input multiple-output (MIMO) systems. The proposed channel estimator uses detected symbols to reduce the channel estimation error. However, the detected data symbols may contain errors at the receiver owing to the characteristics of wireless channels, so they are only selectively used as additional pilot symbols. To this end, a Markov decision process (MDP) problem is formulated to optimize the selection of the detected data symbols, and a reinforcement learning algorithm is developed to solve the MDP problem with computational efficiency. The developed algorithm derives the optimal policy in closed form by introducing backup samples and data subblocks, which reduces latency and complexity. Simulation results show that the proposed channel estimator significantly reduces the minimum mean-square error of the channel estimates, thereby improving the block error rate compared with conventional channel estimation.
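To make the data-aided idea in the abstract concrete, the following is a minimal sketch, not the authors' algorithm: it augments the pilot block with detected data symbols whose reliability exceeds a fixed threshold and re-runs a least-squares channel estimate. The threshold rule stands in for the paper's RL-derived selection policy, and the reliability metric, symbol alphabet, and all function names are illustrative assumptions.

```python
import numpy as np

def ls_channel_estimate(Y, X):
    """Least-squares estimate H_hat = Y X^H (X X^H)^{-1} for the model Y = H X + N."""
    Xh = X.conj().T
    return Y @ Xh @ np.linalg.inv(X @ Xh)

def data_aided_estimate(Y_pilot, X_pilot, Y_data, X_detected, reliability, threshold=0.9):
    """Re-estimate the channel from pilots plus reliably detected data symbols.

    reliability: assumed per-symbol-vector confidence in [0, 1], e.g. derived from
    detector soft outputs; the threshold is a placeholder for the RL selection policy.
    """
    keep = reliability >= threshold                      # selection policy (illustrative)
    X_aug = np.hstack([X_pilot, X_detected[:, keep]])    # augmented "pilot" block
    Y_aug = np.hstack([Y_pilot, Y_data[:, keep]])
    return ls_channel_estimate(Y_aug, X_aug)

# Toy usage: 2x2 MIMO, QPSK, 4 pilot and 16 data symbol vectors.
rng = np.random.default_rng(0)
Nt, Nr, Np, Nd = 2, 2, 4, 16
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
X = (2 * rng.integers(0, 2, (Nt, Np + Nd)) - 1 + 1j * (2 * rng.integers(0, 2, (Nt, Np + Nd)) - 1)) / np.sqrt(2)
N = 0.05 * (rng.standard_normal((Nr, Np + Nd)) + 1j * rng.standard_normal((Nr, Np + Nd)))
Y = H @ X + N

X_pilot, X_data = X[:, :Np], X[:, Np:]
Y_pilot, Y_data = Y[:, :Np], Y[:, Np:]
reliability = rng.uniform(0.5, 1.0, Nd)   # stand-in for detector confidence; detection assumed correct here
H_hat = data_aided_estimate(Y_pilot, X_pilot, Y_data, X_data, reliability)
print("relative estimation error:", np.linalg.norm(H - H_hat) / np.linalg.norm(H))
```

In this sketch, adding reliably detected symbols simply enlarges the effective training block; the paper's contribution lies in choosing which symbols to add via a closed-form RL policy with backup samples and data subblocks, which is not reproduced here.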

Original language: English
Article number: 4379
Journal: Sensors
Volume: 22
Issue number: 12
DOIs
State: Published - 1 Jun 2022

Keywords

  • channel estimation
  • Markov decision process
  • multiple-input multiple-output
  • reinforcement learning
