TY - JOUR
T1 - DRL-assisted task offloading in enhanced time-expanded graph (eTEG)-modeled aerial computing
AU - Mo, Jiang
AU - Zhao, Ke
AU - Peng, Limei
AU - Lee, Jiyeon
AU - Ma, Li
AU - Pu, Lixin
AU - Fan, Jipeng
N1 - Publisher Copyright:
© 2024
PY - 2024/12/1
Y1 - 2024/12/1
N2 - Space–air–ground integrated networks (SAGINs), categorized under aerial computing (AC), are emerging as a promising hierarchical platform designed to meet the seamless connectivity demands of the forthcoming 6G era. However, efficiently offloading ground tasks to space entities via SAGINs presents unprecedented challenges, primarily due to the mobility of these networks. In response, an enhanced time-expanded graph (eTEG) is proposed to model the dynamic distribution of heterogeneous SAGIN resources, including transmission bandwidth, computation, and storage, and to optimize task offloading and resource allocation on this basis. Specifically, the optimization problem is addressed using a deep reinforcement learning (DRL) approach that streamlines decision-making for task offloading and resource management, significantly reducing end-to-end delay and enhancing network performance. Simulation experiments demonstrate that the proposed DRL-based method reduces energy consumption, improves stability, and outperforms other methods by achieving lower delays while satisfying user requirements.
AB - Space–air–ground integrated networks (SAGINs), categorized under aerial computing (AC), are emerging as a promising hierarchical platform designed to meet the seamless connectivity demands of the forthcoming 6G era. However, efficiently offloading ground tasks to space entities via SAGINs presents unprecedented challenges, primarily due to the mobility of these networks. In response, an enhanced time-expanded graph (eTEG) is proposed to model the dynamic distribution of heterogeneous SAGIN resources, including transmission bandwidth, computation, and storage, and to optimize task offloading and resource allocation on this basis. Specifically, the optimization problem is addressed using a deep reinforcement learning (DRL) approach that streamlines decision-making for task offloading and resource management, significantly reducing end-to-end delay and enhancing network performance. Simulation experiments demonstrate that the proposed DRL-based method reduces energy consumption, improves stability, and outperforms other methods by achieving lower delays while satisfying user requirements.
KW - Data offloading
KW - Deep reinforcement learning
KW - Dynamic resource allocation
KW - Space–air–ground networks
KW - Time-expanded graph
UR - https://www.scopus.com/pages/publications/85204921367
U2 - 10.1016/j.comcom.2024.107954
DO - 10.1016/j.comcom.2024.107954
M3 - Article
AN - SCOPUS:85204921367
SN - 0140-3664
VL - 228
JO - Computer Communications
JF - Computer Communications
M1 - 107954
ER -