TY - JOUR
T1 - Deep reinforcement learning for PID parameter tuning in greenhouse HVAC system energy Optimization
T2 - A TRNSYS-Python cosimulation approach
AU - Adesanya, Misbaudeen Aderemi
AU - Obasekore, Hammed
AU - Rabiu, Anis
AU - Na, Wook Ho
AU - Ogunlowo, Qazeem Opeyemi
AU - Akpenpuun, Timothy Denen
AU - Kim, Min Hwi
AU - Kim, Hyeon Tae
AU - Kang, Bo Yeong
AU - Lee, Hyun Woo
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/10/15
Y1 - 2024/10/15
N2 - The control of indoor temperature in greenhouses is crucial as it directly impacts the crop's thermal comfort and the performance of heating, ventilation, and air-conditioning (HVAC) systems. Conventional feedback controllers, such as on/off controllers, can sometimes make the HVAC system work at full capacity when only half that capacity is needed. In contrast, the proportional-integral-derivative (PID) controller provides precise control based on its P, I, and D parameters. However, it lacks a formal design procedure for optimizing a specified objective function. Previous studies have utilized conventional PID tuning approaches to track room setpoint temperature in residential buildings, data centers, and office buildings, with limited research in greenhouse applications. To address this gap, this study proposes a flexible PID controller that employs a deep reinforcement learning (DRL) algorithm to optimize its parameters by tracking the setpoints and energy consumption of a greenhouse planted with tomatoes. This approach differs from the typical method of using the trained RL agent directly in HVAC control. Through a self-made TRNSYS-Python cosimulation framework, the DRL agent interacts directly and in real time with the greenhouse and its plants. Consequently, optimized PID parameters were established and tested in the simulated environment. The resulting performance, in terms of both energy consumption and the ability to maintain the crop's comfort temperature, was compared with simulated on/off and manually tuned PID controllers. Compared to the on/off baseline control, the proposed optimized PID parameters reduce energy use by 8.81% to 12.99%, while the PID parameters manually tuned with the Ziegler-Nichols method reduce energy use by 7.17%. Additionally, the proposed method had a deviation of 2.07% to 3.13% from the minimum comfortable temperature, while the manually tuned PID controller and the on/off controller had deviations of 7.27% and 3.27%, respectively. This study serves as a framework for improving the energy efficiency of greenhouse HVAC system operations.
AB - The control of indoor temperature in greenhouses is crucial as it directly impacts the crop's thermal comfort and the performance of heating, ventilation, and air-conditioning (HVAC) systems. Conventional feedback controllers, such as on/off controllers, can sometimes make the HVAC system work at full capacity when only half that capacity is needed. In contrast, the proportional-integral-derivative (PID) controller provides precise control based on its P, I, and D parameters. However, it lacks a formal design procedure for optimizing a specified objective function. Previous studies have utilized conventional PID tuning approaches to track room setpoint temperature in residential buildings, data centers, and office buildings, with limited research in greenhouse applications. To address this gap, this study proposes a flexible PID controller that employs a deep reinforcement learning (DRL) algorithm to optimize its parameters by tracking the setpoints and energy consumption of a greenhouse planted with tomatoes. This approach differs from the typical method of using the trained RL agent directly in HVAC control. Through a self-made TRNSYS-Python cosimulation framework, the DRL agent interacts directly and in real time with the greenhouse and its plants. Consequently, optimized PID parameters were established and tested in the simulated environment. The resulting performance, in terms of both energy consumption and the ability to maintain the crop's comfort temperature, was compared with simulated on/off and manually tuned PID controllers. Compared to the on/off baseline control, the proposed optimized PID parameters reduce energy use by 8.81% to 12.99%, while the PID parameters manually tuned with the Ziegler-Nichols method reduce energy use by 7.17%. Additionally, the proposed method had a deviation of 2.07% to 3.13% from the minimum comfortable temperature, while the manually tuned PID controller and the on/off controller had deviations of 7.27% and 3.27%, respectively. This study serves as a framework for improving the energy efficiency of greenhouse HVAC system operations.
KW - Cosimulation
KW - Deep reinforcement learning
KW - HVAC control
KW - Optimization
KW - Python
KW - TRNSYS
UR - http://www.scopus.com/inward/record.url?scp=85192070961&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2024.124126
DO - 10.1016/j.eswa.2024.124126
M3 - Article
AN - SCOPUS:85192070961
SN - 0957-4174
VL - 252
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 124126
ER -