TY - JOUR
T1 - Dynamic personalized thermal comfort model: Integrating temporal dynamics and environmental variability with individual preferences
AU - Abdulraheem, Abdulkabir
AU - Lee, Seungho
AU - Jung, Im Y.
N1 - Publisher Copyright:
© 2025 The Authors
PY - 2025/5/15
Y1 - 2025/5/15
N2 - Understanding human thermal perception is essential for creating comfortable and energy-efficient indoor environments. In this study, we introduce a dynamic deep learning framework, the Thermal Comfort Prediction Model using Long Short-Term Memory (TCPM-LSTM) networks, combined with Reinforcement Learning (RL) to model and predict personalized thermal comfort under varying environmental conditions. Our proposed Personalized Comfort Model with Reinforcement Learning (PCM-RL) captures temporal dynamics and individual differences in thermal sensation, comfort, and preference. PCM-RL achieves an approximately 13.6% improvement in average reward when RL is paired with a pre-trained LSTM (TCPM-LSTM) compared with RL alone. This integrated approach allows the RL agent to make more informed decisions, optimizing comfort based on real-time predictions. Moreover, our framework demonstrates more stable learning behavior, with reduced reward variability across episodes, making it a robust tool for personalized comfort management. This study represents a significant step toward intelligent, adaptive systems that optimize human-centric thermal comfort by providing actionable insights for managing indoor environments effectively.
KW - Environmental dynamics
KW - Personalized thermal comfort models
KW - Reinforcement learning
KW - TCPM-LSTM networks
KW - Thermal comfort
KW - Thermal perception
UR - https://www.scopus.com/pages/publications/85216891583
U2 - 10.1016/j.jobe.2025.111938
DO - 10.1016/j.jobe.2025.111938
M3 - Article
AN - SCOPUS:85216891583
SN - 2352-7102
VL - 102
JO - Journal of Building Engineering
JF - Journal of Building Engineering
M1 - 111938
ER -