TY - GEN
T1 - Dynamic power management for embedded ubiquitous systems
AU - Paul, Anand
AU - Chen, Bo Wei
AU - Jeong, J.
AU - Wang, Jhing Fa
PY - 2013
Y1 - 2013
N2 - In this work, an embedded system working model is designed with one server that receives requests from a requester through a queue and is controlled by a power manager (PM). A novel approach based on reinforcement learning is presented to predict the best policy among existing DPM policies and deterministic Markovian non-stationary policies (DMNSP). We apply reinforcement learning, a computational approach to understanding and automating goal-directed learning and decision-making, to DPM. Reinforcement learning uses a formal framework that defines the interaction between an agent and its environment in terms of states, actions, and rewards. The effectiveness of this approach is demonstrated by an event-driven simulator designed in Java with power-manageable embedded devices. Our experimental results show that the novel dynamic power management with time-out policies gives average power savings of 4% to 21%, and the novel dynamic power management with DMNSP gives average power savings of 10% to 28%, over previously proposed DPM policies.
AB - In this work, an embedded system working model is designed with one server that receives requests from a requester through a queue and is controlled by a power manager (PM). A novel approach based on reinforcement learning is presented to predict the best policy among existing DPM policies and deterministic Markovian non-stationary policies (DMNSP). We apply reinforcement learning, a computational approach to understanding and automating goal-directed learning and decision-making, to DPM. Reinforcement learning uses a formal framework that defines the interaction between an agent and its environment in terms of states, actions, and rewards. The effectiveness of this approach is demonstrated by an event-driven simulator designed in Java with power-manageable embedded devices. Our experimental results show that the novel dynamic power management with time-out policies gives average power savings of 4% to 21%, and the novel dynamic power management with DMNSP gives average power savings of 10% to 28%, over previously proposed DPM policies.
KW - Dynamic Power Management
KW - Embedded systems
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=84879855786&partnerID=8YFLogxK
U2 - 10.1109/ICOT.2013.6521159
DO - 10.1109/ICOT.2013.6521159
M3 - Conference contribution
AN - SCOPUS:84879855786
SN - 9781467359368
T3 - ICOT 2013 - 1st International Conference on Orange Technologies
SP - 67
EP - 71
BT - ICOT 2013 - 1st International Conference on Orange Technologies
T2 - 1st International Conference on Orange Technologies, ICOT 2013
Y2 - 12 March 2013 through 16 March 2013
ER -