TY - JOUR
T1 - Real-time power management for embedded M2M using intelligent learning methods
AU - Paul, Anand
N1 - Publisher Copyright:
© 2014 Copyright held by the Owner/Author. Publication rights licensed to ACM.
PY - 2014/10/6
Y1 - 2014/10/6
AB - In this work, an embedded system working model is designed with one server that receives requests from a requester through a service queue monitored by a Power Manager (PM). A novel approach based on reinforcement learning is presented to predict the best policy among existing DPM policies and deterministic Markovian nonstationary policies (DMNSP). We apply reinforcement learning, a computational approach to understanding and automating goal-directed learning, that supports different devices according to their DPM. Reinforcement learning uses a formal framework defining the interaction between agent and environment in terms of states, response actions, and reward points. The capability of this approach is demonstrated by an event-driven simulator designed in Java with a power-manageable machine-to-machine device. Our experimental results show that the proposed dynamic power management with a timeout policy yields average power savings of 4% to 21%, and the novel dynamic power management with DMNSP yields average power savings of 10% to 28% more than previously proposed DPM policies.
KW - Dynamic power management
KW - Intelligent reinforcement and indexing
UR - http://www.scopus.com/inward/record.url?scp=84908211309&partnerID=8YFLogxK
U2 - 10.1145/2632158
DO - 10.1145/2632158
M3 - Article
AN - SCOPUS:84908211309
SN - 1539-9087
VL - 13
JO - ACM Transactions on Embedded Computing Systems
JF - ACM Transactions on Embedded Computing Systems
M1 - 148
ER -