Abstract
In this work, an embedded system model is designed with one server that receives requests from a requester through a queue and is controlled by a power manager (PM). A novel approach based on reinforcement learning is presented to predict the best policy among existing dynamic power management (DPM) policies and deterministic Markovian non-stationary policies (DMNSP). Reinforcement learning, a computational approach to understanding and automating goal-directed learning and decision-making, is applied to DPM; it uses a formal framework defining the interaction between an agent and its environment in terms of states, actions, and rewards. The effectiveness of this approach is demonstrated by an event-driven simulator, designed in Java, with power-manageable embedded devices. Our experimental results show that the novel dynamic power management with timeout policies gives average power savings of 4% to 21%, and with DMNSP gives average power savings of 10% to 28%, over previously proposed DPM policies.
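The abstract's reinforcement-learning framework (an agent choosing power states from observed states, actions, and rewards) can be sketched with a minimal Q-learning power manager in Java, the language of the paper's simulator. Everything below is an illustrative assumption, not the paper's implementation: the two-bucket queue state, the SLEEP/ACTIVE action set, the reward numbers, and the class name `QLearningPM` are all hypothetical.

```java
import java.util.Random;

/**
 * Minimal Q-learning sketch of a power manager (PM) deciding
 * whether to keep a device ACTIVE or put it to SLEEP based on
 * queue occupancy. All states, actions, and reward values here
 * are illustrative assumptions, not the paper's actual policy.
 */
public class QLearningPM {
    static final int STATES = 2;   // 0 = queue empty, 1 = queue non-empty
    static final int ACTIONS = 2;  // 0 = SLEEP, 1 = ACTIVE
    final double[][] q = new double[STATES][ACTIONS];
    final double alpha = 0.1;      // learning rate
    final double gamma = 0.9;      // discount factor

    /** Standard one-step Q-learning update. */
    void update(int s, int a, double reward, int sNext) {
        double best = Math.max(q[sNext][0], q[sNext][1]);
        q[s][a] += alpha * (reward + gamma * best - q[s][a]);
    }

    /** Illustrative reward: penalize power while active and latency while sleeping. */
    static double reward(int s, int a) {
        if (a == 1) return (s == 1) ? -1.0 : -3.0; // ACTIVE: power cost, wasted when idle
        return (s == 1) ? -5.0 : -0.5;             // SLEEP: latency penalty if work is queued
    }

    public static void main(String[] args) {
        QLearningPM pm = new QLearningPM();
        Random rng = new Random(42);
        int s = 0;
        for (int t = 0; t < 10_000; t++) {
            int a = rng.nextInt(ACTIONS);    // uniform exploration
            int sNext = rng.nextInt(STATES); // request arrivals modeled as random noise
            pm.update(s, a, reward(s, a), sNext);
            s = sNext;
        }
        // Greedy policy learned from the Q-table.
        System.out.println("idle -> " + (pm.q[0][0] > pm.q[0][1] ? "SLEEP" : "ACTIVE"));
        System.out.println("busy -> " + (pm.q[1][1] > pm.q[1][0] ? "ACTIVE" : "SLEEP"));
    }
}
```

Because the rewards favor sleeping on an empty queue and staying active on a non-empty one, the greedy policy converges to `idle -> SLEEP` and `busy -> ACTIVE`; a real DPM agent would replace the random transitions with events from the request queue of the paper's event-driven simulator.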
| Original language | English |
| --- | --- |
| Pages (from-to) | 2046-2049 |
| Number of pages | 4 |
| Journal | Advanced Science Letters |
| Volume | 19 |
| Issue number | 7 |
| DOIs | |
| State | Published - Jul 2013 |
Keywords
- Dynamic power management
- Embedded devices
- Ubiquitous systems