Dynamic power management for ubiquitous network devices

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

In this work, an embedded-system working model is designed with one server that receives requests from a requester through a queue and is controlled by a power manager (PM). A novel approach based on reinforcement learning is presented to predict the best policy among existing DPM policies and deterministic Markovian non-stationary policies (DMNSP). We apply reinforcement learning, a computational approach to understanding and automating goal-directed learning and decision-making, to DPM. Reinforcement learning uses a formal framework that defines the interaction between agent and environment in terms of states, actions, and rewards. The effectiveness of this approach is demonstrated with an event-driven simulator, designed in Java, with power-manageable embedded devices. Our experimental results show that the novel dynamic power management with timeout policies yields average power savings of 4% to 21%, and the novel dynamic power management with DMNSP yields average power savings of 10% to 28%, over previously proposed DPM policies.
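The abstract's setup — a PM observing a request queue and choosing power states for a single service provider, learned through states, actions, and rewards — can be illustrated with a small tabular Q-learning sketch. This is not the paper's simulator: the queue capacity, arrival probability, power costs, and latency costs below are illustrative assumptions.

```java
import java.util.Random;

// Hypothetical sketch of an RL-based power manager (PM). The state is the
// request-queue length; the PM chooses to keep the device ACTIVE or put it
// to SLEEP. All numeric parameters are illustrative assumptions.
public class PowerManagerRL {
    static final int MAX_QUEUE = 5;          // states: queue length 0..5
    static final int SLEEP = 0, ACTIVE = 1;  // actions available to the PM
    static final double GAMMA = 0.9, EPSILON = 0.2, P_ARRIVAL = 0.4;

    final double[][] q = new double[MAX_QUEUE + 1][2]; // Q(state, action)
    final int[][] visits = new int[MAX_QUEUE + 1][2];  // for decaying step size
    final Random rng = new Random(7);

    // Epsilon-greedy action selection over the learned Q-table.
    int chooseAction(int s) {
        if (rng.nextDouble() < EPSILON) return rng.nextInt(2);
        return q[s][ACTIVE] > q[s][SLEEP] ? ACTIVE : SLEEP;
    }

    // Reward = -(power cost of the chosen state) - (latency cost of queued work).
    double reward(int queue, int action) {
        double power = (action == ACTIVE) ? 2.0 : 0.1;
        return -(power + 1.0 * queue);
    }

    // One simulated step: a request may arrive; an ACTIVE device serves one.
    int step(int queue, int action) {
        int next = queue + (rng.nextDouble() < P_ARRIVAL ? 1 : 0);
        if (action == ACTIVE && next > 0) next--;
        return Math.min(next, MAX_QUEUE);
    }

    void train(int episodes, int stepsPerEpisode) {
        for (int e = 0; e < episodes; e++) {
            int s = rng.nextInt(MAX_QUEUE + 1); // random start state
            for (int t = 0; t < stepsPerEpisode; t++) {
                int a = chooseAction(s);
                int next = step(s, a);
                double target = reward(s, a)
                        + GAMMA * Math.max(q[next][SLEEP], q[next][ACTIVE]);
                double alpha = 1.0 / ++visits[s][a]; // decaying learning rate
                q[s][a] += alpha * (target - q[s][a]);
                s = next;
            }
        }
    }

    public static void main(String[] args) {
        PowerManagerRL pm = new PowerManagerRL();
        pm.train(5000, 100);
        // The learned policy should sleep on an empty queue and wake under load.
        System.out.println("empty queue -> "
                + (pm.q[0][SLEEP] > pm.q[0][ACTIVE] ? "SLEEP" : "ACTIVE"));
        System.out.println("full queue  -> "
                + (pm.q[MAX_QUEUE][ACTIVE] > pm.q[MAX_QUEUE][SLEEP] ? "ACTIVE" : "SLEEP"));
    }
}
```

The sleep/active trade-off here mirrors the abstract's timeout setting: sleeping saves power but lets requests accumulate in the queue, and the learned Q-values balance the two costs per queue length.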

Original language: English
Pages (from-to): 2046-2049
Number of pages: 4
Journal: Advanced Science Letters
Volume: 19
Issue number: 7
DOIs
State: Published - Jul 2013

Keywords

  • Dynamic power management
  • Embedded devices
  • Ubiquitous systems

