On energy-aware allocation and execution for batch and interactive MapReduce

Yijun Ying, Robert Birke, Cheng Wang, Lydia Y. Chen, Gautam Natarajan

Research output: Contribution to journal › Article › peer-review


The energy-performance optimization of datacenters becomes ever more challenging due to heterogeneous workloads featuring different performance constraints. In addition to conventional web services, MapReduce presents another important workload class, whose performance highly depends on data availability/locality and which shows different degrees of delay sensitivity, e.g., batch vs. interactive MapReduce. However, current energy optimization solutions are mainly designed for a subset of these workloads and their key features. Here, we present an energy minimization framework, in particular a concave minimization problem, that specifically considers the time variability, data locality, and delay sensitivity of web, batch-MapReduce, and interactive-MapReduce workloads. We aim to maximize the usage of MapReduce servers by using their spare capacity to run non-MapReduce workloads, while controlling workload delays through the execution of MapReduce tasks, in particular batch ones. We develop an optimal algorithm with complexity O(T²) in the case of perfect workload information, T being the length of the time horizon in number of control windows, and derive the structure of the optimal policy for the case of uncertain workload information. Using extensive simulation results, we show that the proposed methodology can efficiently minimize the datacenter energy cost while fulfilling the delay constraints of workloads.
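The abstract does not detail the O(T²) algorithm, but the problem shape it describes, deferring delay-tolerant batch work across T control windows under a concave energy cost, admits a classic O(T²) dynamic program over window indices. The sketch below is a hypothetical illustration of that idea, not the paper's algorithm: `batch_load`, `energy`, and `max_delay` are illustrative stand-ins, and delay constraints are modeled crudely as a cap on how many windows work may be deferred.

```python
import math

def min_energy_schedule(batch_load, energy, max_delay):
    """Hypothetical O(T^2) DP sketch (not the paper's algorithm).

    batch_load[t]: batch work arriving in control window t.
    energy(x): concave per-window energy cost of running x units
               (concavity with energy(0) = 0 makes batching cheap).
    max_delay: a segment may span at most this many windows, so
               work waits at most max_delay - 1 windows (a crude
               stand-in for the paper's delay constraints).

    dp[j] = cheapest cost to serve windows 0..j-1, where each
    segment (i, j] defers all its batch work to its last window --
    an extreme-point schedule, natural under concave costs.
    """
    T = len(batch_load)
    prefix = [0.0] * (T + 1)  # prefix sums for O(1) segment loads
    for t in range(T):
        prefix[t + 1] = prefix[t] + batch_load[t]

    dp = [math.inf] * (T + 1)
    dp[0] = 0.0
    choice = [0] * (T + 1)
    for j in range(1, T + 1):
        for i in range(max(0, j - max_delay), j):
            seg = prefix[j] - prefix[i]  # work deferred to window j-1
            cost = dp[i] + energy(seg) + (j - i - 1) * energy(0.0)
            if cost < dp[j]:
                dp[j], choice[j] = cost, i

    # Recover the segment boundaries of the optimal partition.
    cuts, j = [], T
    while j > 0:
        cuts.append((choice[j], j))
        j = choice[j]
    return dp[T], cuts[::-1]
```

For example, with `energy = math.sqrt` (concave) and loads `[4, 5, 0, 9]` over four windows with `max_delay=2`, merging the first two windows' work into one run costs `sqrt(9) = 3` instead of `2 + sqrt(5)`, and the DP finds a total cost of 6.0. The double loop over `(i, j)` pairs is what gives the O(T²) complexity mentioned in the abstract.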

Original language: English (US)
Pages (from-to): 22-30
Number of pages: 9
Journal: Performance Evaluation Review
Issue number: 4
State: Published - Mar 2015
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications
