TY - JOUR
T1 - Energy-Efficient Dynamic Virtual Machine Management in Data Centers
AU - Han, Zhenhua
AU - Tan, Haisheng
AU - Wang, Rui
AU - Chen, Guihai
AU - Li, Yupeng
AU - Lau, Francis Chi-Moon
N1 - This work was supported in part by the NSFC under Grant 61772489 and Grant 61502201, in part by the Distinguished Young Scientists under Grant 61625205, in part by the Hong Kong RGC CRF Grant under Grant C7036-15G, in part by the China 973 Project under Grant 2014CB340303, in part by the Key Research Program of Frontier Sciences (CAS) under Grant QYZDY-SSW-JSC002, in part by the Shenzhen Science and Technology Innovation Committee under Grant JCYJ20160331115457945, in part by the NSF under Grant ECCS1247944, and in part by NSF under Grant CNS 1526638.
PY - 2019/2
Y1 - 2019/2
N2 - Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize resource utilization because it cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristic and lack theoretical performance guarantees. In this paper, we formulate dynamic VM management as a large-scale Markov decision process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, we show that MadVM can be implemented in a distributed system with a migration cost at most twice the optimum. Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage, and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms the baselines.
AB - Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize resource utilization because it cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristic and lack theoretical performance guarantees. In this paper, we formulate dynamic VM management as a large-scale Markov decision process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, we show that MadVM can be implemented in a distributed system with a migration cost at most twice the optimum. Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage, and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms the baselines.
KW - Cloud computing
KW - resource management
KW - energy efficiency
KW - Markov decision process
UR - https://www.scopus.com/pages/publications/85061708369
U2 - 10.1109/TNET.2019.2891787
DO - 10.1109/TNET.2019.2891787
M3 - Journal article
SN - 1063-6692
VL - 27
SP - 344
EP - 360
JO - IEEE/ACM Transactions on Networking
JF - IEEE/ACM Transactions on Networking
IS - 1
ER -