Abstract
With the increasing deployment of Graphics Processing Units (GPUs) in supercomputers and data centers, their substantial electricity costs raise new environmental and economic concerns. Although Dynamic Voltage and Frequency Scaling (DVFS) techniques have been successfully applied to traditional CPUs to conserve energy, the impact of GPU DVFS on application performance and power consumption is not yet fully understood, mainly due to the complexity of the GPU memory system. This paper proposes a fast prediction model based on Support Vector Regression (SVR) that estimates the average runtime power of a given GPU kernel from a set of profiling parameters under different GPU core and memory frequencies. Our experimental data set includes 931 samples obtained from 19 GPU kernels running on a real GPU platform, with core and memory frequencies ranging from 400 MHz to 1000 MHz. We evaluate the accuracy of the SVR-based prediction model by ten-fold cross-validation and achieve greater accuracy than prior models, with a Mean Square Error (MSE) of 0.797 Watt and a Mean Absolute Percentage Error (MAPE) of 3.08% on average. Combined with an existing performance prediction model, we can find optimal GPU frequency settings that save an average of 13.2% of energy across these GPU kernels with no more than 10% performance penalty compared to the default setting.
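To illustrate the modeling pipeline the abstract describes, the sketch below fits a scikit-learn SVR that maps per-sample features (profiling parameters plus core/memory frequencies) to average power, and scores it with ten-fold cross-validation using MSE and MAPE. This is a minimal sketch, not the authors' implementation: the feature set, SVR kernel, hyperparameters, and the synthetic stand-in data are all assumptions, since the abstract does not specify them.

```python
# Illustrative sketch (not the paper's code): SVR-based power prediction
# evaluated with ten-fold cross-validation, as outlined in the abstract.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold, cross_val_predict

# Hypothetical data layout: one row per (kernel, core freq, memory freq)
# sample; columns would hold profiler counters plus the two frequencies.
rng = np.random.default_rng(0)
X = rng.random((931, 10))                        # 931 samples, as in the paper's data set
y = 50.0 + 100.0 * X[:, 0] + rng.normal(0, 1, 931)  # synthetic "average power" target

# Standardize features before the RBF-kernel SVR; hyperparameters are guesses.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))

# Ten-fold cross-validation: each sample is predicted by a model trained on
# the other nine folds, then MSE and MAPE are computed over all predictions.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=cv)

mse = np.mean((y - y_pred) ** 2)
mape = np.mean(np.abs((y - y_pred) / y)) * 100.0
print(f"MSE: {mse:.3f}  MAPE: {mape:.2f}%")
```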
| Original language | English |
|---|---|
| Pages (from-to) | 73-78 |
| Number of pages | 6 |
| Journal | Performance Evaluation Review |
| Volume | 45 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Sept 2017 |
| Event | Workshop on MAthematical Performance Modeling and Analysis, MAMA 2017, 2017 Greenmetrics Workshop and Workshop on Critical Infrastructure Network Security, CINS 2017 - Urbana-Champaign, United States. Duration: 1 Jun 2017 → … |
Scopus Subject Areas
- Software
- Hardware and Architecture
- Computer Networks and Communications