Keyphrases
Distributed Deep Learning (35%)
Deep Neural Network (31%)
Communication Efficiency (23%)
Multi-GPU (22%)
Deep Learning Model (22%)
Dynamic Voltage Scaling (21%)
Dynamic Voltage Frequency Scaling (21%)
Energy Consumption (21%)
Merged-gradient (21%)
Energy Efficient (19%)
Stochastic Gradient Descent (18%)
Energy Efficiency (16%)
GPGPU (15%)
Deep Learning (15%)
SPGD Algorithm (14%)
Synchronous SGD (14%)
Inference Service (14%)
Gradient Sparsification (14%)
Matrix Multiplication (14%)
Transformer (14%)
Performance Estimation (14%)
Frequency Scaling (14%)
Sparsification (13%)
System Scalability (13%)
Distributed Training (12%)
Wait-free (11%)
Backpropagation (11%)
Communication Time (10%)
Data Parallelism (10%)
Energy Saving (10%)
Large Animal Model (10%)
Performance Efficiency (10%)
Frequency Sets (10%)
Top-k Sparsification (9%)
Stereo Matching (9%)
Communication Overhead (9%)
Machine Learning Models (9%)
Computing Resources (8%)
Transformer Model (8%)
Transformer-based Deep Learning (8%)
NVIDIA (7%)
Communication Complexity (7%)
Decentralized Learning (7%)
Scheduling with Deadlines (7%)
Non-preemptive Tasks (7%)
AI Accelerator (7%)
Peer Selection (7%)
Deep Learning Architectures (7%)
AI Training (7%)
Model Generalization Capability (7%)
Low Bandwidth Networks (7%)
Communication Optimization (7%)
Distributed SGD (7%)
Efficient Training (7%)
Simultaneous Communication (7%)
Disparity Estimation (7%)
Tensor Fusion (7%)
Efficient Communication (7%)
Energy-efficient Inference (7%)
Online Scheduling (7%)
Unit Performance (7%)
Half-precision (7%)
Sparse Formats (7%)
As-a-service (7%)
Scene Flow (7%)
Scheduling Scheme (7%)
Dense Matrix (7%)
Neural Architecture Search (7%)
Quantitative Survey (7%)
Convergence Analysis (7%)
Precision Matrix (7%)
Prediction Accuracy (7%)
Sparse Matrices (7%)
Deadline Constraint (7%)
System Bottleneck (7%)
Network Architecture (7%)
Data Communication (7%)
Tensor Cores (7%)
NVIDIA GPU (7%)
Energy-aware (7%)
Energy Conservation (7%)
Heterogeneous Cluster (7%)
Potential Systems (7%)
Graphics Processing Unit (7%)
Global Memory (7%)
Training Data (7%)
Preemptive Scheduling (7%)
Communication Tasks (6%)
Optimization Problem (6%)
Popular (6%)
Tesla (5%)
P100 (5%)
Scheduling Algorithm (5%)
Optimization Techniques (5%)
Allocation Scheduling (5%)
Sparse Gradient (5%)
Resource Allocation (5%)
Power Model (5%)
GPU Power (5%)
Deep Neural Network Training (5%)
Natural Language Processing (5%)
Worker Number (5%)

Computer Science
Graphics Processing Unit (100%)
Deep Learning Method (51%)
Deep Neural Network (43%)
Gradient Descent (35%)
Deep Learning Model (24%)
Energy Efficient (23%)
Experimental Result (18%)
Energy Efficiency (18%)
Dynamic Voltage and Frequency Scaling (17%)
Energy Consumption (17%)
Estimation Performance (14%)
Benchmarking (14%)
Optimization Problem (12%)
Machine Learning (11%)
Learning System (11%)
Communication Overhead (10%)
Artificial Intelligence (10%)
Communication Complexity (9%)
Stereo Matching (9%)
Memory Frequency (9%)
Task Scheduling (8%)
Computing Resource (8%)
Transformer-Based Deep Learning (8%)
Neural Network Model (8%)
Scheduling Scheme (7%)
Computer Hardware (7%)
Preemptive Task (7%)
Inference Process (7%)
Training Algorithm (7%)
Mathematical Convergence (7%)
Computer Cluster (7%)
Iteration Time (7%)
Training Data (7%)
Pipelining (7%)
Matrix Multiply (7%)
Matrix Multiplication (7%)
Computational Power (7%)
Communication Network (7%)
Neural Architecture Search (7%)
Network Architecture (7%)
Network Bandwidth (7%)
Data Communication (7%)
Precision Matrix (7%)
Heterogeneous Cluster (7%)
Convergence Performance (6%)
Scheduling Algorithm (6%)
Natural Language Processing (5%)
Optimization Technique (5%)
Distributed System (5%)