Keyphrases
As-a-service: 33%
Batch Inference: 33%
Cloud Service Provider: 16%
Deep Learning Model: 16%
Energy Consumption: 16%
Energy Consumption Model: 16%
Energy Efficiency: 66%
Energy Efficient: 100%
Energy Scheduling: 16%
Essential Performance: 16%
Full Scope: 16%
High Complexity: 16%
Inference Performance: 33%
Inference Service: 100%
Large Language Model: 16%
Latency: 16%
Latency Constraint: 16%
Load Balancing: 16%
Model Complexity: 16%
Model Size: 16%
Natural Language Processing: 16%
NVIDIA GPU: 16%
Online Inference: 16%
Online Scheduling: 100%
Payload Capacity: 16%
Performance Efficiency: 16%
Performance Metrics: 16%
Processing Task: 16%
Resource-constrained: 16%
Robust Scheduling: 16%
Scheduling Policy: 16%
Scheduling Scheme: 66%
Service Level Agreement: 16%
Service Provider: 16%
Throughput Efficiency: 16%
Transformer: 100%
Transformer Model: 16%
Transformer-based Deep Learning: 16%
V100: 16%
Computer Science
Cloud Service Provider: 14%
Deep Learning Model: 14%
Energy Consumption: 14%
Energy Consumption Model: 14%
Energy Efficiency: 57%
Energy Efficient: 100%
Graphics Processing Unit: 100%
Natural Language Processing: 14%
Performance Metric: 14%
Processing Task: 14%
Scheduling Policy: 14%
Scheduling Scheme: 57%
Service Application: 14%
Service Provider: 14%
Service-Level Agreement: 14%
Transformer-Based Deep Learning: 14%
Workload Balancing: 14%