MLPerf Training Benchmark [paper] (paper: "MLPerf Training Benchmark")
MLPerf is a benchmark suite used to evaluate the training and inference performance of on-premises and cloud platforms. It is intended as an independent, objective performance yardstick for machine learning software frameworks, hardware platforms, and cloud platforms. A consortium of researchers and developers from more than 30 organizations in the AI community developed these benchmarks and continues to evolve them. The goal of MLPerf is to give developers a way to evaluate hardware architectures and the wide range of rapidly advancing machine learning frameworks.
Benchmark Categories
: MLPerf Training benchmarks cover a variety of machine learning tasks that represent common and critical workloads in the field. These tasks are selected to provide a comprehensive evaluation of system performance across different domains, including the following (a minimal timing sketch follows the list):
Image Classification
: Benchmarks like ResNet-50 measure how fast a system can train models on large image datasets (e.g., ImageNet).
Object Detection
: Models like SSD (Single Shot MultiBox Detector) and Mask R-CNN assess system performance on object detection tasks.
Natural Language Processing (NLP)
: Benchmarks include tasks like BERT (Bidirectional Encoder Representations from Transformers) for language understanding and translation.
Reinforcement Learning
: Tasks such as MiniGo, based on the game of Go, evaluate reinforcement learning performance.
Recommendation Systems
: The Deep Learning Recommendation Model (DLRM) benchmark measures performance on recommendation tasks common in industry applications.
Speech Recognition
: The RNN-T (Recurrent Neural Network Transducer) benchmark tests systems on speech-to-text tasks.
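MLPerf Training scores each of these workloads by the wall-clock time needed to train the model from scratch to a fixed target quality (e.g., a validation accuracy threshold), not by raw throughput. Below is a minimal, framework-agnostic sketch of that time-to-quality measurement; `train_one_epoch`, `evaluate`, and the target value are placeholders I introduce for illustration, not MLPerf reference code.

```python
import time
from typing import Callable

def time_to_train(
    train_one_epoch: Callable[[], None],   # placeholder: runs one epoch of training
    evaluate: Callable[[], float],         # placeholder: returns current model quality
    target_quality: float,                 # e.g., a top-1 accuracy threshold
    max_epochs: int = 100,
) -> float:
    """Return wall-clock seconds needed to first reach the target quality.

    Mirrors the MLPerf Training time-to-quality idea: the clock runs from
    the start of training until evaluation meets or exceeds the target,
    and the run fails if the target is never reached.
    """
    start = time.perf_counter()
    for _ in range(max_epochs):
        train_one_epoch()
        if evaluate() >= target_quality:
            return time.perf_counter() - start
    raise RuntimeError("target quality not reached within max_epochs")
```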
MLPerf Submission Categories:
: Submissions fall into two divisions: the Closed division fixes the model, preprocessing, and training procedure so that hardware and software stacks can be compared head-to-head, while the Open division allows changes to the model and training method to encourage innovation.
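In either division, a reported result aggregates several time-to-train runs, since random seeds cause run-to-run variance in epochs-to-target. As I understand the MLPerf Training rules, the fastest and slowest runs are dropped and the remainder averaged; a small sketch of that aggregation, with the run count and times as assumed example values:

```python
def benchmark_result(run_times_seconds: list[float]) -> float:
    """Aggregate multiple time-to-train runs into one benchmark result.

    Drops the single fastest and single slowest run, then averages the
    remainder (requires at least three runs).
    """
    if len(run_times_seconds) < 3:
        raise ValueError("need at least three runs to drop min and max")
    trimmed = sorted(run_times_seconds)[1:-1]
    return sum(trimmed) / len(trimmed)

# Example: five hypothetical time-to-train measurements in seconds
print(benchmark_result([5120.0, 4980.5, 5230.1, 5050.2, 5005.7]))
```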
Links: MLPerf Training v2.1 results | code | NVIDIA MLPerf Benchmarks | Demystifying the MLPerf Training Benchmark Suite 🌸