From 8f9b5b3ff5184225684a86140db157ed507750d8 Mon Sep 17 00:00:00 2001
From: Vijay Janapa Reddi
Date: Thu, 31 Oct 2024 15:48:27 -0400
Subject: [PATCH] Fixes recommended by Bravo

---
 contents/core/benchmarking/benchmarking.qmd | 8 ++++----
 contents/core/ops/ops.qmd                   | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd
index 00ee74ed3..e0fe1ddb8 100644
--- a/contents/core/benchmarking/benchmarking.qmd
+++ b/contents/core/benchmarking/benchmarking.qmd
@@ -20,7 +20,7 @@ This chapter will provide an overview of popular ML benchmarks, best practices f
 
 * Understand the purpose and goals of benchmarking AI systems, including performance assessment, resource evaluation, validation, and more.
 
-* Learn about key model benchmarks, metrics, and trends, including accuracy, fairness, complexity, perforamnce, and energy efficiency.
+* Learn about key model benchmarks, metrics, and trends, including accuracy, fairness, complexity, performance, and energy efficiency.
 
 * Become familiar with the key components of an AI benchmark, including datasets, tasks, metrics, baselines, reproducibility rules, and more.
 
@@ -323,7 +323,7 @@ It is important to carefully consider these factors when designing benchmarks to
 Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for training machine learning systems.
 
-* [MLPerf Training Benchmark](https://github.com/mlcommons/training)*
+*[MLPerf Training Benchmark](https://github.com/mlcommons/training)*
 
 MLPerf is a suite of benchmarks designed to measure the performance of machine learning hardware, software, and services. The MLPerf Training benchmark [@mattson2020mlperf] focuses on the time it takes to train models to a target quality metric. It includes diverse workloads, such as image classification, object detection, translation, and reinforcement learning.
 
 @fig-perf-trend highlights the performance improvements in progressive versions of MLPerf Training benchmarks, which have all outpaced Moore's Law. Using standardized benchmarking trends enables us to rigorously showcase the rapid evolution of ML computing.
@@ -335,7 +335,7 @@ Metrics:
 * Throughput (examples per second)
 * Resource utilization (CPU, GPU, memory, disk I/O)
 
-* [DAWNBench](https://dawn.cs.stanford.edu/benchmark/)*
+*[DAWNBench](https://dawn.cs.stanford.edu/benchmark/)*
 
 DAWNBench [@coleman2017dawnbench] is a benchmark suite focusing on end-to-end deep learning training time and inference performance. It includes common tasks such as image classification and question answering.
 
@@ -345,7 +345,7 @@ Metrics:
 * Inference latency
 * Cost (in terms of cloud computing and storage resources)
 
-* [Fathom](https://github.com/rdadolf/fathom)*
+*[Fathom](https://github.com/rdadolf/fathom)*
 
 Fathom [@adolf2016fathom] is a benchmark from Harvard University that evaluates the performance of deep learning models using a diverse set of workloads. These include common tasks such as image classification, speech recognition, and language modeling.
 
diff --git a/contents/core/ops/ops.qmd b/contents/core/ops/ops.qmd
index 8fe558761..c4d435168 100644
--- a/contents/core/ops/ops.qmd
+++ b/contents/core/ops/ops.qmd
@@ -494,7 +494,7 @@ Skilled project managers enable MLOps teams to work synergistically to rapidly d
 
 ## Embedded System Challenges
 
-Building on our discussion of [On-device Learning](../optimizations/ondevice_learning.qmd) in the previous chapter, we now turn our attention to the broader context of embedded systems in MLOps. The unique constraints and requirements of embedded environments significantly impact the implementation of machine learning models and operations. To set the stage for the specific challenges that emerge with embedded MLOps, it is important to first review the general challenges associated with embedded systems. This overview will provide a foundation for understanding how these constraints intersect with and shape the practices of MLOps in resource-limited, edge computing scenarios.
+Building on our discussion of [On-device Learning](../ondevice_learning/ondevice_learning.qmd) in the previous chapter, we now turn our attention to the broader context of embedded systems in MLOps. The unique constraints and requirements of embedded environments significantly impact the implementation of machine learning models and operations. To set the stage for the specific challenges that emerge with embedded MLOps, it is important to first review the general challenges associated with embedded systems. This overview will provide a foundation for understanding how these constraints intersect with and shape the practices of MLOps in resource-limited, edge computing scenarios.
 
 ### Limited Compute Resources