Commit 8f9b5b3

Fixes recommended by Bravo

profvjreddi committed Oct 31, 2024
1 parent 3a91763 commit 8f9b5b3

Showing 2 changed files with 5 additions and 5 deletions.
8 changes: 4 additions & 4 deletions contents/core/benchmarking/benchmarking.qmd
@@ -20,7 +20,7 @@ This chapter will provide an overview of popular ML benchmarks, best practices f

* Understand the purpose and goals of benchmarking AI systems, including performance assessment, resource evaluation, validation, and more.

- * Learn about key model benchmarks, metrics, and trends, including accuracy, fairness, complexity, perforamnce, and energy efficiency.
+ * Learn about key model benchmarks, metrics, and trends, including accuracy, fairness, complexity, performance, and energy efficiency.

* Become familiar with the key components of an AI benchmark, including datasets, tasks, metrics, baselines, reproducibility rules, and more.

@@ -323,7 +323,7 @@ It is important to carefully consider these factors when designing benchmarks to

Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for training machine learning systems.

- * [MLPerf Training Benchmark](https://github.com/mlcommons/training)*
+ *[MLPerf Training Benchmark](https://github.com/mlcommons/training)*

MLPerf is a suite of benchmarks designed to measure the performance of machine learning hardware, software, and services. The MLPerf Training benchmark [@mattson2020mlperf] focuses on the time it takes to train models to a target quality metric. It includes diverse workloads, such as image classification, object detection, translation, and reinforcement learning. @fig-perf-trend highlights the performance improvements in progressive versions of MLPerf Training benchmarks, which have all outpaced Moore's Law. Using standardized benchmarking trends enables us to rigorously showcase the rapid evolution of ML computing.

@@ -335,7 +335,7 @@ Metrics:
* Throughput (examples per second)
* Resource utilization (CPU, GPU, memory, disk I/O)

- * [DAWNBench](https://dawn.cs.stanford.edu/benchmark/)*
+ *[DAWNBench](https://dawn.cs.stanford.edu/benchmark/)*

DAWNBench [@coleman2017dawnbench] is a benchmark suite focusing on end-to-end deep learning training time and inference performance. It includes common tasks such as image classification and question answering.

@@ -345,7 +345,7 @@ Metrics:
* Inference latency
* Cost (in terms of cloud computing and storage resources)

- * [Fathom](https://github.com/rdadolf/fathom)*
+ *[Fathom](https://github.com/rdadolf/fathom)*

Fathom [@adolf2016fathom] is a benchmark from Harvard University that evaluates the performance of deep learning models using a diverse set of workloads. These include common tasks such as image classification, speech recognition, and language modeling.

2 changes: 1 addition & 1 deletion contents/core/ops/ops.qmd
@@ -494,7 +494,7 @@ Skilled project managers enable MLOps teams to work synergistically to rapidly d

## Embedded System Challenges

- Building on our discussion of [On-device Learning](../optimizations/ondevice_learning.qmd) in the previous chapter, we now turn our attention to the broader context of embedded systems in MLOps. The unique constraints and requirements of embedded environments significantly impact the implementation of machine learning models and operations. To set the stage for the specific challenges that emerge with embedded MLOps, it is important to first review the general challenges associated with embedded systems. This overview will provide a foundation for understanding how these constraints intersect with and shape the practices of MLOps in resource-limited, edge computing scenarios.
+ Building on our discussion of [On-device Learning](../ondevice_learning/ondevice_learning.qmd) in the previous chapter, we now turn our attention to the broader context of embedded systems in MLOps. The unique constraints and requirements of embedded environments significantly impact the implementation of machine learning models and operations. To set the stage for the specific challenges that emerge with embedded MLOps, it is important to first review the general challenges associated with embedded systems. This overview will provide a foundation for understanding how these constraints intersect with and shape the practices of MLOps in resource-limited, edge computing scenarios.

### Limited Compute Resources

