
Chapter-12
Chapter 12 has been edited.
hzeljko committed Feb 4, 2025
1 parent f6b467d commit 54ce94f
Showing 6 changed files with 17 additions and 18 deletions.
22 changes: 11 additions & 11 deletions Machine-Learning-Systems.log
@@ -1,4 +1,4 @@
-This is pdfTeX, Version 3.141592653-2.6-1.40.26 (MiKTeX 24.4) (preloaded format=pdflatex 2024.12.26) 3 FEB 2025 17:39
+This is pdfTeX, Version 3.141592653-2.6-1.40.26 (MiKTeX 24.4) (preloaded format=pdflatex 2024.12.26) 4 FEB 2025 11:08
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
@@ -554,7 +554,7 @@ Package caption Info: sidecap package is loaded.

Class scrbook Warning: Usage of package `fancyhdr' together
(scrbook) with a KOMA-Script class is not recommended.
-(scrbook) I'd suggest to use
+(scrbook) I'd suggest to use
(scrbook) package `scrlayer' or `scrlayer-scrpage', because
(scrbook) they support KOMA-Script classes.
(scrbook) With `fancyhdr' several features of class `scrbook'
@@ -577,7 +577,7 @@ Package: fancyhdr 2024/12/09 v4.5 Extensive control of page headers and footers
\f@nch@O@olf=\skip88
\f@nch@O@orf=\skip89
) (C:\Users\Zeljko\AppData\Local\Programs\MiKTeX\tex/latex/fontspec\fontspec.sty (C:\Users\Zeljko\AppData\Local\Programs\MiKTeX\tex/latex/l3packages/xparse\xparse.sty (C:\Users\Zeljko\AppData\Local\Programs\MiKTeX\tex/latex/l3kernel\expl3.sty
-Package: expl3 2024-11-02 L3 programming layer (loader)
+Package: expl3 2024-11-02 L3 programming layer (loader)
(C:\Users\Zeljko\AppData\Local\Programs\MiKTeX\tex/latex/l3backend\l3backend-pdftex.def
File: l3backend-pdftex.def 2024-05-08 L3 backend support: PDF output (pdfTeX)
\l__color_backend_stack_int=\count310
@@ -590,16 +590,16 @@ Package: fontspec 2024/05/11 v2.9e Font selection for XeLaTeX and LuaLaTeX

! Fatal Package fontspec Error: The fontspec package requires either XeTeX or
(fontspec) LuaTeX.
-(fontspec)
+(fontspec)
(fontspec) You must change your typesetting engine to,
(fontspec) e.g., "xelatex" or "lualatex" instead of
(fontspec) "latex" or "pdflatex".

Type <return> to continue.
...

...
l.101 \msg_fatal:nn {fontspec} {cannot-use-pdftex}


LaTeX does not know anything more about this error, sorry.

@@ -611,13 +611,13 @@ This is a fatal error: LaTeX will abort.


! Emergency stop.
<read *>

<read *>
l.101 \msg_fatal:nn {fontspec} {cannot-use-pdftex}

*** (cannot \read from terminal in nonstop modes)


Here is how much of TeX's memory you used:
13525 strings out of 473650
252458 string characters out of 5717452
8 changes: 3 additions & 5 deletions contents/core/benchmarking/benchmarking.qmd
@@ -185,7 +185,7 @@ The complexity and diversity of these tasks play a crucial role in the benchmark

As machine learning advances, benchmark tasks must evolve to keep pace with emerging capabilities and challenges, ensuring they remain relevant and informative.

-### Evaluation Metrics
+### Evaluation Metrics>

Evaluation metrics are quantitative measures used to assess the performance of machine learning models on specific tasks. These metrics provide objective standards for comparing different models and approaches, enabling researchers and practitioners to gauge the effectiveness of their solutions. The selection of appropriate evaluation metrics is a critical aspect of benchmark design. Metrics must align closely with the task objectives and provide meaningful insights into model performance. For classification tasks, common metrics include accuracy, precision, recall, and F1 score [@sokolova2009systematic]. These metrics offer different perspectives on a model's ability to correctly identify and categorize data points.
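As a hedged illustration (not part of the chapter's diff), the four classification metrics named above can be computed directly from binary labels; the helper name and the sample labels below are invented for this sketch:

```python
# Minimal sketch: accuracy, precision, recall, and F1 for a binary task,
# computed from scratch so the definitions are concrete.
def classification_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, f1) for 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Toy data: 6 points, 2 true positives found, 1 missed, 1 false alarm.
acc, prec, rec, f1 = classification_metrics([1, 0, 1, 1, 0, 0],
                                            [1, 0, 0, 1, 1, 0])
```

Each metric answers a different question: precision asks how many predicted positives were correct, while recall asks how many true positives were found, which is why a benchmark's choice among them shapes what "good performance" means.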

@@ -296,6 +296,7 @@ Micro-benchmarks also examine activation functions and neural network layers in
:::{#exr-cuda .callout-caution collapse="true" title="Benchmarking Tensor Operations"}

Ever wonder how your image filters get so fast? Special libraries like cuDNN supercharge those calculations on certain hardware. In this Colab, we'll use cuDNN with PyTorch to speed up image filtering. Think of it as a tiny benchmark, showing how the right software can unlock your GPU's power!
+\vspace{1pt}

[![](https://colab.research.google.com/assets/colab-badge.png)](https://colab.research.google.com/github/RyanHartzell/cudnn-image-filtering/blob/master/notebooks/CuDNN%20Image%20Filtering%20Tutorial%20Using%20PyTorch.ipynb#scrollTo=1sWeXdYsATrr)

@@ -432,19 +433,16 @@ Training benchmarks, such as MLPerf Training, define specific accuracy targets f
#### Training Time and Throughput

One of the fundamental metrics for evaluating training efficiency is the time required to reach a predefined accuracy threshold. Training time ($T_{\text{train}}$) measures how long a model takes to converge to an acceptable performance level, reflecting the overall computational efficiency of the system. It is formally defined as:

$$
T_{\text{train}} = \arg\min_{t} \{ \text{accuracy}(t) \geq \text{target accuracy} \}
$$

This metric ensures that benchmarking focuses on how quickly and effectively a system can achieve meaningful results.
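As a minimal sketch of this definition (the logged accuracy curve and all numbers below are hypothetical), $T_{\text{train}}$ can be read off a training log as the earliest timestamp whose accuracy meets the target:

```python
# Sketch of time-to-accuracy: the argmin over t of {accuracy(t) >= target},
# applied to a (hypothetical) list of logged checkpoints.
def time_to_accuracy(curve, target):
    """curve: (elapsed_seconds, accuracy) pairs in time order.
    Returns the earliest elapsed time with accuracy >= target, or None."""
    for t, acc in curve:
        if acc >= target:
            return t
    return None  # target never reached: the run produces no valid result

# Hypothetical checkpoints logged every 60 s for a made-up target of 75.9%.
curve = [(60, 0.41), (120, 0.63), (180, 0.755), (240, 0.761)]
t_train = time_to_accuracy(curve, target=0.759)  # -> 240
```

Returning `None` for a run that never converges mirrors the benchmark rule that speed without the accuracy target yields no result at all.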

Throughput, often expressed as the number of training samples processed per second, provides an additional measure of system performance:

$$
T = \frac{N_{\text{samples}}}{T_{\text{train}}}
$$

where $N_{\text{samples}}$ is the total number of training samples processed. However, throughput alone does not guarantee meaningful results, as a model may process a large number of samples quickly without necessarily reaching the desired accuracy.

For example, in MLPerf Training, the benchmark for ResNet-50 may require reaching an accuracy target like 75.9% top-1 on the ImageNet dataset. A system that processes 10,000 images per second but fails to achieve this accuracy is not considered a valid benchmark result, while a system that processes fewer images per second but converges efficiently is preferable. This highlights why throughput must always be evaluated in relation to time-to-accuracy rather than as an independent performance measure.
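A small sketch of this relationship (the run below is hypothetical; ImageNet's training split has 1,281,167 images, and the 75.9% top-1 target is the one quoted above): throughput is only meaningful once the accuracy target has been met.

```python
# Sketch: throughput T = N_samples / T_train, computed over the
# time-to-accuracy window of a converged (hypothetical) run.
def throughput(n_samples: int, t_train: float) -> float:
    """Samples processed per second up to the accuracy target."""
    return n_samples / t_train

# Hypothetical MLPerf-style run: 90 epochs over the ImageNet training
# set, reaching the 75.9% top-1 target in 7,200 seconds (2 hours).
n_samples = 1_281_167 * 90
samples_per_sec = throughput(n_samples, 7_200.0)  # roughly 16,000 images/s
```

A rival system processing more images per second but never reaching 75.9% would report no throughput number at all, which is why MLPerf results are anchored to time-to-accuracy rather than raw samples per second.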
@@ -912,7 +910,7 @@ The future of benchmarking lies in an integrated approach that evaluates how sys

As AI continues to evolve, benchmarking must evolve with it. Understanding AI performance requires evaluating systems, models, and data together, ensuring that benchmarks drive not just higher accuracy, but also efficiency, fairness, and robustness. This holistic perspective will be critical for building AI that is not only powerful, but also practical and ethical.

-![Benchmarking trifecta.](images/png/benchmarking_trifecta.png){#fig-benchmarking-trifecta}
+![Benchmarking trifecta.](images/png/benchmarking_trifecta.png){#fig-benchmarking-trifecta width=65%}

## Conclusion

Binary file modified contents/core/benchmarking/images/png/end2end.png
Binary file modified contents/core/benchmarking/images/png/hardware_lottery.png
5 changes: 3 additions & 2 deletions tex/header-includes.tex
@@ -197,6 +197,7 @@
{0.5em}
{}
[\textbf{.}]
+\titlespacing*{\paragraph}{0pc}{6pt plus 2pt minus 2pt}{0.5em}[0pc]

% Redefine \subparagraph (if you want to apply the Crimson color here)
\titleformat{\subparagraph}[runin]
@@ -205,7 +206,7 @@
{0.5em}
{}
[\textbf{.}]
-
+\titlespacing*{\subparagraph}{0pc}{6pt plus 2pt minus 2pt}{0.5em}[0pc]
% Customize Chapter title format
\titleformat{\chapter}[display]
{\normalfont\huge\bfseries\color{crimson}} % Apply the crimson color
@@ -270,4 +271,4 @@
}
\makeatother

-\AtBeginEnvironment{longtable}{\scriptsize} % Adjust to \footnotesize or \scriptsize if needed
+\AtBeginEnvironment{longtable}{\scriptsize} % Adjust to \footnotesize or \scriptsize if needed
