Fix broken links caused by formatting
profvjreddi committed Jul 9, 2024
1 parent 500beb3 commit 6e333a0
Showing 3 changed files with 4 additions and 4 deletions.
4 changes: 2 additions & 2 deletions contents/frameworks/frameworks.qmd
@@ -54,7 +54,7 @@ In this chapter, we will explore today's leading cloud frameworks and how they have

Machine learning frameworks have evolved considerably over the past decade to meet the expanding needs of practitioners and the rapid advances in deep learning techniques. Early neural network research was constrained by insufficient data and computing power, and building and training machine learning models required extensive low-level coding and infrastructure. However, the release of large datasets like [ImageNet](https://www.image-net.org/) [@deng2009imagenet] and advancements in parallel GPU computing unlocked the potential for far deeper neural networks.

- The first ML frameworks, [Theano](<https://pypi.org/project/Theano/#:~:text=Theano> is a Python library, a similar interface to NumPy's.) by @al2016theano and [Caffe](https://caffe.berkeleyvision.org/) by @jia2014caffe, were developed by academic institutions (Montreal Institute for Learning Algorithms, Berkeley Vision and Learning Center). Amid growing interest in deep learning due to state-of-the-art performance of AlexNet @krizhevsky2012imagenet on the ImageNet dataset, private companies and individuals began developing ML frameworks, resulting in frameworks such as [Keras](https://keras.io/) by @chollet2018keras, [Chainer](https://chainer.org/) by @tokui2015chainer, TensorFlow from Google [@abadi2016tensorflow], [CNTK](https://learn.microsoft.com/en-us/cognitive-toolkit/) by Microsoft [@seide2016cntk], and PyTorch by Facebook [@paszke2019pytorch].
+ The first ML frameworks, [Theano](https://pypi.org/project/Theano/#:~:text=Theano) is a Python library, a similar interface to NumPy's.) by @al2016theano and [Caffe](https://caffe.berkeleyvision.org/) by @jia2014caffe, were developed by academic institutions (Montreal Institute for Learning Algorithms, Berkeley Vision and Learning Center). Amid growing interest in deep learning due to state-of-the-art performance of AlexNet @krizhevsky2012imagenet on the ImageNet dataset, private companies and individuals began developing ML frameworks, resulting in frameworks such as [Keras](https://keras.io/) by @chollet2018keras, [Chainer](https://chainer.org/) by @tokui2015chainer, TensorFlow from Google [@abadi2016tensorflow], [CNTK](https://learn.microsoft.com/en-us/cognitive-toolkit/) by Microsoft [@seide2016cntk], and PyTorch by Facebook [@paszke2019pytorch].

Many of these ML frameworks can be divided along two axes: high-level vs. low-level, and static vs. dynamic computational graphs. High-level frameworks provide a greater degree of abstraction, with pre-built functions and modules for common ML tasks such as creating, training, and evaluating models, preprocessing data, engineering features, and visualizing data. As a result, high-level frameworks tend to be easier to use but less customizable than low-level frameworks, whose users can define custom layers, loss functions, optimization algorithms, and so on. Examples of high-level frameworks include TensorFlow/Keras and PyTorch; examples of low-level ML frameworks include TensorFlow with its low-level APIs, Theano, Caffe, Chainer, and CNTK.
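To make the distinction concrete, here is a minimal sketch (assuming TensorFlow is installed; the layer sizes, toy loss, and variable names are illustrative, not drawn from the chapter) showing the same kind of model handled at both levels of abstraction:

```python
import tensorflow as tf

# High-level: Keras assembles pre-built layers, losses, and a training loop.
high_level_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1),
])
high_level_model.compile(optimizer="adam", loss="mse")
# high_level_model.fit(...) would handle batching, gradients, and weight updates.

# Low-level: define the forward pass, loss, and update step by hand.
w = tf.Variable(tf.random.normal([16, 1]))
b = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.Adam()

def train_step(x, y):
    with tf.GradientTape() as tape:                   # record ops for autodiff
        y_pred = tf.matmul(x, w) + b                  # custom forward pass
        loss = tf.reduce_mean(tf.square(y - y_pred))  # custom loss
    grads = tape.gradient(loss, [w, b])               # explicit gradients
    optimizer.apply_gradients(zip(grads, [w, b]))     # explicit weight update
    return loss
```

The high-level path hides batching, gradient computation, and weight updates behind `compile()`/`fit()`, while the low-level path exposes each of those steps for customization.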

@@ -417,7 +417,7 @@ On the other hand, [Tensor Processing Units](https://cloud.google.com/tpu/docs/i

While TPUs can drastically reduce training times, they also have disadvantages. For example, many operations within machine learning frameworks (primarily TensorFlow here, since the TPU integrates directly with it) are not supported on TPUs. TPUs also cannot support custom operations from the machine learning frameworks, and the network design must closely align with the hardware's capabilities.
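As a rough sketch of what that tight TensorFlow integration looks like in practice (this assumes a TPU runtime such as a Colab TPU is attached; the resolver argument and model are placeholders):

```python
import tensorflow as tf

# Connect to and initialize the TPU runtime (tpu="" resolves the Colab TPU address).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Variables and the model must be created inside the TPU strategy scope,
# and every op in the graph needs a TPU-compatible implementation.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
# model.fit(...) would then run the supported TensorFlow ops across the TPU cores.
```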

- Today, NVIDIA GPUs dominate training, aided by software libraries like [CUDA](https://developer.nvidia.com/cuda-toolkit), [cuDNN](https://developer.nvidia.com/cudnn), and [TensorRT.](<https://developer.nvidia.com/tensorrt#:~:text=NVIDIA> TensorRT-LLM is an,knowledge of C++ or CUDA.) Frameworks also include optimizations to maximize performance on these hardware types, like pruning unimportant connections and fusing layers. Combining these techniques with hardware acceleration provides greater efficiency. For inference, hardware is increasingly moving towards optimized ASICs and SoCs. Google's TPUs accelerate models in data centers. Apple, Qualcomm, and others now produce AI-focused mobile chips. The NVIDIA Jetson family targets autonomous robots.
+ Today, NVIDIA GPUs dominate training, aided by software libraries like [CUDA](https://developer.nvidia.com/cuda-toolkit), [cuDNN](https://developer.nvidia.com/cudnn), and [TensorRT.](https://developer.nvidia.com/tensorrt#:~:text=NVIDIA) TensorRT-LLM is an,knowledge of C++ or CUDA.) Frameworks also include optimizations to maximize performance on these hardware types, like pruning unimportant connections and fusing layers. Combining these techniques with hardware acceleration provides greater efficiency. For inference, hardware is increasingly moving towards optimized ASICs and SoCs. Google's TPUs accelerate models in data centers. Apple, Qualcomm, and others now produce AI-focused mobile chips. The NVIDIA Jetson family targets autonomous robots.
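As an illustration of the pruning optimization mentioned above, here is a hedged sketch using the `tensorflow-model-optimization` package (an assumed extra dependency; the model, sparsity targets, and schedule are illustrative only):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Wrap the model so low-magnitude weights are gradually zeroed out during fine-tuning.
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model,
    pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000
    ),
)
pruned_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# UpdatePruningStep keeps the sparsity schedule in sync with training steps:
# pruned_model.fit(x_train, y_train, epochs=2,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Remove the pruning wrappers before export; the zeroed connections remain sparse.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```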

![Companies offering ML hardware accelerators. Credit: [Gradient Flow.](https://gradientflow.com/one-simple-chart-companies-that-offer-deep-neural-network-accelerators/)](images/png/hardware_accelerator.png){#fig-hardware-accelerator}

2 changes: 1 addition & 1 deletion contents/labs/seeed/xiao_esp32s3/kws/kws.qmd
@@ -458,7 +458,7 @@ If you want to understand what is happening "under the hood," you can download

![](https://hackster.imgix.net/uploads/attachments/1595193/image_wi6KMb5EcS.png?auto=compress%2Cformat&w=740&h=555&fit=max)

- This CoLab Notebook can explain how you can go further: [KWS Classifier Project - Looking “Under the hood](<https://colab.research.google.com/github/Mjrovai/XIAO-ESP32S3-Sense/blob/main/KWS> Training/xiao_esp32s3_keyword_spotting_project_nn_classifier.ipynb).”
+ This CoLab Notebook can explain how you can go further: [KWS Classifier Project - Looking “Under the hood](https://colab.research.google.com/github/Mjrovai/XIAO-ESP32S3-Sense/blob/main/KWS) Training/xiao_esp32s3_keyword_spotting_project_nn_classifier.ipynb).”

## Testing

2 changes: 1 addition & 1 deletion contents/privacy_security/privacy_security.qmd
@@ -679,7 +679,7 @@ Techniques like de-identification, aggregation, anonymization, and federation ca

Many embedded ML applications handle sensitive user data under HIPAA, GDPR, and CCPA regulations. Understanding the protections mandated by these laws is crucial for building compliant systems.

- * The [HIPAA](<<https://www.hhs.gov/hipaa/for-professionals/privacy/index.html>) Privacy Rule governs medical data privacy and security in the US, with severe penalties for violations. Health-related embedded ML devices like diagnostic wearables or assistive robots would need to implement controls such as audit trails, access controls, and encryption, as prescribed by HIPAA.
+ * The [HIPAA](https://www.hhs.gov/hipaa/for-professionals/privacy/index.html) Privacy Rule governs medical data privacy and security in the US, with severe penalties for violations. Health-related embedded ML devices like diagnostic wearables or assistive robots would need to implement controls such as audit trails, access controls, and encryption, as prescribed by HIPAA.

* [GDPR](https://gdpr-info.eu/) imposes transparency, retention limits, and user rights on EU citizen data, even when processed by companies outside the EU. Smart home systems capturing family conversations or location patterns would need GDPR compliance. Key requirements include data minimization, encryption, and mechanisms for consent and erasure.

