# AI Training
::: {.callout-note collapse="true"}
## Learning Objectives
* coming soon.
:::
## Introduction
Explanation: An introductory section sets the stage for the reader, explaining what AI training is and why it's crucial, especially in the context of embedded systems. It aligns the reader's expectations and prepares them for the content that follows.
- Brief overview of what AI training entails
- Importance of training in the context of embedded AI
## Types of Training
Explanation: Understanding the different types of training methods is foundational. It allows the reader to appreciate the diversity of approaches and to select the most appropriate one for their specific embedded AI application.
- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
- Semi-supervised Learning
## Data Preparation
Explanation: Data is the fuel for AI. This section is essential because it guides the reader through the initial steps of gathering and preparing data, which is a prerequisite for effective training.
- Data Collection
- Data Annotation
- Data Augmentation
- Data Preprocessing
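
To make these steps concrete, below is a minimal NumPy sketch of preprocessing (scaling raw pixel values) and a simple augmentation (random horizontal flip plus brightness jitter). The image size and jitter range are illustrative assumptions, not prescriptions from this chapter.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale raw 0-255 pixel values into the [0, 1] range."""
    return image.astype(np.float32) / 255.0

def augment(image: np.ndarray) -> np.ndarray:
    """Randomly flip and brightness-jitter one (H, W, C) image."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]          # horizontal flip
    image = image * rng.uniform(0.8, 1.2)  # brightness jitter
    return np.clip(image, 0.0, 1.0)

# Example: one synthetic 32x32 RGB image
raw = rng.integers(0, 256, size=(32, 32, 3))
ready = augment(preprocess(raw))
```

Frameworks offer far richer augmentation pipelines, but the principle is the same: enlarge the effective training set without collecting new samples.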
## Training Algorithms
Explanation: This section delves into the algorithms that power the training process. It's crucial for understanding how models learn from data and how to implement these algorithms efficiently in embedded systems.
- Gradient Descent
- Backpropagation
- Optimizers (SGD, Adam, RMSprop, etc.)
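
As a concrete instance of gradient descent, the sketch below fits a one-variable linear model $y \approx wx + b$ by repeatedly stepping the parameters against the gradient of the mean squared error. The synthetic data, learning rate, and epoch count are illustrative choices; backpropagation extends this same update to deep networks by computing gradients layer by layer, and optimizers such as Adam adapt the step size per parameter.

```python
import numpy as np

# Synthetic data: y = 3x + 1 plus noise
rng = np.random.default_rng(seed=0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0  # model parameters
lr = 0.1         # learning rate (itself a hyperparameter)

for epoch in range(200):
    error = (w * x + b) - y
    # Gradients of mean squared error with respect to w and b
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Gradient descent update: step against the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=3, b=1
```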
## Training Environments
Explanation: Different training environments have their own pros and cons. This section helps the reader make informed decisions about where to train their models, considering factors like computational resources and latency.
- Local vs. Cloud
- Specialized Hardware (GPUs, TPUs, etc.)
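
In practice, training code is usually written to be device-agnostic, selecting whatever accelerator is available at run time. A short sketch using PyTorch (one assumed framework choice among several):

```python
import torch

# Use a GPU when present; fall back to the CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(10, 2).to(device)
batch = torch.randn(32, 10, device=device)
output = model(batch)  # runs on the selected device
```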
## Hyperparameter Tuning
Explanation: Hyperparameters can significantly impact the performance of a trained model. This section educates the reader on how to fine-tune these settings for optimal results, which is especially important for resource-constrained embedded systems.
- Learning Rate
- Batch Size
- Number of Epochs
- Regularization Techniques
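
A simple way to explore these settings is a grid search over candidate values, keeping whichever combination scores best on a validation set. In the sketch below, `train_and_validate` is a hypothetical stand-in for a real training run; it returns a toy score here so the loop is runnable.

```python
from itertools import product

def train_and_validate(lr: float, batch_size: int) -> float:
    """Hypothetical placeholder: train with these settings and
    return validation accuracy. A toy score stands in here."""
    return 1.0 - abs(lr - 0.01) - abs(batch_size - 32) / 1000.0

grid = {"lr": [1e-3, 1e-2, 1e-1], "batch_size": [16, 32, 64]}

best_score, best_cfg = float("-inf"), None
for lr, bs in product(grid["lr"], grid["batch_size"]):
    score = train_and_validate(lr=lr, batch_size=bs)
    if score > best_score:
        best_score, best_cfg = score, {"lr": lr, "batch_size": bs}

print(best_cfg, f"score={best_score:.3f}")
```

Because each trial is expensive on constrained hardware, smarter strategies such as random search or Bayesian optimization are often preferred over an exhaustive grid.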
## Evaluation Metrics
Explanation: Knowing how to evaluate a model's performance is crucial. This section introduces metrics that help in assessing how well the model will perform in real-world embedded applications.
- Accuracy
- Precision and Recall
- F1 Score
- ROC and AUC
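
For binary classification, these metrics reduce to simple counts over the prediction vector. A self-contained sketch with made-up labels:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # ground truth
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # model output

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives

accuracy  = np.mean(y_pred == y_true)
precision = tp / (tp + fp)  # of predicted positives, how many are real
recall    = tp / (tp + fn)  # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)

print(f"acc={accuracy:.2f} p={precision:.2f} r={recall:.2f} f1={f1:.2f}")
```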
## Overfitting and Underfitting
Explanation: Overfitting and underfitting are common pitfalls in AI training. This section is vital for teaching strategies to avoid these issues, ensuring that the model generalizes well to new, unseen data.
- Techniques to Avoid Overfitting (Dropout, Early Stopping, etc.)
- Understanding Underfitting and How to Address It
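
Early stopping, as one example, is easy to implement: monitor validation loss and halt once it fails to improve for a set number of epochs (the "patience"). In this sketch, `train_one_epoch` and `validate` are hypothetical placeholders; `validate` simulates a loss curve that bottoms out so the stopping logic fires.

```python
def train_one_epoch(epoch: int) -> None:
    """Hypothetical placeholder for one pass over the training data."""
    pass

def validate(epoch: int) -> float:
    """Toy validation loss: improves until epoch 12, then degrades."""
    return 0.30 + abs(epoch - 12) * 0.01

patience, best_loss, bad_epochs = 3, float("inf"), 0
for epoch in range(100):
    train_one_epoch(epoch)
    val_loss = validate(epoch)
    if val_loss < best_loss:
        best_loss, bad_epochs = val_loss, 0  # improvement: reset counter
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # no progress for `patience` epochs
            print(f"stopping at epoch {epoch}; best val loss {best_loss:.2f}")
            break
```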
## Transfer Learning
Explanation: Transfer learning can save time and computational resources, which is particularly beneficial for embedded systems. This section explains how to leverage pre-trained models for new tasks.
- Basics of Transfer Learning
- Applications in Embedded AI
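
A common recipe is to freeze a pretrained feature extractor and retrain only a small task-specific head, which sharply cuts both compute and data requirements. Below is a sketch using PyTorch and torchvision's MobileNetV2 (an assumed model and framework choice; the 5-class head is illustrative, and torchvision >= 0.13 is assumed for the `weights` argument).

```python
import torch
import torchvision

# Load an ImageNet-pretrained backbone (MobileNetV2 is a common
# pick for embedded targets due to its small footprint).
model = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1")

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a new 5-class task; only these
# weights receive gradients during fine-tuning.
model.classifier[1] = torch.nn.Linear(model.last_channel, 5)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
```

Because gradients flow only through the small new head, fine-tuning of this kind can sometimes be performed on-device rather than in the cloud.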
## Challenges and Best Practices
Explanation: Every technology comes with its own set of challenges. This section prepares the reader for potential hurdles in AI training, offering best practices to navigate them effectively.
- Computational Constraints
- Data Privacy
- Ethical Considerations
## Conclusion
Explanation: A summary helps to consolidate the key points of the chapter, aiding in better retention and understanding of the material.
- Key Takeaways
- Future Trends in AI Training for Embedded Systems