Commit 5ab11a7 (parent: eeefade): Simplified the training section and added transfer learning. 1 changed file with 68 additions and 65 deletions.

# AI Training

<!--
## Model Selection and Development
- Overview of ML Models
- Criteria for Model Selection
- Model Development Considerations in Embedded Systems
- Scalability and Resource Optimization

## Introduction

Explanation: An introductory section sets the stage for the reader, explaining what AI training is and why it's crucial, especially in the context of embedded systems. It helps to align the reader's expectations and prepares them for the upcoming content.

- Brief overview of what AI training entails
- Importance of training in the context of embedded AI

## Hyperparameter Tuning
- Understanding Hyperparameters
- Techniques for Hyperparameter Tuning
- Tuning for Embedded Systems
- Grid Search and Randomized Search Methods

## Limited training data - transfer learning
## Federated learning
-->

## Introduction

- Importance of ML Training
- Overview of ML Training Process

## Types of Training

Explanation: Understanding the different types of training methods is foundational. It allows the reader to appreciate the diversity of approaches and to select the most appropriate one for their specific embedded AI application.

- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
- Semi-supervised Learning

## Data Preparation

Explanation: Data is the fuel for AI. This section is essential because it guides the reader through the initial steps of gathering and preparing data, which is a prerequisite for effective training.

- Data Collection
- Data Annotation
- Data Cleaning
- Data Preprocessing
- Data Augmentation
- Feature Engineering
- Splitting the Data (Training, Validation, and Test Sets; a code sketch follows this list)
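
The splitting step is the one most easily shown in code. Below is a minimal sketch, assuming scikit-learn and NumPy are available; the synthetic data, 70/15/15 ratios, and random seed are illustrative choices, not prescribed by the chapter.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in dataset: 1000 samples, 8 features, binary labels.
X = np.random.rand(1000, 8)
y = np.random.randint(0, 2, size=1000)

# Carve off a 15% test set first, then split the remainder into
# training (70% of the total) and validation (15% of the total).
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.15 / 0.85, stratify=y_tmp, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # roughly 700 / 150 / 150
```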

## Model Selection
- Overview of Model Types
- Criteria for Model Selection
- Model Complexity and Capacity

## Training Algorithms

Explanation: This section delves into the algorithms that power the training process. It's crucial for understanding how models learn from data and how to implement these algorithms efficiently in embedded systems.

- Gradient Descent
  - Batch Gradient Descent
  - Stochastic Gradient Descent
  - Mini-Batch Gradient Descent (sketched in code after this list)
- Backpropagation
- Optimization Algorithms
  - Adam
  - RMSprop
  - Momentum
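
To make the list above concrete, here is a NumPy-only sketch of mini-batch gradient descent fitting a one-feature linear model; the data, learning rate, and batch size are invented for illustration.

```python
import numpy as np

# Synthetic linear-regression data: y = 3x + 1 plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(256)

w, b = 0.0, 0.0           # model parameters
lr, batch_size = 0.1, 32  # illustrative hyperparameters

for epoch in range(100):
    idx = rng.permutation(len(X))          # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        err = (w * xb + b) - yb            # prediction error on the batch
        # Gradients of mean squared error with respect to w and b.
        w -= lr * 2.0 * np.mean(err * xb)
        b -= lr * 2.0 * np.mean(err)

print(f"w={w:.2f}, b={b:.2f}")  # should approach 3 and 1
```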

## Loss Functions
- Mean Squared Error (MSE; see the sketch after this list)
- Cross-Entropy Loss
- Huber Loss
- Custom Loss Functions
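
As a sketch of the first two bullets, the following NumPy implementations compute MSE and binary cross-entropy from scratch; real projects would normally use a framework's built-in losses.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average squared difference."""
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy; eps guards against log(0)."""
    p = np.clip(p_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0])
print(mse(y, np.array([0.9, 0.2, 0.8])))                    # ~0.03
print(binary_cross_entropy(y, np.array([0.9, 0.2, 0.8])))   # ~0.18
```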

## Regularization Techniques
- L1 and L2 Regularization (an L2 sketch follows this list)
- Dropout
- Batch Normalization
- Early Stopping
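
A quick sketch of how an L2 penalty enters the loss; the weight vector and penalty strength below are placeholders for illustration.

```python
import numpy as np

def l2_regularized_loss(w, data_loss, lam=0.01):
    """Total loss = data loss + lam * ||w||^2 (lam is illustrative)."""
    return data_loss + lam * np.sum(w ** 2)

# The penalty's gradient contribution is 2 * lam * w, which shrinks
# weights toward zero at every update ("weight decay").
w = np.array([0.5, -1.2, 3.0])
print(l2_regularized_loss(w, data_loss=0.25))  # 0.25 + 0.01 * 10.69
```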

## Model Evaluation

Explanation: Knowing how to evaluate a model's performance is crucial. This section introduces metrics that help in assessing how well the model will perform in real-world embedded applications.

- Evaluation Metrics (a precision/recall sketch follows this list)
  - Accuracy
  - Precision and Recall
  - F1-Score
  - Confusion Matrix
  - ROC and AUC
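
These metrics follow directly from confusion-matrix counts. Here is a small NumPy sketch for the binary case, with invented labels for illustration.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = positive)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
print(precision_recall_f1(y_true, y_pred))  # (0.75, 0.75, 0.75)
```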

## Training Environments

Explanation: Different training environments have their own pros and cons. This section helps the reader make informed decisions about where to train their models, considering factors like computational resources and latency.

- Local vs. Cloud
- Specialized Hardware (GPUs, TPUs, etc.)

## Hyperparameter Tuning

Explanation: Hyperparameters can significantly impact the performance of a trained model. This section educates the reader on how to fine-tune these settings for optimal results, which is especially important for resource-constrained embedded systems.

- Learning Rate
- Batch Size
- Number of Epochs
- Regularization Techniques
- Grid Search (a code sketch follows this list)
- Random Search
- Bayesian Optimization
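
As a sketch of grid search, assuming scikit-learn is available; the estimator, grid values, and synthetic data are illustrative stand-ins rather than recommendations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Illustrative grid over regularization strength and solver tolerance;
# GridSearchCV exhaustively cross-validates every combination.
param_grid = {"C": [0.01, 0.1, 1.0, 10.0], "tol": [1e-4, 1e-3]}
search = GridSearchCV(LogisticRegression(max_iter=200), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```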

## Scaling Up Training
- Parallel Training
- Distributed Training
- Training with GPUs

## Overfitting and Underfitting

Explanation: Overfitting and underfitting are common pitfalls in AI training. This section is vital for teaching strategies to avoid these issues, ensuring that the model generalizes well to new, unseen data.

- Techniques to Avoid Overfitting (Dropout, Early Stopping, etc.; an early-stopping sketch follows this list)
- Understanding Underfitting and How to Address It
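
Early stopping is simple enough to sketch framework-free; `train_step` and `validate` below are hypothetical callables standing in for the caller's own training and evaluation code.

```python
def train_with_early_stopping(train_step, validate, max_epochs=100, patience=5):
    """Stop once validation loss fails to improve for `patience` epochs."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step()                 # run one epoch of training
        val_loss = validate()        # measure held-out performance
        if val_loss < best_loss:
            best_loss = val_loss
            epochs_without_improvement = 0  # still improving: reset counter
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print(f"Early stopping at epoch {epoch}")
                break
    return best_loss

# Toy usage with a scripted validation-loss curve that bottoms out at 0.7.
losses = iter([1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76])
print(train_with_early_stopping(lambda: None, lambda: next(losses)))  # 0.7
```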

## Transfer Learning

Explanation: Transfer learning can save time and computational resources, which is particularly beneficial for embedded systems. This section explains how to leverage pre-trained models for new tasks.

- Basics of Transfer Learning (a fine-tuning sketch follows this list)
- Applications in Embedded AI
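
A minimal fine-tuning sketch, assuming TensorFlow/Keras; the MobileNetV2 backbone, 96x96 input, and 10-class head are illustrative choices, and `train_images`/`train_labels` are hypothetical placeholders for the reader's own dataset.

```python
import tensorflow as tf

# Start from an ImageNet-pretrained MobileNetV2 backbone (a common
# pick for embedded targets) without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor

# Attach a small task-specific head; only these layers will train.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # then optionally unfreeze
```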

## Challenges and Best Practices

Explanation: Every technology comes with its own set of challenges. This section prepares the reader for potential hurdles in AI training, offering best practices to navigate them effectively.

- Computational Constraints
- Data Privacy
- Ethical Considerations

## Model Cards

## Conclusion

Explanation: A summary helps to consolidate the key points of the chapter, aiding in better retention and understanding of the material.

- Key Takeaways
- Future Trends in AI Training for Embedded Systems