This repo is part of the Certified Cloud Native Applied Generative AI Engineer program. It covers the fifth quarter of the coursework.
This course guides learners through fine-tuning open-source Large Language Models (LLMs) such as Meta LLaMA 3.1 using PyTorch, with a particular emphasis on cloud-native training and deployment. It covers everything from the fundamentals to advanced concepts, so students acquire both theoretical knowledge and practical skills. The journey begins with an introduction to LLMs, focusing on their architecture, capabilities, and the specific features of Meta LLaMA 3.1. Students will also set up their development environment, including tools like Anaconda, Jupyter Notebooks, and PyTorch, to prepare for hands-on learning.
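Once the tools above are installed, it helps to confirm they are importable before the first notebook session. Below is a minimal sketch of such a sanity check; the default package list (`torch`, `notebook`) is an assumption and should be adjusted to whatever your environment actually needs.

```python
import importlib.util
import sys

def check_environment(packages=("torch", "notebook")):
    """Report which of the expected packages are importable.

    The package names here are illustrative defaults, not an official
    requirements list for the course.
    """
    status = {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
    for pkg, ok in status.items():
        print(f"  {pkg}: {'installed' if ok else 'MISSING'}")
    return status
```

Running `check_environment()` in a fresh Anaconda environment quickly shows which pieces of the stack still need a `conda install` or `pip install`.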
Fine-tuning Meta LLaMA 3.1 with PyTorch forms a significant part of the course. Students will delve into the architecture of Meta LLaMA 3.1, learn how to load pre-trained models, and apply fine-tuning techniques.
The course culminates in a capstone project, where students apply all the skills they have learned to fine-tune and deploy Meta LLaMA 3.1 on a chosen platform. This project allows students to demonstrate their understanding and proficiency in the entire process, from data preparation to cloud-native deployment.
Resources:

- Introducing Llama 3.1: Our most capable models to date
- Insanely Fast LLAMA-3 on Groq Playground and API for FREE
- Introducing Llama-3-Groq-Tool-Use Models
- Llama 3 Groq 8B Tool Use - Install and Do Actual Function Calling Locally
- Superfast RAG with Llama 3 and Groq
- How to Fine Tune Llama 3 LLM (or) any LLM in Colab | PEFT | Unsloth