The repo contains related learnings from the course:
- The course provides a good overview, with substantial detail, of the stages of building a generative AI application: LLM selection, fine-tuning, optimization, human alignment, and leveraging orchestration frameworks for external data and tools.
- An LLM is only a starting point. Much more is involved in getting the model to generate the desired output and in building the application around it.
- Reinforced that a model is a black box with an input, an output, and parameters/weights that govern the output. Three levers exist to steer it toward the desired output (see the sketch after this list):
- Tweak the input through prompt engineering and tuning.
  - Tweak the model itself through training and feedback (pre-training, fine-tuning, human alignment, constitutional rules, data alignment, etc.).
- Use external data and tools.
- External tools, data, and orchestration frameworks play a critical role.
- While there has been significant progress in making this easier and more accessible, the bar is still high in terms of required ML expertise, and the necessary hardware is not broadly accessible. Broader ML adoption requires easier-to-use tools as well as accessible hardware.
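The sketch below illustrates the first and third levers in plain Python: a prompt template shapes the model input, and a toy keyword retriever injects external data as context. The `generate`, `retrieve`, and `answer` functions are hypothetical stand-ins, not the course's code or any specific library's API.

```python
# Minimal sketch of two of the levers above: shaping the input (prompt
# engineering) and augmenting it with external data (retrieval). `generate`
# is a hypothetical stand-in for any LLM completion API.

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (hosted or local)."""
    return f"<model output for: {prompt[:60]}...>"

# Lever 1: tweak the input. A prompt template wraps the raw question with
# instructions, format constraints, and a slot for retrieved context.
PROMPT_TEMPLATE = """You are a concise technical assistant.
Answer in at most three sentences.

Context:
{context}

Question: {question}
Answer:"""

# Lever 3: use external data. A toy retriever picks the documents most
# relevant to the question and injects them into the prompt as context.
DOCUMENTS = [
    "Fine-tuning adapts a pre-trained model's weights to a narrower task.",
    "Orchestration frameworks wire models to external data sources and tools.",
    "Human alignment techniques steer model behavior toward preferred outputs.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embeddings and vector search."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCUMENTS))
    prompt = PROMPT_TEMPLATE.format(context=context, question=question)
    return generate(prompt)

if __name__ == "__main__":
    print(answer("What do orchestration frameworks do?"))
```

The second lever (changing the weights through training and feedback) is not shown because it needs a training framework and data; the point is only that the input-side levers can be exercised with very little code.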
The blog post *Thoughts from a Generative AI Course* has additional thoughts.