drasko changed the title from "Feature: Fine-tune Phi-3 and/or TinyLlama models for Magistrala and Prism codebase" to "Feature: Fine-tune Phi-3 and TinyLlama models for Magistrala and Prism codebase" on Oct 9, 2024
JeffMboya changed the title from "Feature: Fine-tune Phi-3 and TinyLlama models for Magistrala and Prism codebase" to "Feature: Fine-tune TinyLlama and Qwen2.5-coder models for Magistrala and Prism codebase" on Oct 14, 2024
Is your feature request related to a problem? Please describe.
No
Describe the feature you are requesting, as well as the possible use case(s) for it.
Just as LLMs can be fine-tuned on custom data sets, so can SLMs.
We want to fine-tune:
- TinyLlama
- Qwen2.5-Coder
We want to fine-tune them on our custom Magistrala, Prism, and Cocos repositories, so that we can improve their code generation for our purposes (see the sketch after the next point).
We want to compare which model is better to fine-tune: better documented, easier to work with, faster, etc.
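As a starting point, a fine-tuning run could look like the minimal sketch below, using Hugging Face transformers with LoRA adapters via peft. The checkpoint name, the `repo_code.jsonl` dataset dump, and all hyperparameters are illustrative assumptions, not settled choices:

```python
# A minimal LoRA fine-tuning sketch. Model name, dataset file, and
# hyperparameters are placeholders to be decided during the actual work.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters so only a small set of
# parameters is trained, which keeps the run cheap for an SLM.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# "repo_code.jsonl" is a hypothetical dump of source files from the target
# repositories, one {"text": ...} record per file or chunk.
data = load_dataset("json", data_files="repo_code.jsonl", split="train")
data = data.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tinyllama-magistrala",
        num_train_epochs=1,
        per_device_train_batch_size=2,
        learning_rate=2e-4,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tinyllama-magistrala-lora")
```

For the comparison, the same script would be pointed at a Qwen2.5-Coder checkpoint; in principle only `model_name` and perhaps the adapter rank change between the two runs.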
Some references:
- When to apply RAG vs. fine-tuning: https://medium.com/@bijit211987/when-to-apply-rag-vs-fine-tuning-90a34e7d6d25

An analysis should be done on whether fine-tuning or RAG is the better fit for this purpose; a minimal sketch of the RAG option follows.
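For contrast, here is a hedged sketch of the RAG route: instead of updating model weights, embed repository snippets and retrieve the most relevant ones into the prompt at query time. The embedding model, the toy snippets, and the prompt template are all assumptions for illustration:

```python
# A minimal RAG retrieval sketch: embed repository snippets, retrieve the
# most similar ones for a query, and prepend them to the model prompt.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# In practice these would come from chunking the Magistrala/Prism/Cocos
# sources; two toy snippets stand in here.
snippets = [
    "func NewService(...) Service { ... }",
    "type Client struct { ... }",
]
corpus_emb = embedder.encode(snippets, convert_to_tensor=True)

query = "How do I create a new Magistrala service?"
query_emb = embedder.encode(query, convert_to_tensor=True)

# Pick the top-k snippets by cosine similarity and build an augmented prompt.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
context = "\n\n".join(snippets[h["corpus_id"]] for h in hits)
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # feed this to the (possibly unmodified) SLM
```

The comparison could then run the same code-generation queries against the fine-tuned models and against a base model prompted with retrieved context.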
Indicate the importance of this feature to you.
Must-have
Anything else?
No response