Replies: 1 comment 1 reply
-
The model size displayed on the Hugging Face model page only reflects a portion of the overall model: it does not include the parameters of the Llama model, which we keep frozen and load directly from the Llama repository. This is specified in config.json under "text_model_id": "meta-llama/Llama-3.3-70B-Instruct".
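The distinction is between the small set of newly trained parameters and the large frozen backbone pulled in from elsewhere. A minimal sketch of how total vs. trainable counts diverge, using tiny stand-in modules rather than the actual Ultravox or Llama weights (the layer sizes here are illustrative only):

```python
import torch.nn as nn

def count_params(model: nn.Module) -> tuple[int, int]:
    """Return (total, trainable) parameter counts for a model."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable

# Stand-in for the frozen Llama backbone: its weights are loaded but not trained.
backbone = nn.Linear(1000, 1000)
for p in backbone.parameters():
    p.requires_grad = False

# Stand-in for the small trainable part (e.g. an audio adapter/projector).
adapter = nn.Linear(1000, 10)

model = nn.Sequential(backbone, adapter)
total, trainable = count_params(model)
print(f"total={total:,} trainable={trainable:,}")
```

A model page that reports only the weights stored in that repository (here, the adapter) would show a number close to `trainable`, even though running the full model requires all `total` parameters.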
-
Hey everyone!
Sorry if this question is too simple, but I'm confused about the number of parameters ultravox-v0_5-llama-3_3-70b has. According to the Hugging Face model hub, it has 696M parameters (image below). However, since it was "built around a pretrained Llama3.3-70B-Instruct", shouldn't it have billions of parameters instead of millions? Thanks in advance!
All the best,
Bruno