migrate PR [LLM Runtime] Magicoder graph #41
Conversation
Signed-off-by: intellinjun <[email protected]>
LGTM
@@ -178,6 +181,11 @@ def loadHFTransformerJson(model: 'LazyModel', config_path: Path) -> 'Params':
ffn_hidden_size = config["intermediate_size"]
rms_norm_eps = config["rms_norm_eps"]
rope_theta = config["rope_theta"] if "rope_theta" in config else 10000
rope_scale = 1
if config["rope_scaling"]:
Please check whether "rope_scaling" is in config.
- if config["rope_scaling"]:
+ if "rope_scaling" in config and config["rope_scaling"] is not None:
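The suggested guard can be illustrated with a minimal, self-contained sketch. The `config` dict and key names mirror a Hugging Face `config.json`; the function name and the fallback values are assumptions for illustration, not the actual convert-script code:

```python
# Minimal sketch of defensive RoPE-parameter parsing, in the spirit of the
# suggested change. Key names follow a Hugging Face config.json.
def parse_rope_params(config: dict) -> tuple:
    rope_theta = config["rope_theta"] if "rope_theta" in config else 10000
    rope_scale = 1
    # Guard against both a missing key and an explicit null value,
    # since many configs ship with "rope_scaling": null.
    if "rope_scaling" in config and config["rope_scaling"] is not None:
        rope_scale = config["rope_scaling"].get("factor", 1)
    return rope_theta, rope_scale
```

The bare `if config["rope_scaling"]:` in the original hunk raises `KeyError` for any model whose config simply omits the key, which is why the reviewer asks for the membership check.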
@@ -179,6 +180,8 @@ def loadHFTransformerJson(model: 'LazyModel', config_path: Path) -> 'Params':
ffn_hidden_size = config["intermediate_size"]
rms_norm_eps = config["rms_norm_eps"]
rope_theta = config["rope_theta"] if "rope_theta" in config else 10000
rope_scale = config["factor"] if "factor" in config else 1
Mistral should align with Llama.
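For context: in Hugging Face configs, `"factor"` is nested inside the `"rope_scaling"` object rather than at the top level, so `config["factor"]` would always miss it and fall back to the default. A sketch of the aligned lookup (the helper name is hypothetical):

```python
# Sketch of reading the RoPE scale factor the way the Llama path does.
# In config.json the factor lives under "rope_scaling", e.g.
# {"rope_scaling": {"type": "linear", "factor": 2.0}}, so a top-level
# config["factor"] lookup never finds it.
def get_rope_scale(config: dict) -> float:
    scaling = config.get("rope_scaling")
    return scaling.get("factor", 1) if scaling else 1
```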
Please help update convert_quantized_llama.py and convert_quantized_mistral.py.
Signed-off-by: intellinjun <[email protected]>
quant_script="./build/bin/quant_llama"
convert_script="${convert_script}/convert_bmagicoder.py"
convert_script="${convert_script}/convert_llama.py"
why use llama?
Type of Change
feature or bug fix or documentation or others
API changed or not
Description
detail description
Issues: xxx
Expected Behavior & Potential Risk
the expected behavior triggered by this PR
How has this PR been tested?
how to reproduce the test (including hardware information)
Dependency Change?
any library dependency introduced or removed