Error using custom imported Llama3.1 model on Bedrock #228
Comments
@mgaionWalit: Thanks for the reply. Is there a temporary workaround?
@mgaionWalit: Hi @3coins, thanks for your support. I've updated langchain-aws to the latest available version (0.2.4) and tried to initialize the custom model with ChatBedrockConverse, but I still get an error when trying to start a chat with streaming. Am I doing something wrong? Do you have any suggestions on how to make it work?
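For concreteness, here is a minimal sketch of the kind of initialization and streaming call described in this comment. The ARN, region, and `provider="meta"` value are placeholder assumptions, not values taken from the issue:

```python
from langchain_aws import ChatBedrockConverse
from langchain_core.messages import HumanMessage

# Placeholder ARN for a custom imported model; replace with your own.
MODEL_ARN = "arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123"

# provider is given explicitly because an imported-model ARN does not
# reveal which foundation-model family the fine-tune is based on.
llm = ChatBedrockConverse(
    model_id=MODEL_ARN,
    provider="meta",
    region_name="us-east-1",
)

# Streaming call of the kind the comment reports as failing.
for chunk in llm.stream([HumanMessage(content="Hello!")]):
    # Converse-based chunks may carry content as a list of blocks.
    print(chunk.content, end="", flush=True)
```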
We are using a fine-tuned version of Llama 3.1-instruct, uploaded to Bedrock as a custom imported model. Since we reference it by an ARN model ID (which does not contain any information about the underlying foundation model), we encountered an issue.
In chat_models/bedrock.py, at line 349, an if statement evaluates the model string to choose between Llama 2 and Llama 3 prompt conversion. In our case we need convert_messages_to_prompt_llama3, but the logic falls into the else branch, which uses convert_messages_to_prompt_llama.
Is there any solution to ensure the correct conversion function is used?
Thank you!
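For readers unfamiliar with the code path, here is an illustrative, simplified sketch of the dispatch the issue describes. The converter names match those cited above, but the function bodies and the exact condition are assumptions for illustration, not the langchain-aws source:

```python
# Illustrative stand-ins for the converters cited in the issue; the real
# implementations live in langchain_aws.chat_models.bedrock.

def convert_messages_to_prompt_llama(messages):
    # Llama 2-style prompt formatting.
    return "".join(f"[INST] {text} [/INST]" for text in messages)

def convert_messages_to_prompt_llama3(messages):
    # Llama 3-style prompt formatting with header tokens.
    return "".join(
        f"<|start_header_id|>user<|end_header_id|>\n\n{text}<|eot_id|>"
        for text in messages
    )

def convert_messages_to_prompt(model: str, messages):
    # The shape of the dispatch the issue describes: an imported-model ARN
    # such as "arn:aws:bedrock:...:imported-model/abc123" contains no
    # "llama3" substring, so the else branch (Llama 2 formatting) is
    # selected even for a Llama 3.1 fine-tune.
    if "llama3" in model:
        return convert_messages_to_prompt_llama3(messages)
    return convert_messages_to_prompt_llama(messages)

print(convert_messages_to_prompt(
    "arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123",
    ["Hello!"],
))
```

This is why an ARN-based model ID defeats a substring check on the model name: the check needs the foundation-model family, but the ARN carries only the imported-model identifier.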