In the code, the tensor is forcibly cast to float32, while `cos` and `sin` may keep a different dtype because the model runs in bfloat16, leaving the operands of the rotary embedding inconsistent.
This problem occurred when running the Qwen2.5-VL model with Flash Attention 2 and bf16.
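A minimal sketch of the mismatch described above (this is illustrative, not the actual transformers source; the helper name and shapes are hypothetical). The query/key tensor is forcibly upcast to float32 while `cos`/`sin` remain in the model's bfloat16, so the arithmetic silently runs in mixed precision:

```python
import torch

def apply_rotary_sketch(tensor, cos, sin):
    # Hypothetical helper mirroring the problematic pattern:
    # only `tensor` is forcibly cast to float32.
    orig_dtype = tensor.dtype
    tensor = tensor.float()  # forced float32 upcast
    # cos/sin are still bfloat16 here; PyTorch type promotion upcasts
    # them on the fly, so the math runs in float32 even though the
    # model (and Flash Attention 2) expect a consistent bf16 path.
    rotated = tensor * cos + tensor * sin  # stand-in for the real rotation
    return rotated.to(orig_dtype)

q = torch.randn(2, 4, dtype=torch.bfloat16)
cos = torch.ones(2, 4, dtype=torch.bfloat16)
sin = torch.zeros(2, 4, dtype=torch.bfloat16)

out = apply_rotary_sketch(q, cos, sin)
```

A consistent fix would be to cast `cos`/`sin` alongside the tensor (e.g. `cos.float()`, `sin.float()`) before the rotation, or to keep the whole computation in the model dtype.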