With all due respect, the versions in requirements.txt are quite dated (transformers 4.36). Could you please find a solution that works with the current transformers version?
Also, please reduce requirements.txt from 180 packages to a manageable minimum, and avoid pinning specific versions unless critical.
I was running speedup.sh with a Llama model but got an error trace. The error comes from Consistency_LLM/cllm/cllm_llama_modeling.py, line 154 (commit b2a7283). The code needs to be updated to:
if self.model.config._attn_implementation == 'flash_attention_2':
Do I need to change the model config to check the speed of the base model with Jacobi iteration?
Base model: meta-llama/Meta-Llama-3-8B-Instruct
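For context, here is a minimal sketch of a version-tolerant check along the lines suggested above; it is not the repository's actual fix. The helper name uses_flash_attention_2 is made up for illustration, the fallback to config._flash_attn_2_enabled assumes older transformers releases that predate config._attn_implementation, and the loading call only shows the standard attn_implementation argument of from_pretrained rather than any CLLM-specific setup.

```python
# Minimal sketch (not the repository's code) of a transformers-version-tolerant
# flash-attention check. Newer transformers record the backend in
# config._attn_implementation; older releases set the boolean
# config._flash_attn_2_enabled, so the sketch falls back to that.
import torch
from transformers import AutoModelForCausalLM


def uses_flash_attention_2(model) -> bool:
    config = model.config
    attn_impl = getattr(config, "_attn_implementation", None)
    if attn_impl is not None:
        return attn_impl == "flash_attention_2"
    # Fallback for older transformers versions that predate _attn_implementation.
    return bool(getattr(config, "_flash_attn_2_enabled", False))


if __name__ == "__main__":
    # Requesting flash attention explicitly at load time; attn_implementation is
    # a standard from_pretrained argument in transformers >= 4.36, so editing the
    # saved model config by hand should not be necessary for a speed test.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Meta-Llama-3-8B-Instruct",
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",
        device_map="auto",
    )
    print(uses_flash_attention_2(model))  # expected: True
```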