Is Yi-1.5-9B not supported by the vLLM or LMDeploy acceleration backends? I can't figure out why evaluation takes so long: evaluating MMLU with LLaMA-Factory takes about 30 minutes, while OpenCompass takes at least 2 hours on the same four GPUs #1237
Replies: 3 comments 4 replies
-
Same question here; I ran into this problem as well when evaluating Qwen2: WARNING - Unsupported model type opencompass.models.huggingface_above_v4_33.HuggingFacewithChatTemplate, will keep the original model.
-
Thanks for the report; we will add inference-backend support for Yi-1.5-9B.
-
Please try the latest version, and feel free to re-open this issue if needed.
-
Hardware: 4×4090
(opencompass) ubuntu@ly-rq-214-23-49-1:~/opencompass$ CUDA_VISIBLE_DEVICES=2,3 python run.py --datasets gsm8k_gen --hf-type chat --hf-path /home/ubuntu/.cache/huggingface/hub/models--01-ai--Yi-1.5-9B-Chat-16K/snapshots/2b397e5f0fab87984efa66856c5c4ed4bbe68b50/ --hf-num-gpus 2 --max-seq-len 2048 --max-out-len 256 -a vllm -r --dump-eval-details --model-kwargs device_map='auto' trust_remote_code=True --max-workers-per-gpu 1 --max-num-workers 2
06/13 10:11:06 - OpenCompass - INFO - Loading gsm8k_gen: configs/datasets/gsm8k/gsm8k_gen.py
06/13 10:11:06 - OpenCompass - INFO - Transforming _hf to vllm
06/13 10:11:06 - OpenCompass - WARNING - Unsupported model type opencompass.models.huggingface_above_v4_33.HuggingFacewithChatTemplate, will keep the original model.
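The warning above suggests that the `-a vllm` CLI flag only knows how to transform certain model wrapper types, and falls back to the plain HuggingFace runner (which explains the slow evaluation). As a workaround, one can bypass the CLI transform and declare a vLLM-backed model directly in a config file. The following is a minimal sketch, assuming a recent OpenCompass that ships a `VLLMwithChatTemplate` wrapper; the exact class name, field names, and the model `abbr` are assumptions taken from the current API and may differ across versions:

```python
# Hypothetical OpenCompass config fragment: run Yi-1.5-9B-Chat-16K
# through vLLM directly instead of relying on the `-a vllm` transform.
from opencompass.models import VLLMwithChatTemplate  # class name assumed

models = [
    dict(
        type=VLLMwithChatTemplate,
        abbr='yi-1.5-9b-chat-16k-vllm',        # arbitrary label
        path='01-ai/Yi-1.5-9B-Chat-16K',       # HF repo or local snapshot path
        model_kwargs=dict(tensor_parallel_size=2),  # split across 2 GPUs
        max_seq_len=2048,
        max_out_len=256,
        batch_size=16,
        run_cfg=dict(num_gpus=2),
    )
]
```

Such a config would then be passed to `run.py` via `--models` (or referenced from a full eval config) instead of the `--hf-path ... -a vllm` combination shown above.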