When trying to run openai_api_server.py under basic_demo, I get an error. Details below:
```shell
/GLM-4/basic_demo# pip install -r requirements.txt
```

```python
from modelscope import snapshot_download

model_dir = snapshot_download('ZhipuAI/glm-4-9b-chat-hf')
```

```python
import os
import subprocess

# text-model THUDM/glm-4-9b-chat
MODEL_PATH = os.environ.get('MODEL_PATH', '/root/autodl-tmp/data/ZhipuAI/glm-4-9b-chat-hf')
# vision-model THUDM/glm-4v-9b
# MODEL_PATH = os.environ.get('MODEL_PATH', 'THUDM/glm-4v-9b')
```

```shell
/GLM-4/basic_demo# python openai_api_server.py
```
The error message is as follows:
```
INFO 02-13 10:10:48 __init__.py:190] Automatically detected platform cuda.
Traceback (most recent call last):
  File "/root/autodl-tmp/data/GLM-4/basic_demo/glm_server.py", line 661, in <module>
    engine_args = AsyncEngineArgs(
                  ^^^^^^^^^^^^^^^^
TypeError: AsyncEngineArgs.__init__() got an unexpected keyword argument 'worker_use_ray'
```
Thanks for your support!!
I ran into the same problem. Could someone please explain?
Solved it, please see the screenshot. Ha.
I guess the engine was updated and this parameter no longer exists.
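The traceback points at the `worker_use_ray` keyword, which newer vLLM releases no longer accept in `AsyncEngineArgs`, so the simplest fix is to delete that argument from the `AsyncEngineArgs(...)` call in openai_api_server.py. As a more defensive sketch, you can filter out keyword arguments a constructor no longer supports before calling it. The `AsyncEngineArgs` below is a hypothetical stub standing in for the real vLLM class, just to make the example self-contained:

```python
import inspect
from dataclasses import dataclass


# Hypothetical stand-in for vllm.engine.arg_utils.AsyncEngineArgs after the
# update that removed `worker_use_ray`; only `model` is kept here.
@dataclass
class AsyncEngineArgs:
    model: str


def filter_supported_kwargs(cls, kwargs):
    """Drop any keyword arguments that cls.__init__ no longer accepts."""
    accepted = inspect.signature(cls.__init__).parameters
    return {k: v for k, v in kwargs.items() if k in accepted}


# The old demo code passed worker_use_ray; filtering removes it, so the
# constructor no longer raises TypeError.
kwargs = {
    'model': '/root/autodl-tmp/data/ZhipuAI/glm-4-9b-chat-hf',
    'worker_use_ray': True,
}
engine_args = AsyncEngineArgs(**filter_supported_kwargs(AsyncEngineArgs, kwargs))
print(engine_args.model)
```

This only papers over the mismatch; pinning the vLLM version the demo was written against, or editing the call site directly, is the cleaner fix.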
zRzRzRzRzRzRzR