
Configured local Ollama model qwen2:7b is not used; the original default model is still being called #2012

Open
RLStudy666 opened this issue Jan 15, 2025 · 4 comments
Assignees: jerryjzhang
Labels: question (Further information is requested)

@RLStudy666

Describe your question

(Screenshots attached: 2025-01-15_151933, 2025-01-15_151956, 2025-01-15_152011)

The configured Ollama connection test succeeds, but the system still uses the original OpenAI default model. After disconnecting from the network it no longer works, which further confirms that the local model configuration is not taking effect. Hoping this can be resolved, thanks.

Your organization

Individual

RLStudy666 added the question (Further information is requested) label on Jan 15, 2025
@szsuyuji

You also need to change the last one of the models that are not enabled; this is a bug.

@RLStudy666
Author

You also need to change the last one of the models that are not enabled; this is a bug.

Thank you very much for your reply. Could you explain in a bit more detail? Do you mean that in the LLM configuration, every model has to be changed to the local qwen2 for it to take effect? Looking forward to your reply 🙏

@keenki

keenki commented Jan 16, 2025

Each application scenario can have its model configured separately. You should first open the casual-chat agent, then change that application's model to the local one.

jerryjzhang self-assigned this Jan 17, 2025
@szsuyuji
You also need to change the last one of the models that are not enabled; this is a bug.

Thank you very much for your reply. Could you explain in a bit more detail? Do you mean that in the LLM configuration, every model has to be changed to the local qwen2 for it to take effect? Looking forward to your reply 🙏

Yes, change the model for every scenario to your local one. Looking at the code, it loops over the model settings of all scenarios and ends up taking the value of the last one.
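A minimal sketch of the "last value wins" behavior described above, assuming a hypothetical scenario-to-model map (this is illustrative code only, not the actual project source; all class, variable, and model names here are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration of the bug pattern: every scenario's model setting
// is read in a loop, but the result is written to a single variable, so only
// the last scenario's model actually takes effect.
public class ModelConfigDemo {

    public static void main(String[] args) {
        // Assumed scenario -> model mapping; keys and values are examples only.
        Map<String, String> scenarioModels = new LinkedHashMap<>();
        scenarioModels.put("chat", "qwen2:7b");                    // switched to local Ollama
        scenarioModels.put("sql-generation", "qwen2:7b");          // switched to local Ollama
        scenarioModels.put("unused-scenario", "gpt-3.5-turbo");    // default left unchanged

        // Buggy pattern: the loop overwrites the same variable on each iteration,
        // so the effective model is whatever the last scenario is configured with.
        String effectiveModel = null;
        for (Map.Entry<String, String> entry : scenarioModels.entrySet()) {
            effectiveModel = entry.getValue();
        }

        // Prints "gpt-3.5-turbo" unless every scenario, including the last/unused
        // one, is also switched to the local model -- matching the workaround above.
        System.out.println("Effective model: " + effectiveModel);
    }
}
```

This is why changing only the chat scenario may not be enough: if a later, unused scenario still points at the OpenAI default, that value overrides the local qwen2 setting.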
