[Bug]: blank model name in settings causes problems #5169
Comments
I didn't know it was possible to have no model name :D. That's an interesting configuration.
I put aaa as the provider and ccc as the model name, and got the same error.
The settings in OpenHands are hard to understand. Why does it define a fixed set of available providers and models, and even present a list of models when Ollama is chosen as the provider? Developers using Ollama define their own models or try the newest ones, e.g. qwen2.5. Ollama works very well with these models, but OpenHands rules out a lot of scenarios. My understanding was that OpenHands should be a simple prompt agent, responsible for translating code instructions into prompts and sending them to any backend LLM provider: ChatGPT, Claude, xAI, and so on, or locally Ollama, llama.cpp, or other agents. Obviously I was wrong; could you please tell me where?
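For context, OpenHands routes model calls through LiteLLM, so the model setting is ultimately a LiteLLM model string rather than only an entry from the dropdown. A minimal sketch of that convention, assuming a locally pulled Ollama model and the default Ollama port (both illustrative, not taken from this issue):

```python
# Sketch: LiteLLM addresses any Ollama model as "ollama/<name>",
# so a custom model like qwen2.5 is not limited to a predefined list.
from litellm import completion

response = completion(
    model="ollama/qwen2.5",             # any model pulled locally with `ollama pull`
    api_base="http://localhost:11434",  # default Ollama server address (assumption)
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

If the UI accepts free-form input in the advanced settings, the same model string should be usable there directly.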
Is there an existing issue for the same bug?
Describe the bug and reproduction steps
When I run a local LLM service with llama.cpp, I have verified that it works fine without any API key or model name. However, when I set the same URL in OpenHands, it always shows an error in the Docker console.
Does this mean users have no way to use a local model other than Ollama?
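Since llama.cpp's server exposes an OpenAI-compatible API, one hedged workaround is to address it through LiteLLM's `openai/` prefix with a placeholder key. A minimal sketch, assuming the server runs on port 8080 with the `/v1` API path; the model name and key below are illustrative, since llama.cpp serves whatever model it was started with:

```python
# Sketch: treating a llama.cpp server as a generic OpenAI-compatible endpoint.
from litellm import completion

response = completion(
    model="openai/local",                 # "openai/" prefix = any OpenAI-compatible endpoint
    api_base="http://localhost:8080/v1",  # llama.cpp server URL (assumption)
    api_key="dummy",                      # the server ignores it, but the client may require a value
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```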
OpenHands Installation
Docker command in README
OpenHands Version
latest docker image
Operating System
Linux
Logs, Errors, Screenshots, and Additional Context