
[Bug]: blank model name in settings causes problems #5169

Open
1 task done
adrianzhang opened this issue Nov 21, 2024 · 3 comments
Labels
bug (Something isn't working), severity:low (Minor issues or affecting single user)

Comments

@adrianzhang

Is there an existing issue for the same bug?

  • I have checked the existing issues.

Describe the bug and reproduction steps

When I created a local LLM service with llama.cpp, I verified that it works fine without any API key or model name. However, when I set the same URL in OpenHands, it always shows an error in the Docker console.

Does this mean users have no way to use a local model other than Ollama?

OpenHands Installation

Docker command in README

OpenHands Version

latest docker image

Operating System

Linux

Logs, Errors, Screenshots, and Additional Context

 File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 290, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=undefined/undefined
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
07:23:21 - openhands:ERROR: agent_controller.py:209 - [Agent Controller 254da826-91e5-4bfa-b129-7cbd32d093d8] Error while running the agent: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=undefined/undefined
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
07:23:21 - openhands:INFO: agent_controller.py:323 - [Agent Controller 254da826-91e5-4bfa-b129-7cbd32d093d8] Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR
07:23:21 - openhands:INFO: agent_controller.py:323 - [Agent Controller 254da826-91e5-4bfa-b129-7cbd32d093d8] Setting agent(CodeActAgent) state from AgentState.ERROR to AgentState.ERROR
07:23:21 - OBSERVATION
[Agent Controller 254da826-91e5-4bfa-b129-7cbd32d093d8] AgentStateChangedObservation(content='', agent_state=<AgentState.ERROR: 'error'>, observation='agent_state_changed')
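For context: litellm derives the provider from the prefix before the first "/" in the model string, so a blank provider and model arrive as undefined/undefined, which it cannot route. Below is a minimal sketch (placeholder URL and model id, not the exact OpenHands code path) of a direct litellm call against a llama.cpp server's OpenAI-compatible endpoint using an explicit provider prefix:

```python
# Minimal sketch, assuming a llama.cpp server exposing its
# OpenAI-compatible API at http://localhost:8080/v1 (placeholder URL).
# The "openai/" prefix tells litellm which provider adapter to use;
# the part after "/" is passed through to the server, which llama.cpp
# typically ignores, so any non-empty id works.
import litellm

response = litellm.completion(
    model="openai/local-model",           # provider prefix + arbitrary model id
    api_base="http://localhost:8080/v1",  # llama.cpp server (placeholder)
    api_key="dummy",                      # llama.cpp does not check the key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```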
@mamoodi
Collaborator

mamoodi commented Nov 21, 2024

I didn't know it's possible to have no model name :D . That's an interesting configuration.
What happens if you just put some random string in the model box?

mamoodi added the severity:low (Minor issues or affecting single user) label on Nov 21, 2024
@adrianzhang
Author

I didn't know it's possible to have no model name :D . That's an interesting configuration. What happens if you just put some random string in the model box?

I put aaa as the provider and ccc as the model name, and got the same error.

  File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 290, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aaa/ccc
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
06:07:24 - openhands:ERROR: agent_controller.py:209 - [Agent Controller e3d605ec-3f77-4e38-97f4-7ac72ff47b5a] Error while running the agent: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aaa/ccc
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
06:07:24 - openhands:INFO: agent_controller.py:323 - [Agent Controller e3d605ec-3f77-4e38-97f4-7ac72ff47b5a] Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR
06:07:24 - openhands:INFO: agent_controller.py:323 - [Agent Controller e3d605ec-3f77-4e38-97f4-7ac72ff47b5a] Setting agent(CodeActAgent) state from AgentState.ERROR to AgentState.ERROR
06:07:24 - OBSERVATION
[Agent Controller e3d605ec-3f77-4e38-97f4-7ac72ff47b5a] AgentStateChangedObservation(content='', agent_state=<AgentState.ERROR: 'error'>, observation='agent_state_changed')
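This matches what the traceback shows: the failing function only accepts prefixes from litellm's known provider list, so an arbitrary prefix like aaa fails the same way as undefined. A rough sketch (hypothetical model ids) that reproduces the check directly:

```python
# Rough sketch: litellm.get_llm_provider() is the function raising the
# BadRequestError in the traceback above; it rejects unknown provider prefixes.
import litellm

for model in ["aaa/ccc", "openai/ccc"]:
    try:
        _, provider, _, _ = litellm.get_llm_provider(model=model)
        print(f"{model!r} -> provider {provider!r}")
    except litellm.exceptions.BadRequestError:
        print(f"{model!r} -> BadRequestError (unknown provider prefix)")
```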

@adrianzhang
Author

adrianzhang commented Nov 22, 2024

The settings in OpenHands are hard to understand. Why does it define a fixed set of providers and models, and even show a list of models when Ollama is chosen as the provider? Developers using Ollama usually define their own models or try the newest ones, e.g. qwen2.5. Ollama works with these models very well, but OpenHands rules out a lot of scenarios.

My understanding was that OpenHands is a simple prompt agent, responsible for translating coding instructions into prompts and sending them to any backend LLM provider: ChatGPT, Claude, X-AI, .... and, locally, Ollama, llama.cpp, or other agents... However, obviously I was wrong. Could you please let me know where I was wrong?
