[Bug]: TypeError: can only concatenate str (not "dict") to str #6958

Open
linsida1 opened this issue Nov 28, 2024 · 4 comments · May be fixed by #6989
Labels
awaiting: user response bug Something isn't working

Comments

@linsida1

What happened?

Environment

  • autogen 0.4
  • litellm 1.53.1
  • ollama 0.3.14
  • ollama model: qwen2.5:14b-instruct-q4_K_M

Information

I use autogen + litellm + ollama for local testing.
When a tool call is made, litellm raises an error in the token_counter method: TypeError: can only concatenate str (not "dict") to str.

Debugging with VS Code, I found that function_arguments is a dict, not a str.

Can someone help check whether this is a bug?
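For reference, a minimal call that seems to trigger the same TypeError directly in litellm.token_counter (the payload below is illustrative, not taken from my application):

import litellm

# Assistant turn carrying a tool call whose "arguments" is a dict
# instead of the usual JSON-encoded string.
messages = [
    {"role": "user", "content": "What is the weather in Paris?"},
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "call_0",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": {"city": "Paris"},  # dict, not a str
                },
            }
        ],
    },
]

# On litellm 1.53.1 this raises:
# TypeError: can only concatenate str (not "dict") to str
litellm.token_counter(messages=messages)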
Could line 1638 of utils.py simply be changed from
text += function_arguments
to
text += str(function_arguments)
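Or, as a slightly more defensive variant (just a sketch of the idea, not an actual patch), dict arguments could be counted as their JSON form:

import json

if isinstance(function_arguments, dict):
    # count the JSON form so the estimate matches what goes over the wire
    text += json.dumps(function_arguments)
else:
    text += function_arguments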

Relevant log output

Error processing publish message
Traceback (most recent call last):
  File ".venv/lib/python3.11/site-packages/litellm/main.py", line 481, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/llms/ollama_chat.py", line 612, in ollama_acompletion
    raise e  # don't use verbose_logger.exception, if exception is raised
    ^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/llms/ollama_chat.py", line 594, in ollama_acompletion
    prompt_tokens = response_json.get("prompt_eval_count", litellm.token_counter(messages=data["messages"]))  # type: ignore
                                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/utils.py", line 1638, in token_counter
    text += function_arguments
TypeError: can only concatenate str (not "dict") to str


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".venv/lib/python3.11/site-packages/autogen_core/application/_single_threaded_agent_runtime.py", line 402, in _process_publish
    await asyncio.gather(*responses)
  File ".venv/lib/python3.11/site-packages/autogen_core/application/_single_threaded_agent_runtime.py", line 394, in _on_message
    return await agent.on_message(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/autogen_core/components/_routed_agent.py", line 484, in on_message
    return await h(self, message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/autogen_core/components/_routed_agent.py", line 148, in wrapper
    return_value = await func(self, message, ctx)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "src/autogen_service/ag_core/hand_offs.py", line 57, in handle_task
    llm_result = await self._model_client.create(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "src/autogen_service/ag_exts/models/litellm/_litellm_client.py", line 432, in create
    result: Union[ParsedChatCompletion[BaseModel], ChatCompletion] = await future
                                                                     ^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/utils.py", line 1175, in wrapper_async
    raise e
  File ".venv/lib/python3.11/site-packages/litellm/utils.py", line 1031, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/main.py", line 503, in acompletion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2136, in exception_type
    raise e
  File ".venv/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2112, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: can only concatenate str (not "dict") to str
Traceback (most recent call last):
  File ".venv/lib/python3.11/site-packages/litellm/main.py", line 481, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/llms/ollama_chat.py", line 612, in ollama_acompletion
    raise e  # don't use verbose_logger.exception, if exception is raised
    ^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/llms/ollama_chat.py", line 594, in ollama_acompletion
    prompt_tokens = response_json.get("prompt_eval_count", litellm.token_counter(messages=data["messages"]))  # type: ignore
                                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/litellm/utils.py", line 1638, in token_counter
    text += function_arguments
TypeError: can only concatenate str (not "dict") to str


@linsida1 linsida1 added the bug Something isn't working label Nov 28, 2024
@krrishdholakia
Contributor

can you share the request being made to litellm for repro?

@angelnu

angelnu commented Dec 1, 2024

I also got the same issue using ollama_chat/llama3.1 running locally (Meta Llama 3.1 8B Instruct). Interestingly, ollama/llama3.1 did not have the same issue; I expect that is because it does not track consumed tokens.

Adding str(function_arguments) solved the issue.

@linsida1
Author

linsida1 commented Dec 2, 2024

can you share the request being made to litellm for repro?

@krrishdholakia Yes, I made a simple demo to reproduce this error.
Please check it, thank you.

litellm_ollama_chat_test.txt
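For anyone who cannot open the attachment, the demo is roughly along the following lines (this sketch is only illustrative; the model name and message payload may differ from the attached file):

import asyncio
import litellm

async def main():
    # History containing an assistant tool call whose "arguments" field
    # is a dict rather than a JSON-encoded string.
    messages = [
        {"role": "user", "content": "What is the weather in Paris?"},
        {
            "role": "assistant",
            "content": "",
            "tool_calls": [
                {
                    "id": "call_0",
                    "type": "function",
                    "function": {
                        "name": "get_weather",
                        "arguments": {"city": "Paris"},  # dict, not str
                    },
                }
            ],
        },
        {"role": "tool", "tool_call_id": "call_0", "content": "22 C, sunny"},
    ]
    # ollama_chat computes prompt_tokens via litellm.token_counter,
    # which is where the TypeError is raised.
    response = await litellm.acompletion(
        model="ollama_chat/qwen2.5:14b-instruct-q4_K_M",
        messages=messages,
    )
    print(response)

asyncio.run(main())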

@aantn

aantn commented Jan 13, 2025

I think we're hitting this too as a downstream user of litellm - see robusta-dev/holmesgpt#246.
Is there any ETA on getting it fixed?
