Include x-coding-assistant=aider header in litellm calls
Proxies that inspect traffic between the development environment and an
LLM might be interested in whether it's aider or another tool calling in
order to be able to inspect and/or modify the payload.

The most common way of solving this would be to add a `user-agent` header.
However, litellm, which aider uses, calls into the OpenAI libraries directly
when making the request, and the only way to set a custom user agent there
seems to be to pass a custom `http_client`. That seemed like something that
might have unforeseen consequences (timeouts? retries?). For other LLMs,
litellm seems to use its own httpx wrapper, which would presumably be easier
to customize, but I have not tried.

To make things easier, let's just add an aider-specific header. I put the
string aider followed by the version there, but the value - and indeed the
key - of the header are not that interesting; what I would like is simply to
be able to tell apart aider calls.
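As a sketch of the proxy side this enables: a traffic-inspecting proxy could
key off the new header with a small predicate. The helper below is
hypothetical and not part of this commit; the header value shown is just an
example of the `aider-<version>` shape the commit produces.

```python
# Hypothetical proxy-side helper (not part of this commit): decide whether a
# request came from aider by inspecting the x-coding-assistant header.
def is_aider_request(headers: dict) -> bool:
    value = headers.get("x-coding-assistant", "")
    # The commit sets the header to "aider-" followed by the version string.
    return value.startswith("aider-")

print(is_aider_request({"x-coding-assistant": "aider-0.73.0"}))  # True
print(is_aider_request({"user-agent": "python-httpx"}))          # False
```

A proxy matching on this predicate can then inspect or rewrite the payload
only for aider traffic and pass everything else through untouched.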
jhrozek committed Jan 31, 2025
1 parent 778e54e commit 643454d
Showing 3 changed files with 18 additions and 0 deletions.
1 change: 1 addition & 0 deletions aider/coders/base_coder.py
@@ -1205,6 +1205,7 @@ def warm_cache_worker():

             kwargs = dict(self.main_model.extra_params) or dict()
             kwargs["max_tokens"] = 1
+            kwargs["headers"] = {"x-coding-assistant": f"aider-{__version__}"}

             try:
                 completion = litellm.completion(
2 changes: 2 additions & 0 deletions aider/sendchat.py
@@ -3,6 +3,7 @@
 import os
 import time

+from aider._version import __version__
 from aider.dump import dump  # noqa: F401
 from aider.exceptions import LiteLLMExceptions
 from aider.llm import litellm
@@ -99,6 +100,7 @@ def send_completion(
         model=model_name,
         messages=messages,
         stream=stream,
+        headers={"x-coding-assistant": f"aider-{__version__}"},
     )
     if temperature is not None:
         kwargs["temperature"] = temperature
15 changes: 15 additions & 0 deletions tests/basic/test_sendchat.py
@@ -68,6 +68,21 @@ def test_send_completion_with_functions(self, mock_completion):
         assert "tools" in called_kwargs
         assert called_kwargs["tools"][0]["function"] == mock_function

+    @patch("litellm.completion")
+    def test_send_completion_aider_specific_header(self, mock_completion):
+        _, _ = send_completion(
+            self.mock_model,
+            self.mock_messages,
+            functions=None,
+            stream=False,
+        )
+
+        # Verify the aider specific header was sent
+        called_kwargs = mock_completion.call_args.kwargs
+        assert "headers" in called_kwargs
+        assert "x-coding-assistant" in called_kwargs["headers"]
+        assert "aider" in called_kwargs["headers"]["x-coding-assistant"]
+
     @patch("litellm.completion")
     def test_simple_send_attribute_error(self, mock_completion):
         # Setup mock to raise AttributeError
