Need help setting headers to azure open ai #1557

Open

chandan84 opened this issue Feb 20, 2025 · 4 comments
chandan84 commented Feb 20, 2025

Git provider (optional)

github

System Info (optional)

## Model used:
azure gpt-4o-mini
## Deployment Type:
cli, github workflow action

Issue details

Excerpt from client code:

```python
settings = get_settings()
settings.set("CONFIG.git_provider", provider)
settings.set("github.user_token", user_token)
settings.set("azure.azure_ad_token", azure_token)
settings.set("openai.api_type", openai_api_type)
settings.set("openai.api_version", openai_api_version)
settings.set("openai.api_base", openai_api_base)
settings.set("openai.deployment_id", openai_deployment_id)
settings.set("openai.default_headers", '{"projectId": <projectid-val>}')
```

Exception:

```
litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - 'projectId' is missing in the header
```

How do I pass values to default_headers while using Azure OpenAI? Can the same be done by setting an environment variable as well?
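One thing worth noting here (a minimal sketch, not a confirmed diagnosis of the issue): litellm's `completion()` accepts `extra_headers` as a Python dict of header names to values, so a JSON *string* like the one set above would not be forwarded as HTTP headers. If the configured value arrives as a string, it would first need to be parsed. The `"my-project-id"` value below is a placeholder standing in for the elided `<projectid-val>`:

```python
import json

# The settings snippet above stores the headers as a JSON string:
raw = '{"projectId": "my-project-id"}'  # "my-project-id" is a placeholder

# A dict of header-name -> header-value is what HTTP-header parameters
# such as litellm's extra_headers expect, so parse the string first.
extra_headers = json.loads(raw)

print(extra_headers)  # {'projectId': 'my-project-id'}
```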

@chandan84 chandan84 added the general label Feb 20, 2025
mrT23 (Collaborator) commented Feb 20, 2025

Don't double-post issues.

I don't know what 'default_headers' is. Why do you need those?
https://docs.litellm.ai/docs/providers/azure

First follow what is recommended here:
https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/#azure

Does it work for you? If not, what is the error message?

chandan84 (Author) commented Feb 20, 2025

Thanks for the reply.

Following is the PR client that we want to get working:

```python
from pr_agent import cli
from pr_agent.config_loader import get_settings
...

def main():
    # Fill in the following values
    provider = "github"
    user_token = ".."
    openai_key = ".."
    openai_api_type = "azure"
    openai_api_version = ".."
    openai_api_base = ".."
    openai_deployment_id = ".."
    tenant_id = ".."
    client_id = ".."
    client_secret = ".."

    azure_token = get_azure_token()

    pr_url = ".."
    pr_commands = ["/review"]  # Command to run

    # Setting the configurations
    settings = get_settings()
    settings.set("config.git_provider", provider)
    settings.set("github.user_token", user_token)
    settings.set("azure.azure_ad_token", azure_token)
    settings.set("azure.tenant_id", tenant_id)
    settings.set("azure.client_id", client_id)
    settings.set("azure.client_secret", client_secret)
    settings.set("openai.api_type", openai_api_type)
    settings.set("openai.api_version", openai_api_version)
    settings.set("openai.api_base", openai_api_base)
    settings.set("openai.deployment_id", openai_deployment_id)

    # Run the commands. Feedback will appear in GitHub PR comments
    for command in pr_commands:
        cli.run_command(pr_url, command)

if __name__ == '__main__':
    main()
```

I am trying to update the CLI so that we can pass a custom header called projectId for pr-agent to use while calling the Azure OpenAI model. The doc you shared (https://docs.litellm.ai/docs/providers/azure) mentions that litellm takes a parameter called extra_headers; using that, I was able to pass the required header and invoke the service without issues.

litellm client:

```python
from litellm import completion
# ..

def invokepragentllm():
    deployment_name = ".."
    azure_token = get_azure_token()
    api_version = ".."
    azure_endpoint = ".."

    prompt = ".."

    response = completion(
        model="..",
        api_base=azure_endpoint,
        api_version=api_version,
        deployment_id=deployment_name,
        api_key=azure_token,
        messages=[{"role": "user", "content": prompt}],
        timeout=10.0,
        max_retries=2,
        temperature=0.2,
        max_tokens=1000,
        extra_headers={
            "projectId": "<projectId>"
        }
    )
    print(prompt + response.choices[0].message['content'])
```

Can I set something via pr_agent.config_loader.get_settings to pass the required header as well?

The exception I get without it is:

```
File "/codepath/pr-env/Lib/site-packages/litellm/llms/azure.py", line 477, in make_azure_openai_chat_completion_request
    raw_response = await azure_client.chat.completions.with_raw_response.create(
File "/codepath/pr-env/Lib/site-packages/openai/_legacy_response.py", line 381, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
File "/codepath/pr-env/Lib/site-packages/openai/resources/chat/completions/completions.py", line 1927, in create
    return await self._post(
File "/codepath/pr-env/Lib/site-packages/openai/_base_client.py", line 1856, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "/codepath/pr-env/Lib/site-packages/openai/_base_client.py", line 1550, in request
    return await self._request(
File "/codepath/pr-env/Lib/site-packages/openai/_base_client.py", line 1651, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'statusCode': 400, 'message': "'projectId' is missing in the header"}
```

The error message above comes from a custom wrapper around the Azure OpenAI model service, but the point is that we need to pass custom headers while calling the API.

@chandan84 chandan84 changed the title Need help setting default_headers to azure open ai Need help setting headers to azure open ai Feb 21, 2025
mrT23 (Collaborator) commented Feb 21, 2025

You are welcome to open a PR to:
https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/algo/ai_handlers/litellm_ai_handler.py

and add support for this option there (and also update the relevant documentation).
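A possible shape for such a change (a hedged sketch, not the contents of the actual PR; the helper name, the accepted value formats, and the `"my-project-id"` placeholder are all hypothetical): read an extra-headers value from configuration, parse it if it is a JSON string, and merge it into the keyword arguments handed to litellm's completion call:

```python
import json

def add_extra_headers(kwargs: dict, setting_value) -> dict:
    """Merge a configured extra-headers value into litellm call kwargs.

    `setting_value` may be None (no-op), a dict, or a JSON string such
    as '{"projectId": "my-project-id"}' (hypothetical setting format).
    """
    if setting_value is None:
        return kwargs
    if isinstance(setting_value, str):
        # Raises ValueError on malformed JSON, surfacing config errors early
        setting_value = json.loads(setting_value)
    kwargs["extra_headers"] = setting_value
    return kwargs

# Usage sketch: kwargs that would be passed on to litellm's completion call
kwargs = {"model": "azure/gpt-4o-mini", "messages": []}
kwargs = add_extra_headers(kwargs, '{"projectId": "my-project-id"}')
print(kwargs["extra_headers"])  # {'projectId': 'my-project-id'}
```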

chandan84 (Author) commented
Thank you for your reply. I added #1564; once the PR is accepted, would the docker image available at codiumai/pr-agent:github_action reflect the latest changes?

2 participants