Need help setting headers to Azure OpenAI #1557
Don't double-post issues. I don't know what 'default_headers' is; why do you need those? First, follow what is recommended here: does it work for you? If not, what is the error message?
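For context on what "default_headers" refers to here: it is most likely the parameter of that name on the OpenAI Python SDK client, which attaches custom headers to every request the client makes. A minimal sketch (all values are placeholders):

```python
from openai import AzureOpenAI

# default_headers attaches these headers to every request made by this
# client; the projectId header and all ".." values are placeholders.
client = AzureOpenAI(
    api_key="..",
    api_version="..",
    azure_endpoint="..",
    default_headers={"projectId": "<projectId>"},
)
```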
Thanks for the reply. The following is the PR-Agent client that we want to get working:

```python
from pr_agent import cli
from pr_agent.config_loader import get_settings
...

def main():
    # Fill in the following values
    provider = "github"
    user_token = ".."
    openai_key = ".."
    openai_api_type = "azure"
    openai_api_version = ".."
    openai_api_base = ".."
    openai_deployment_id = ".."
    tenant_id = ".."        # defined here so the snippet is self-contained
    client_id = ".."
    client_secret = ".."
    azure_token = get_azure_token()  # our helper, not shown
    pr_url = ".."
    pr_commands = ["/review"]  # commands to run

    # Setting the configurations
    settings = get_settings()
    settings.set("config.git_provider", provider)
    settings.set("github.user_token", user_token)
    settings.set("azure.azure_ad_token", azure_token)
    settings.set("azure.tenant_id", tenant_id)
    settings.set("azure.client_id", client_id)
    settings.set("azure.client_secret", client_secret)
    settings.set("openai.api_type", openai_api_type)
    settings.set("openai.api_version", openai_api_version)
    settings.set("openai.api_base", openai_api_base)
    settings.set("openai.deployment_id", openai_deployment_id)

    # Run the commands. Feedback will appear in GitHub PR comments.
    for command in pr_commands:
        cli.run_command(pr_url, command)

if __name__ == '__main__':
    main()
```

I am trying to update the CLI so that we can pass a custom header called projectId for PR-Agent to use while calling the Azure OpenAI model. The doc you shared (https://docs.litellm.ai/docs/providers/azure) mentions that LiteLLM takes a parameter called extra_headers, and using that I was able to pass the required header and invoke the service without issues.

LiteLLM client:

```python
from litellm import completion
...

def invokepragentllm():
    deployment_name = ".."
    azure_token = get_azure_token()
    api_version = ".."
    azure_endpoint = ".."
    prompt = ".."
    response = completion(
        model="..",
        api_base=azure_endpoint,
        api_version=api_version,
        deployment_id=deployment_name,
        api_key=azure_token,
        messages=[{"role": "user", "content": prompt}],
        timeout=10.0,
        max_retries=2,
        temperature=0.2,
        max_tokens=1000,
        extra_headers={
            "projectId": "<projectId>"
        },
    )
    print(prompt + response.choices[0].message['content'])
```

Can I set something similar through `pr_agent.config_loader.get_settings`? The exception I get without the header is:
The error message above is caused by a custom wrapper around the Azure OpenAI model service, but the point is that we would need to pass custom headers while calling the API.
You are welcome to open a PR to: and add support for this option there (and also update the relevant documentation).
Thank you for your reply. I added #1564; once the PR is accepted,
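Once such support lands, one might expect the header to be settable through configuration. The sketch below is hypothetical: the setting key `litellm.extra_headers` is an assumption for illustration, not something confirmed by this thread or by #1564's actual implementation.

```python
from pr_agent.config_loader import get_settings

# Hypothetical: assumes a pass-through setting for extra headers exists.
# The key name "litellm.extra_headers" is illustrative only.
settings = get_settings()
settings.set("litellm.extra_headers", '{"projectId": "<projectId>"}')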
Git provider (optional)
github
System Info (optional)
## Model used
azure gpt-4o-mini
## Deployment Type
cli, github workflow action
Issue details
excerpt from client code
exception -
How do I pass values to default_headers while using Azure OpenAI? Can the same be done by setting an environment variable as well?
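On the environment-variable part of the question, a hedged note: `get_settings()` in pr_agent is backed by Dynaconf, which can pick up overrides from environment variables. pr_agent's docs show variables in the `SECTION.KEY` form (for example `OPENAI.KEY`), so something like the sketch below may work, though the exact variable name should be verified against pr_agent's config loader:

```python
import os

# Hedged illustration: set the override before settings are loaded.
# The SECTION.KEY naming (e.g. "OPENAI.API_TYPE") mirrors the style used in
# pr_agent's docs for variables like OPENAI.KEY; verify before relying on it.
os.environ["OPENAI.API_TYPE"] = "azure"

from pr_agent.config_loader import get_settings

print(get_settings().get("openai.api_type"))
```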