Support for Azure OpenAI with AI Assistant #3117
Comments
Is this still true? I thought you could just pass the URL directly. If not, I'd prefer not to update the config if possible (since the keys don't make sense in the non-Azure case), and instead parse the URL.
Unfortunately, I tried passing that URL directly to the OpenAI client, but it just doesn't work (I also tried variants leaving off the …).
Parsing the Azure params from the URL is definitely an option, and it makes the config cleaner, though I have no idea how stable the URL format is. I suppose we can assume it's stable for now? Then we can just infer that any URL on an Azure host should use the Azure client.
That approach (parsing the URL) is preferable to me.
Made a pull request that implements the URL parsing approach. It works for me. Feel free to review and modify as you see fit. And thanks for building marimo!
Description
Currently, the AI assistant works with any OpenAI API-compatible endpoint. However, when OpenAI models are deployed on Azure, there is a slightly different initialization strategy. See here:
https://github.com/openai/openai-python?tab=readme-ov-file#microsoft-azure-openai
The user needs to provide an API version number, and then an `AzureOpenAI` client is created instead of a regular `OpenAI` client (the URL ultimately looks like
https://[XXXX].openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-10-01-preview
). Once the client is initialized, the API calls are the same, so I believe the rest should just work.

Personally, at my company we use an Azure deployment instead of OpenAI's own API for privacy/compliance reasons, so it would be nice to support this.
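The "infer from the URL" idea discussed in the comments could be sketched roughly as below. This is only a sketch using the stdlib: the function name, return shape, and the assumption that the URL always follows the `*.openai.azure.com/openai/deployments/<deployment>/...` pattern shown above are mine, not marimo's actual implementation.

```python
from urllib.parse import urlparse, parse_qs

def parse_azure_openai_url(url: str):
    """Extract Azure OpenAI parameters from a deployment URL.

    Returns None if the URL does not look like an Azure OpenAI endpoint,
    in which case the regular OpenAI client should be used.
    """
    parsed = urlparse(url)
    if not parsed.hostname or not parsed.hostname.endswith(".openai.azure.com"):
        return None  # not an Azure endpoint
    # Expect a path like /openai/deployments/<deployment>/chat/completions
    parts = parsed.path.strip("/").split("/")
    deployment = None
    if len(parts) >= 3 and parts[0] == "openai" and parts[1] == "deployments":
        deployment = parts[2]
    # The api-version query parameter is what AzureOpenAI needs at init time
    api_version = parse_qs(parsed.query).get("api-version", [None])[0]
    return {
        "azure_endpoint": f"{parsed.scheme}://{parsed.hostname}",
        "deployment": deployment,
        "api_version": api_version,
    }

params = parse_azure_openai_url(
    "https://myres.openai.azure.com/openai/deployments/gpt-4o/chat/completions"
    "?api-version=2024-10-01-preview"
)
# params["deployment"] == "gpt-4o"
# params["api_version"] == "2024-10-01-preview"
```

The extracted values map onto the `azure_endpoint` and `api_version` arguments that the `AzureOpenAI` client constructor accepts, so no extra config keys would be needed.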
Suggested solution
We can use the `marimo.toml` file to specify which flavor of the API to use, say using an `api_type` variable. Then, when the client is initialized (it looks like this happens in the `llm.py` file), we can check the `api_type` in the config and use either the `AzureOpenAI` class or the regular `OpenAI` class accordingly.

Alternative
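Such a config change might look like the fragment below. The section name and every key besides the ones marimo already documents are assumptions here; `api_type` is the hypothetical variable suggested above, not an existing setting.

```toml
# Hypothetical marimo.toml fragment -- key names are illustrative only
[ai.open_ai]
api_key = "..."
# "openai" (default) or "azure"; checked when the client is created
api_type = "azure"
# Only meaningful when api_type = "azure"
base_url = "https://[XXXX].openai.azure.com"
api_version = "2024-10-01-preview"
model = "gpt-4o"
```

One design note: keeping the Azure-only keys (`api_version`) in the same section as the regular OpenAI keys is what prompts the objection in the comments above, since those keys are meaningless in the non-Azure case.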
No response
Additional context
No response