Structured output in lieu of JSON Schema with gpt-4o-mini? #9508
-
I am using SK 1.26 and Azure OpenAI services, and I am trying to use the Structured Outputs option (which is recommended by OpenAI) instead of JSON schema, as per https://platform.openai.com/docs/guides/structured-outputs?context=ex2#supported-schemas. But if I use settings like below:

```csharp
var executionSettings = new OpenAIPromptExecutionSettings
{
    ResponseFormat = typeof(Quiz),
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var function = kernel.CreateFunctionFromPrompt(prompt, executionSettings);

var result = await kernel.InvokeAsync<string>(function,
    new KernelArguments
    {
        { PluginConstants.GenerateQuiz.DifficultyLevel, request.DifficultyLevel },
        { PluginConstants.GenerateQuiz.GradeLevel, request.GradeLevel },
        { PluginConstants.GenerateQuiz.Subject, request.Subject },
        { PluginConstants.GenerateQuiz.NumQuestions, request.NumberOfQuestions }
    });
```

SK seems to be internally using the old-style JSON schema mode, which is only supported by gpt-4o-* models. If I use gpt-4o-mini, I get a 400 error. I don't want to use gpt-4o because of its cost implications. Any suggestions to work around this (other than using OpenAI directly, which seems to support this as per https://devblogs.microsoft.com/semantic-kernel/using-json-schema-for-structured-output-in-net-for-openai-models/)?

A related question: I like the idea of a plugin defined by skprompt.txt and config.json, instead of long prompt strings in code. Is kernel.ImportPluginFromPromptDirectory(pluginsPath) the only option to load them? I see that there is no option to pass executionSettings to the InvokeAsync() method or while importing plugins from the directory. The other issue is that resolving the plugins path in Azure Web App / Azure Function environments can be a challenge. Any point of view on this?
Replies: 2 comments
-
Tagging @dmytrostruk for visibility.
-
@ManojG1978 Thanks for your question! In order to use Structured Outputs, you will need to pass `executionSettings` in `KernelArguments` when you invoke a function, since it looks like passing it to the `CreateFunctionFromPrompt` method impacts only the AI service selection logic, but not the execution. Here is updated code that should resolve the issue:

```diff
  var executionSettings = new OpenAIPromptExecutionSettings
  {
      ResponseFormat = typeof(Quiz),
      FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
  };

  var function = kernel.CreateFunctionFromPrompt(prompt, executionSettings);

  var result = await kernel.InvokeAsync<string>(function,
-     new KernelArguments()
+     new KernelArguments(executionSettings)
      {
          { PluginConstants.GenerateQuiz.DifficultyLevel, request.DifficultyLevel },
          { PluginConstants.GenerateQuiz.GradeLevel, request.GradeLevel },
          { PluginConstants.GenerateQuiz.Subject, request.Subject },
          { PluginConstants.GenerateQuiz.NumQuestions, request.NumberOfQuestions }
      });
```
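For reference, the `Quiz` type passed to `ResponseFormat` is just a plain C# class from which SK derives the JSON schema sent to the model. The asker's actual type isn't shown, so the shape below is purely an illustrative assumption:

```csharp
// Hypothetical shape for the Quiz type — the real one isn't shown in the question.
// SK generates a JSON schema from this type and uses it as the response format.
public sealed class Quiz
{
    public string Subject { get; set; }
    public List<QuizQuestion> Questions { get; set; }
}

public sealed class QuizQuestion
{
    public string Question { get; set; }
    public List<string> Choices { get; set; }
    public int CorrectChoiceIndex { get; set; }
}
```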
There is also another option for prompt-directory plugins: you can define the execution settings in the plugin's config.json file. But if you prefer to specify your execution settings in the code, as mentioned above, you can pass them in `KernelArguments` when you invoke the function.
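For the prompt-directory case, a sketch of what execution settings could look like in config.json (the field names follow SK's prompt template config format; the values are placeholders). Note that a `ResponseFormat` bound to a .NET type can't be expressed in JSON, so for structured output you would still pass those settings in code:

```json
{
  "schema": 1,
  "description": "Generates a quiz for a given subject and grade level.",
  "execution_settings": {
    "default": {
      "temperature": 0.2,
      "max_tokens": 1500
    }
  }
}
```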
As of now, we are not aware of issues related to SK execution in Azure Web App / Azure Function environments, but if you face any, please don't hesitate to report them and we will be happy to assist! I hope this information helps. Thanks a lot!
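On path resolution: a common workaround (a sketch based on general .NET hosting behavior, not an official SK recommendation) is to anchor the plugins folder to the application's base directory rather than the current working directory, since the working directory can differ in Azure Web Apps / Functions:

```csharp
// Resolve the plugins folder relative to the deployed app's base directory,
// because the current working directory is unreliable in Azure-hosted environments.
var pluginsPath = Path.Combine(AppContext.BaseDirectory, "Plugins");
var plugin = kernel.ImportPluginFromPromptDirectory(pluginsPath);
```

This assumes the Plugins folder is copied to the output directory on publish (e.g. via `CopyToOutputDirectory` in the project file).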