Approach to save structured information for each interaction with user | SK Python Implementation #8409
-
Hello Team, I am working on a use case (SK Python implementation) where information needs to be gathered from the user, e.g. Field1, Field2, Field3, and Field4. Once all fields are received, another function needs to run that takes Field1 through Field4 and makes 2-3 internal API calls before generating the final result for the user. As the inputs come in, the values of the respective fields should be stored somewhere temporarily. If I ask the LLM to keep the information in an ad-hoc variable, it no longer remembers the previous field values after 2-3 further instructions. What is the preferred way to store this information so that it is easy to refer to in later steps? I explored SemanticTextMemory, but that involves creating embeddings, and for my use case there is no need for embeddings. We just need a way to store the structured data temporarily while the session is running. Can you please guide me on the preferred approach?
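For illustration, here is roughly what I am after; a minimal sketch assuming a plain in-memory dict as the session-scoped store (`session_fields`, `save_field`, and `generate_result` are placeholder names I made up, not SK APIs):

```python
# Minimal sketch of the desired flow, using a plain dict as a temporary,
# session-scoped store. All names here are placeholders for illustration.
session_fields: dict[str, str] = {}

def save_field(name: str, value: str) -> None:
    """Remember one field value for the duration of the session."""
    session_fields[name] = value

def all_fields_collected() -> bool:
    """True once all four required fields have been gathered."""
    return all(k in session_fields for k in ("Field1", "Field2", "Field3", "Field4"))

def generate_result() -> str:
    """Placeholder for the step that makes the 2-3 internal API calls."""
    f1, f2, f3, f4 = (session_fields[k] for k in ("Field1", "Field2", "Field3", "Field4"))
    return f"result computed from {f1}, {f2}, {f3}, {f4}"
```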
-
I did a super quick and dirty implementation of this:

```python
import asyncio
from typing import Annotated

from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
from semantic_kernel.connectors.ai.prompt_execution_settings import PromptExecutionSettings
from semantic_kernel.contents.chat_history import ChatHistory
from semantic_kernel.functions import kernel_function
from semantic_kernel.kernel import Kernel

kernel = Kernel()

system_prompt = """
You are a chatbot that guides the user through gathering information to fill out a form.
The fields are name, email, phone, and address.
When all of those are gathered, call the appropriate tool to fill out the form.
Keep asking questions until you have all the info.
"""

history = ChatHistory(system_prompt=system_prompt)

kernel.add_service(OpenAIChatCompletion(service_id="chat"))
kernel.add_function(
    plugin_name="chat",
    function_name="chat",
    prompt="{{$chat_history}}{{$user_input}}",
    prompt_execution_settings=PromptExecutionSettings(
        service_id="chat",
        function_choice_behavior=FunctionChoiceBehavior.Auto(auto_invoke=True),
    ),
)


@kernel_function
def perform_api_call(
    name: Annotated[str, "Name of the user"],
    email: Annotated[str, "Email of the user"],
    phone: Annotated[str, "Phone number of the user"],
    address: Annotated[str, "Address of the user"],
) -> Annotated[str, "Status message"]:
    """Do the call to the form, make sure all values are set and supplied by the user."""
    print("Function called with:")
    print(f"Name: {name}")
    print(f"Email: {email}")
    print(f"Phone: {phone}")
    print(f"Address: {address}")
    return "Success"


kernel.add_function(plugin_name="process", function=perform_api_call)


async def main():
    user_input = "Hi, I need to fill out a form. Can you help me?"
    while user_input != "exit":
        response = await kernel.invoke(
            plugin_name="chat", function_name="chat", user_input=user_input, chat_history=history
        )
        print(response)
        # Keep the chat history up to date so the model "remembers" the gathered fields.
        history.add_user_message(user_input)
        history.add_message(response.value[0])
        user_input = input("User:> ")
    print("\n\nExiting chat...")


if __name__ == "__main__":
    asyncio.run(main())
```

So this uses auto-invoke function calling and regular models (tested with gpt-4o-mini). Hope this already helps!
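To connect this back to the original question: in this pattern the chat history itself is the temporary store. The model re-reads the gathered values from the history on every turn and passes all four of them as arguments when it finally invokes the tool, so no separate variable or memory store is needed. The 2-3 internal API calls can then live inside the kernel function. A minimal sketch of that, assuming `aiohttp` and entirely made-up endpoint URLs:

```python
import aiohttp
from typing import Annotated

from semantic_kernel.functions import kernel_function


@kernel_function
async def perform_api_call(
    name: Annotated[str, "Name of the user"],
    email: Annotated[str, "Email of the user"],
    phone: Annotated[str, "Phone number of the user"],
    address: Annotated[str, "Address of the user"],
) -> Annotated[str, "Status message"]:
    """Chain the internal API calls once all fields are available.

    The URLs below are placeholders; substitute your real services.
    """
    async with aiohttp.ClientSession() as session:
        # First internal call: validate the address (hypothetical endpoint).
        async with session.post("https://example.internal/validate", json={"address": address}) as resp:
            validation = await resp.json()
        # Second internal call: create the record from all gathered fields (hypothetical endpoint).
        payload = {"name": name, "email": email, "phone": phone, "address": validation}
        async with session.post("https://example.internal/records", json=payload) as resp:
            record = await resp.json()
    return f"Created record {record.get('id', '?')}"
```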
-
To add to that, defining a plugin using a class also provides a way to do something similar: https://learn.microsoft.com/en-us/semantic-kernel/concepts/plugins/adding-native-plugins?pivots=programming-language-python#defining-a-plugin-using-a-class
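For the "store structured data for the session" part of the question, a stateful class-based plugin works well: the instance attributes live as long as the plugin instance (i.e. the session), with no embeddings involved. A minimal sketch along the lines of that doc page (the class and field names here are just examples, not from the docs):

```python
from typing import Annotated

from semantic_kernel.functions import kernel_function


class FormPlugin:
    """Holds the gathered fields in plain instance state for the session."""

    def __init__(self) -> None:
        self.fields: dict[str, str] = {}

    @kernel_function(description="Save one field value gathered from the user.")
    def save_field(
        self,
        field: Annotated[str, "Field name, e.g. name, email, phone, address"],
        value: Annotated[str, "Value supplied by the user"],
    ) -> Annotated[str, "Confirmation message"]:
        self.fields[field] = value
        return f"Saved {field}."

    @kernel_function(description="Check whether all required fields have been gathered.")
    def missing_fields(self) -> Annotated[str, "Comma-separated missing fields, or 'none'"]:
        required = ("name", "email", "phone", "address")
        missing = [f for f in required if f not in self.fields]
        return ", ".join(missing) or "none"
```

You would register an instance once per session, e.g. `kernel.add_plugin(FormPlugin(), plugin_name="form")`, so the stored fields persist across turns without any embedding or vector store.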