Semantic Kernel use of Vector Store vs LLM #3164
Closed
dsteketee76 started this conversation in General
Replies: 1 comment
-
I had this issue too, and found that when invoking SK functions from my native plugins I needed to specify the correct parameters explicitly:

```csharp
var variables = new ContextVariables
{
    [TextMemoryPlugin.CollectionParam] = topic.AsSha256Hex(),
    [TextMemoryPlugin.LimitParam]      = "15",   // Tune this up or down
    [TextMemoryPlugin.RelevanceParam]  = "0.60",
    [nameof(topic)]    = topic,
    [nameof(heading)]  = heading,
    [nameof(previous)] = previous
};

var enrichListItem = _kernel.Functions.GetFunction(
    "ListGeneratorPlugin",
    "EnrichListItem"
);

var enriched = await _kernel.RunAsync(variables, enrichListItem);
```

Basically, without explicitly specifying the …
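For context, those `TextMemoryPlugin` parameters are consumed by its `recall` function when a semantic function's prompt template references it. A minimal sketch of that wiring (the plugin name `memory`, the prompt wording, and the `textMemory`/`apiKey` variables are illustrative assumptions; API names follow the pre-1.0 SK versions used in the snippet above and may differ in your version):

```csharp
// Register TextMemoryPlugin so its "recall" function is callable
// from prompt templates under the "memory" plugin name.
var kernel = new KernelBuilder()
    .WithOpenAIChatCompletionService("gpt-3.5-turbo", apiKey)
    .Build();
kernel.ImportFunctions(new TextMemoryPlugin(textMemory), "memory");

// A semantic function whose prompt pulls from the vector store via recall.
// The collection, limit, and relevance it uses come from the
// ContextVariables shown above, not from the prompt text itself.
const string prompt = @"
Answer using ONLY the facts below. If they are empty, say you don't know.
Facts: {{memory.recall $topic}}
Task: write an enriched list item about {{$topic}} under {{$heading}}.";

var enrichListItem = kernel.CreateSemanticFunction(
    prompt, functionName: "EnrichListItem", pluginName: "ListGeneratorPlugin");
```

If `CollectionParam` or `RelevanceParam` is missing from the variables, `recall` can end up searching the wrong collection or filtering everything out, so the facts slot stays empty and the model falls back on its own training data.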
-
I'm trying to better understand how SK incorporates content from a vector store (either volatile or persistent) versus content from the core LLM. I've observed that prompts intended to query content from the embeddings instead return answers from the core LLM's own knowledge, and sometimes actively deny that the embedded content exists. I'm curious what drives this behavior and whether there is any way to address it.
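One way to tell which side an answer is coming from is to query the memory store directly, bypassing the LLM entirely. A rough sketch using the `ISemanticTextMemory.SearchAsync` API from the pre-1.0 SK packages (the `memory` instance, collection name, and query string are placeholders):

```csharp
// Query the vector store directly; no LLM is involved, so whatever comes
// back is exactly what a prompt-level recall could inject as context.
await foreach (var result in memory.SearchAsync(
    collection: "my-collection",   // placeholder collection name
    query: "my question",          // placeholder query
    limit: 5,
    minRelevanceScore: 0.6))
{
    Console.WriteLine($"{result.Relevance:F2}  {result.Metadata.Text}");
}
```

If this returns nothing at the relevance threshold you are using, the LLM has no retrieved context to work from and answers purely from its own weights, which can include claiming the content does not exist.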