Replies: 1 comment
-
This is a great suggestion, thanks for the feedback. Currently we have evaluations for Python (https://learn.microsoft.com/en-us/semantic-kernel/overview/).
-
I'd like to discuss the possibility of adding self-training and evaluation capabilities to Semantic Kernel, similar to what DSPy offers. Specifically:
Is there any consideration on the roadmap for features that would allow:
- Automated prompt optimization through testing and evaluation (a rough sketch of what I mean follows below)
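To make that concrete, here is a minimal, framework-agnostic sketch of the kind of loop I have in mind. The model callable and metric are hypothetical placeholders, not existing Semantic Kernel APIs: the idea is just to score a handful of candidate prompts against a small labelled dataset and keep the winner.

```python
from typing import Callable

# Hypothetical: a "model" is any callable mapping (system prompt, user input)
# to completion text, e.g. a thin wrapper around a chat completion service.
Model = Callable[[str, str], str]

def exact_match(prediction: str, expected: str) -> float:
    # Simple placeholder metric; in practice this could be an LLM-as-judge score.
    return float(expected.strip().lower() in prediction.strip().lower())

def evaluate(model: Model, prompt: str, dataset: list[tuple[str, str]]) -> float:
    # Average metric of one prompt variant over a labelled (input, expected) set.
    return sum(exact_match(model(prompt, x), y) for x, y in dataset) / len(dataset)

def optimize_prompt(model: Model, candidates: list[str],
                    dataset: list[tuple[str, str]]) -> str:
    # Keep the candidate prompt with the best evaluation score.
    return max(candidates, key=lambda p: evaluate(model, p, dataset))
```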
I've noticed the documentation and recent features (like Processes and Agents) have moved toward a code-first approach, away from text-based configuration and prompts. While I understand the benefits of type safety and IDE support, configurable/optimizable prompts could still offer significant advantages.
Additionally, I see an opportunity for cost optimization through fine-tuning support.
This could help teams start with powerful models for exploration and initial development, then efficiently transition to more cost-effective fine-tuned models for production use.
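As a rough illustration of what that workflow could look like (the logged records and file name below are made up, and this is not an existing Semantic Kernel feature): capture the strong model's prompts and completions during development and export them in the chat-style JSONL format that OpenAI's fine-tuning endpoint accepts, so a cheaper model can be trained on them later.

```python
import json

# Hypothetical: records captured while the expensive "exploration" model
# was answering real requests during development.
logged_interactions = [
    {"system": "You are a support assistant.",
     "user": "How do I reset my password?",
     "assistant": "Go to Settings > Security and choose 'Reset password'."},
]

# Export in the chat-format JSONL that OpenAI's fine-tuning API expects,
# so a smaller, cheaper model can be trained on the strong model's outputs.
with open("finetune_train.jsonl", "w", encoding="utf-8") as f:
    for rec in logged_interactions:
        example = {"messages": [
            {"role": "system", "content": rec["system"]},
            {"role": "user", "content": rec["user"]},
            {"role": "assistant", "content": rec["assistant"]},
        ]}
        f.write(json.dumps(example) + "\n")
```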
For context, I'm looking at frameworks like DSPy, which treat prompts as trainable parameters rather than static strings. Would Semantic Kernel consider a hybrid approach that maintains the code-first benefits while adding these optimization capabilities?
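For reference, this is roughly the DSPy pattern I mean (a simplified sketch; exact API details vary between DSPy versions): the prompt sits behind a declarative signature, and an optimizer compiles it against a metric and a small training set instead of the string being hand-tuned.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Sketch only; the model name and training data are placeholders.
dspy.settings.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class QA(dspy.Signature):
    """Answer the question concisely."""
    question = dspy.InputField()
    answer = dspy.OutputField()

program = dspy.ChainOfThought(QA)

def metric(example, prediction, trace=None):
    # Task-specific check used both for evaluation and optimization.
    return example.answer.lower() in prediction.answer.lower()

trainset = [
    dspy.Example(question="What is the capital of France?",
                 answer="Paris").with_inputs("question"),
]

# The optimizer augments the underlying prompt (few-shot demos, instructions)
# based on the metric, rather than me editing the string by hand.
optimized = BootstrapFewShot(metric=metric).compile(program, trainset=trainset)
```

The compiled artifact can still be version-controlled and loaded from code, which is why a hybrid with Semantic Kernel's code-first style seems feasible to me.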
Happy to provide more specific examples or use cases if helpful.
Also happy to contribute experimental PRs if this aligns with the Semantic Kernel vision.