Prerelease 0.2.0rc1 no longer returns log probs with llama.cpp #1064
Hi @oj-sec, thanks for bringing this up, and our apologies if it's impacting your workflow. The new parser that we're using in … Thank you for submitting the issue!
@hudson-ai Since 0.2.0 has been released and the visualization seems to display probabilities, can we expect this issue to be fixed (soon)?
Hi @woitee, sorry for the late reply. We currently display probabilities for output tokens, but we don't yet have a satisfactory solution for mapping these back to probabilities for a given capture (a string that may or may not align to token boundaries). Getting this working again is on the roadmap, but I don't yet have a timeline for you. Would you mind giving me an idea of how you use this feature in practice? More example usages may help motivate a solution :)
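To make the boundary problem concrete, here is a small, self-contained sketch (plain Python with made-up token data, not guidance's actual internals): a capture's log prob is only directly recoverable when some contiguous run of tokens spells it out exactly; otherwise the capture starts or ends mid-token and a simple sum is not well-defined.

```python
def capture_log_prob(tokens, logprobs, capture):
    """Sum per-token log probs for a contiguous run of tokens whose
    concatenation equals `capture`. Returns None when the capture does
    not align to token boundaries (the hard case described above).
    `tokens` and `logprobs` are illustrative inputs, not guidance's API."""
    for start in range(len(tokens)):
        text, total = "", 0.0
        for i in range(start, len(tokens)):
            text += tokens[i]
            total += logprobs[i]
            if text == capture:
                return total
            if not capture.startswith(text):
                break  # this run can no longer spell out the capture
    return None


# Made-up tokenization of "The capital" with made-up log probs.
tokens = ["The", " cap", "ital"]
logprobs = [-0.1, -0.5, -0.2]
print(capture_log_prob(tokens, logprobs, " capital"))  # aligned: -0.7
print(capture_log_prob(tokens, logprobs, "capit"))     # misaligned: None
```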
@hudson-ai Thanks for the reply! The main thing I need this for is categorization: I use guidance for classifying e-mails and free-text forms into categories, and I used to provide users with information on how "certain" the AI is in these, based on probabilities. If it worked only for … Can I access the token probabilities? I might be able to arrange my systems so that a token definitely ends before the select clause and different tokens are at the starts of options, and create some workarounds :)
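The first-token workaround above can be sketched in plain Python. Assuming per-token log probs become accessible again and every category label starts with a distinct token (the numbers below are made up, not model output), the classifier's "certainty" is just the normalized probability mass on the chosen first token:

```python
import math

def certainty_from_first_tokens(first_token_logprobs, chosen):
    """Normalize probabilities over the distinct first tokens of each
    category and report how much mass landed on the chosen one.
    Assumes every category begins with a different token, so the first
    token alone identifies the category."""
    probs = {tok: math.exp(lp) for tok, lp in first_token_logprobs.items()}
    total = sum(probs.values())
    return probs[chosen] / total


# Made-up log probs for the first token of each category label.
logprobs = {"spam": -0.2, "billing": -2.5, "support": -3.1}
print(certainty_from_first_tokens(logprobs, "spam"))  # roughly 0.87
```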
The bug
Updating from guidance==0.1.16 to prerelease guidance==0.2.0rc1 causes model.log_prob() to return 0 rather than the true log probs for a generation when using the llama.cpp backend. I have tested GGUF quants of models based on Llama, Mistral and Gemma and observed this behaviour to be model-agnostic.
To Reproduce
Reproduction Colab notebook here - involves uninstalling and reinstalling Guidance; the change in output between installs is shown in the notebook.
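For reference, a minimal reproduction along these lines would look like the sketch below. The model path, prompt, and the `logprob_looks_zeroed` helper are illustrative assumptions, not taken from the notebook; running it requires guidance plus a local GGUF model.

```python
import math

def logprob_looks_zeroed(logprob: float, tol: float = 1e-9) -> bool:
    # A log prob of (almost) exactly 0 means probability 1.0 for the
    # whole generation, which is the degenerate value reported here.
    return abs(logprob) < tol

def main() -> None:
    # Requires `guidance` and a local GGUF model; any of the quants
    # mentioned above (Llama, Mistral, Gemma) should reproduce this.
    from guidance import gen, models

    lm = models.LlamaCpp("path/to/model.gguf")  # placeholder path
    lm += "The capital of France is " + gen("answer", max_tokens=3)
    lp = lm.log_prob("answer")
    # guidance==0.1.16: a negative float (the true log prob)
    # guidance==0.2.0rc1: 0
    print(lp, math.exp(lp), logprob_looks_zeroed(lp))

if __name__ == "__main__":
    main()
```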
System info (please complete the following information):
- guidance version (`guidance.__version__`): 0.2.0rc1 (both from PyPI and from https://github.com/microsoft/guidance.git@77fc3999e1545c10f17e6da1b6cbd1feeaa1ca1a)

If I can provide any further info please let me know. Huge thanks for this amazing library.