
Multiple backends #2

Open
alilee opened this issue Jan 8, 2025 · 2 comments

alilee commented Jan 8, 2025

This is awesome - congrats.

Architecturally, it would be great if you could specify one or more backends and use async to process responses as they come back from any of them. Is that how you would envisage doing it, or should each LLM chat live in its own discrete async context? Either way, a parallel pattern would be very useful for evals.
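To make the pattern concrete, here is a minimal sketch of the fan-out idea in Python with asyncio. The backend names and the `query_backend` coroutine are hypothetical stand-ins for real provider clients; in practice each one would issue an HTTP request to its backend.

```python
import asyncio

# Hypothetical stand-in for a real backend client (OpenAI, Anthropic, etc.).
async def query_backend(name: str, prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"{name}: reply to {prompt!r}"

async def fan_out(prompt: str, backends: list[str]) -> list[str]:
    # Launch one coroutine per backend and await them concurrently;
    # gather() preserves the order of `backends` in the results.
    tasks = [query_backend(b, prompt) for b in backends]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    replies = asyncio.run(fan_out("hello", ["openai", "anthropic", "ollama"]))
    for r in replies:
        print(r)
```

If you instead want to handle each response as soon as any backend returns, `asyncio.as_completed` (or its Rust analogue, a `FuturesUnordered` stream) replaces `gather` in the same structure.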

Also, do you think it would be possible to make this usable from wasm?

graniet self-assigned this Jan 8, 2025
graniet (Owner) commented Jan 8, 2025

Thank you for opening this issue.

This branch adds an evaluator for different LLM models (does this match your request?): #3

As for the async part, a feature will be coming soon!

alilee (Author) commented Jan 8, 2025

Very nice. It would be easy to hit the LLMs in parallel with async. I'm going to play with it for a while. Thanks.

graniet added the enhancement (New feature or request) label Jan 9, 2025