
proposal: LLM support in Haystack 2.0 #5540

Merged: 9 commits from llm-support-proposal into main on Aug 28, 2023
Conversation

@ZanSara ZanSara (Contributor) commented Aug 10, 2023

Related Issues

@ZanSara ZanSara requested review from a team as code owners August 10, 2023 13:05
@ZanSara ZanSara requested review from anakin87 and removed request for a team August 10, 2023 13:05
@mathislucka (Member) commented:

Thanks @ZanSara, this is huge!

I like that the new LLM capabilities should provide a clean abstraction.

I've added some comments to make sure that the new LLM integration provides the capabilities that are needed in practice.

@sjrl sjrl self-requested a review August 11, 2023 08:29

@ZanSara ZanSara (Contributor, Author) commented Aug 11, 2023

Note how the component takes only a list of prompts and LLM parameters (no variables or templates) and returns only strings. This is because input rendering and output parsing are delegated to separate components, whose descriptions follow.

Note: whether LLM components accept multiple prompts or a single one depends only on whether we want the LLM to support batching of prompts. It is therefore an implementation decision that will be evaluated once we know the internals of each component. We strive to keep the interfaces as similar as possible to ease switching between the various LLMs, but we won't force identical interfaces on them where that doesn't make sense for their internal implementation.
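
To make the shape of that interface concrete, here is a minimal sketch of what such a component could look like. The class name `LLMGenerator`, the `run` signature, and the `replies` output key are illustrative placeholders, not the proposed final API:

```python
from typing import Any, Dict, List, Optional


class LLMGenerator:
    """Illustrative LLM component: it receives ready-made prompts and
    generation parameters, and returns plain strings. Prompt rendering
    and output parsing are handled by separate components."""

    def __init__(self, model_name: str, api_key: Optional[str] = None):
        self.model_name = model_name
        self.api_key = api_key

    def run(self, prompts: List[str], **generation_kwargs: Any) -> Dict[str, List[str]]:
        # One reply per prompt; generation_kwargs (temperature, max_tokens, ...)
        # are forwarded untouched to the underlying model API.
        replies = [self._call_model(prompt, **generation_kwargs) for prompt in prompts]
        return {"replies": replies}

    def _call_model(self, prompt: str, **generation_kwargs: Any) -> str:
        # Placeholder for the provider-specific API call.
        raise NotImplementedError
```

Whether `run` takes `prompts: List[str]` or a single `prompt: str` is exactly the batching question raised in the note above.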

@anakin87 (Member) commented:

Just a comment to remind us of this...

For both completion models and chat completion models, the prompt must at some point be adapted from plain text to text containing the model's special tokens (e.g., [INST] for llama2).

Reference: EasyLLM
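
For illustration, a simplified sketch of that adaptation step for llama2-style chat models; the real template (see the EasyLLM reference) also handles BOS/EOS tokens and multi-turn conversations:

```python
from typing import Dict, List


def to_llama2_prompt(messages: List[Dict[str, str]]) -> str:
    """Fold chat messages into a single llama2-style prompt string
    using the [INST] / <<SYS>> special tokens (simplified)."""
    system = ""
    parts = []
    for message in messages:
        if message["role"] == "system":
            # The system prompt is embedded into the first user turn.
            system = f"<<SYS>>\n{message['content']}\n<</SYS>>\n\n"
        elif message["role"] == "user":
            parts.append(f"[INST] {system}{message['content']} [/INST]")
            system = ""
        elif message["role"] == "assistant":
            parts.append(f" {message['content']} ")
    return "".join(parts)


print(to_llama2_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Haystack?"},
]))
# [INST] <<SYS>>
# You are a helpful assistant.
# <</SYS>>
#
# What is Haystack? [/INST]
```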

@ZanSara ZanSara merged commit 4dda25d into main Aug 28, 2023
@ZanSara ZanSara deleted the llm-support-proposal branch August 28, 2023 08:33
@ZanSara ZanSara added the 2.x Related to Haystack v2.0 label Sep 1, 2023
Labels: 2.x (Related to Haystack v2.0), proposal

Successfully merging this pull request may close these issues.

LLM support (2.x)