See the proposal: #5540
OpenAI chat models like ChatGPT and GPT4 can be queried through an identical API, therefore we will have a single component for both GPT4 and ChatGPT. We can either call it `OpenAIGenerator` and make it also support GPT3 models, or call it `GPT4Generator` and add an alias for it called `ChatGPTGenerator`.

Draft API for `GPT4Generator`:
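The draft code block itself is not reproduced in this excerpt. Below is a minimal sketch of what such a component could look like, assuming the legacy `openai` Python client and illustrative names (`model_name`, `streaming_callback`, the `run` return shape) that may differ from the actual proposal:

```python
from typing import Any, Callable, Dict, List, Optional

import openai


class GPT4Generator:
    """Queries an OpenAI chat model and returns the generated replies as plain strings."""

    def __init__(
        self,
        api_key: str,
        model_name: str = "gpt-4",
        streaming_callback: Optional[Callable[[str], None]] = None,
        **model_parameters: Any,
    ):
        self.api_key = api_key
        self.model_name = model_name
        # If provided, this callback is invoked with every new chunk of a streamed response.
        self.streaming_callback = streaming_callback
        self.model_parameters = model_parameters

    def run(self, prompts: List[str]) -> Dict[str, List[str]]:
        """Send each prompt to the model and collect the replies as strings."""
        openai.api_key = self.api_key
        replies = []
        for prompt in prompts:
            if self.streaming_callback:
                # Stream the response and forward each chunk to the callback as it arrives.
                chunks = []
                for chunk in openai.ChatCompletion.create(
                    model=self.model_name,
                    messages=[{"role": "user", "content": prompt}],
                    stream=True,
                    **self.model_parameters,
                ):
                    delta = chunk["choices"][0]["delta"].get("content", "")
                    if delta:
                        self.streaming_callback(delta)
                        chunks.append(delta)
                replies.append("".join(chunks))
            else:
                response = openai.ChatCompletion.create(
                    model=self.model_name,
                    messages=[{"role": "user", "content": prompt}],
                    **self.model_parameters,
                )
                replies.append(response["choices"][0]["message"]["content"])
        return {"replies": replies}
```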
Note how the component takes a list of prompts and LLM parameters only, but no variables or templates, and returns only strings. This is because input rendering and output parsing are delegated to `PromptBuilder`.

In order to support token streaming, we make this component accept a callback in `__init__`, and that callback will be called every time a new chunk of the streamed response is received.
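As an illustration of the callback mechanism (assuming the hypothetical constructor sketched above), a caller could print tokens to stdout as they arrive:

```python
def print_chunk(chunk: str) -> None:
    # Invoked once per streamed chunk; here we simply print tokens as they arrive.
    print(chunk, end="", flush=True)


generator = GPT4Generator(api_key="...", streaming_callback=print_chunk)
result = generator.run(prompts=["Summarize the proposal in one sentence."])
print(result["replies"][0])
```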