Replies: 4 comments 6 replies
-
🤖 Hello,

Based on the information provided, it seems that you're interested in understanding how the `batch()` method works. Here is its implementation:

```python
def batch(
    self,
    inputs: List[LanguageModelInput],
    config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
    max_concurrency: Optional[int] = None,
    **kwargs: Any,
) -> List[str]:
    config = self._get_config_list(config, len(inputs))
    if max_concurrency is None:
        llm_result = self.generate_prompt(
            [self._convert_input(input) for input in inputs],
            callbacks=[c.get("callbacks") for c in config],
            tags=[c.get("tags") for c in config],
            metadata=[c.get("metadata") for c in config],
            **kwargs,
        )
        return [g[0].text for g in llm_result.generations]
    else:
        batches = [
            inputs[i : i + max_concurrency]
            for i in range(0, len(inputs), max_concurrency)
        ]
        return [
            output
            for batch in batches
            for output in self.batch(batch, config=config, **kwargs)
        ]
```

As for the independence of batch calls, each call to the chain is independent of the others. The method simply iterates over the list of inputs and applies the chain to each one, without any interaction between the individual calls. If no memory component is set up for the chain, no state is shared between calls, so they are independent.

I hope this helps! If you have any other questions, feel free to ask.

Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
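As a quick usage sketch of the two branches above: this uses langchain's FakeListLLM test helper as a stand-in model, and assumes the `batch()` signature shown here, where `max_concurrency` is accepted as a direct keyword argument (in other versions it may live inside the config instead).

```python
from langchain.llms.fake import FakeListLLM

# FakeListLLM just replays canned responses -- a stand-in for a real model.
llm = FakeListLLM(responses=["answer 1", "answer 2", "answer 3"])

prompts = ["question 1", "question 2", "question 3"]

# No max_concurrency: all prompts go through generate_prompt in one pass.
print(llm.batch(prompts))

# max_concurrency=2: prompts are split into chunks of at most 2, and
# batch() recurses over each chunk.
print(llm.batch(prompts, max_concurrency=2))
```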
-
🤖 Hello,

You're correct in your understanding that the `batch()` method treats each input independently. When you use the `batch()` method, each input is converted and passed to the model separately, so no state carries over from one input to the next unless a memory component has been attached to the chain. Here's a simplified explanation of how the `batch()` method works: if `max_concurrency` is `None`, all inputs are sent through `generate_prompt` in a single pass; otherwise, the inputs are split into chunks of at most `max_concurrency` items and `batch()` is called recursively on each chunk.

Here is the relevant code:

```python
if max_concurrency is None:
    llm_result = self.generate_prompt(
        [self._convert_input(input) for input in inputs],
        callbacks=[c.get("callbacks") for c in config],
        tags=[c.get("tags") for c in config],
        metadata=[c.get("metadata") for c in config],
        **kwargs,
    )
    return [g[0].text for g in llm_result.generations]
else:
    batches = [
        inputs[i : i + max_concurrency]
        for i in range(0, len(inputs), max_concurrency)
    ]
    return [
        output
        for batch in batches
        for output in self.batch(batch, config=config, **kwargs)
    ]
```

This code can be found in the libs/langchain/langchain/llms/base.py file. I hope this helps! If you have any more questions, feel free to ask.

Sources
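To see the chunking in the else branch in isolation, here is the same slicing pattern as plain Python (no langchain needed):

```python
inputs = ["q1", "q2", "q3", "q4", "q5"]
max_concurrency = 2

# Same slicing as in batch(): chunks of at most max_concurrency inputs.
batches = [
    inputs[i : i + max_concurrency]
    for i in range(0, len(inputs), max_concurrency)
]
print(batches)  # [['q1', 'q2'], ['q3', 'q4'], ['q5']]
```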
-
I'm late to the party but I had the same question and thought I'd post what I found for future searchers. The core concept is the Runnable protocol, which lets most langchain components expose the same standardized methods (invoke, batch, stream, and their async variants).
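For instance, even a plain function wrapped in a RunnableLambda exposes that same surface. A minimal sketch (the import path assumes a recent langchain_core; older releases exposed it under langchain.schema.runnable):

```python
from langchain_core.runnables import RunnableLambda

# Any callable becomes a Runnable with the standard invoke/batch/stream methods.
double = RunnableLambda(lambda x: x * 2)

print(double.invoke(3))         # 6
print(double.batch([1, 2, 3]))  # [2, 4, 6]
```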
-
Is there any obvious advantage to using batch() rather than iterating over the inputs and calling invoke() on each one, apart from saving a few lines of code?
-
Hi there!
I need to pass different instances of a dataset to an LLM in independent calls. I am currently passing them to the model in a loop, like so:
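Roughly like this (a minimal, self-contained sketch; a fake LLM and the prompt variable name stand in for my real chain and data):

```python
from langchain.chains import LLMChain
from langchain.llms.fake import FakeListLLM
from langchain.prompts import PromptTemplate

# Placeholder chain just to make the example concrete; my real chain
# wraps an actual model.
llm = FakeListLLM(responses=["a", "b", "c"])
prompt = PromptTemplate.from_template("Summarize: {instance}")
chain = LLMChain(llm=llm, prompt=prompt)

dataset = ["row 1", "row 2", "row 3"]

results = []
for instance in dataset:
    # One independent call per dataset instance.
    results.append(chain.invoke({"instance": instance}))
```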
I am considering changing it to work with batch calls, like so:
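Something like this, with the same placeholder chain and dataset as above:

```python
# A single batch() call over the whole dataset instead of a Python loop.
results = chain.batch([{"instance": instance} for instance in dataset])
```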
When using batch calls, are the calls still independent of each other?
Setting verbose=True shows that the chain is entered only once for each batch, which makes sense. I have not set up any memory component for the chain, so I believe the calls should still be independent, but I would like to learn more about how batch() works internally and verify this. TY!