Does langchain support Input Parsing? #23750
Unanswered · blacksmithop asked this question in Q&A
Replies: 1 comment
Yes, LangChain supports input parsing for Runnables, Chains, and Agents, similar to how it supports custom output parsers. This is evident from how the agent assembles and passes its inputs before invoking the underlying runnable:

```python
inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
final_output: Any = None
if self.stream_runnable:
    async for chunk in self.runnable.astream(
        inputs, config={"callbacks": callbacks}
    ):
        if final_output is None:
            final_output = chunk
        else:
            final_output += chunk
else:
    final_output = await self.runnable.ainvoke(
        inputs, config={"callbacks": callbacks}
    )
return final_output
```

This demonstrates that LangChain can handle and parse inputs for its components, similar to how it handles custom output parsing [1].
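The same composition mechanics can be used to put an explicit input-parsing step in front of a prompt, model, chain, or agent. Below is a minimal sketch using LCEL, assuming langchain-core 0.2.x and langchain-openai are installed; `normalize_question` is a hypothetical helper, not a LangChain API.

```python
# A minimal sketch of input parsing with LCEL, assuming langchain-core 0.2.x
# and langchain-openai. `normalize_question` is a hypothetical helper, not
# part of LangChain.
from langchain_core.runnables import RunnableLambda
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI


def normalize_question(raw: dict) -> dict:
    # Example "input parser": strip whitespace and lowercase the question
    # before it reaches the prompt template.
    return {"question": raw["question"].strip().lower()}


prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
llm = ChatOpenAI(model="gpt-3.5-turbo")

# RunnableLambda lets an arbitrary function act as an input-parsing step,
# mirroring how an output parser sits at the end of the chain.
chain = RunnableLambda(normalize_question) | prompt | llm | StrOutputParser()

# chain.invoke({"question": "  What Is LangChain?  "})
```

Because `RunnableLambda` is itself a Runnable, the parsing step participates in sync, async, and batch invocation just like the rest of the chain.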
Example Code
Description
I was wondering if there is any similar support for parsing the input to the LLM, Chain, or Agent.
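For reference (this sketch is not part of the original question), the output-parsing support being compared against typically looks like a small subclass of `BaseOutputParser`. The class below is a hypothetical example, assuming langchain-core 0.2.x:

```python
# A minimal custom output parser, assuming langchain-core 0.2.x.
# CommaSeparatedListParser is a hypothetical example, not a LangChain class.
from langchain_core.output_parsers import BaseOutputParser


class CommaSeparatedListParser(BaseOutputParser[list[str]]):
    """Split the model's text output into a list of trimmed items."""

    def parse(self, text: str) -> list[str]:
        return [item.strip() for item in text.split(",") if item.strip()]


# Usage:
# CommaSeparatedListParser().parse("red, green, blue")  # -> ["red", "green", "blue"]
```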
System Info
langchain==0.2.2
langchain-community==0.2.3
langchain-core==0.2.4
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
Windows
Python 3.11.5