Replies: 1 comment
-
Maybe this can help: https://python.langchain.com/docs/modules/agents/tools/how_to/human_approval
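In case it helps to see it inline, the pattern on that page looks roughly like this (a minimal sketch; it assumes a LangChain version that still ships `HumanApprovalCallbackHandler`, and the `_approve` helper and the choice of `ShellTool` are just illustrative):

```python
# Hedged sketch of the "human approval" pattern from the linked page.
# Assumes a LangChain release that exports HumanApprovalCallbackHandler;
# the _approve helper and ShellTool are only examples.
from langchain.callbacks import HumanApprovalCallbackHandler
from langchain.tools import ShellTool


def _approve(tool_input: str) -> bool:
    # Ask the human before the tool runs; anything but "y"/"yes" rejects it.
    answer = input(f"Approve this tool input?\n{tool_input}\n(y/N): ")
    return answer.strip().lower() in ("y", "yes")


# Attaching the callback makes the tool block on _approve() before executing;
# a rejection raises HumanRejectedException and stops the agent run.
tool = ShellTool(callbacks=[HumanApprovalCallbackHandler(approve=_approve)])
```

For a serverless bot the `input()` call would of course be replaced by whatever out-of-band confirmation mechanism you already use; the handler only needs a callable that returns a bool.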
-
I have a bot which I am running serverless. It uses tools to answer queries, BUT it needs to stop at every step and ask the user to verify that its next action makes sense.

What I did was look at how the `_call()` function of `langchain.agents.agent.AgentExecutor` is implemented. Based on that, I used an `LLMSingleActionAgent` that outputs `(Action, Observation)` tuples, which I store in a list; that list is updated and passed back to the agent as history on the next iteration, once the user provides feedback (which can be e.g. 1 day later).

I now want to expand and generalize my bot to behave more like an autonomous, general task-solving agent, following either the babyagi or the autogpt approach. However, I now find myself scratching my head again, going down the rabbit hole of the `_call` implementation of `langchain.experimental.plan_and_execute.PlanAndExecute` from Plan and Execute, but this time it is one level of nesting deeper, since each Step requires a full AgentExecutor loop.

I am starting to "unwrap" the code to see how best to modularize it, but it is a lot of work to follow all the layers, and I can't help but think that there has to be a better way to do this, or that I am missing an already implemented solution for this common use case.
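For reference, my current loop looks roughly like this (a simplified sketch; the two helper functions and their names are just how I split it up, and the persistence of `intermediate_steps` between serverless invocations is left out):

```python
# Simplified sketch of the (Action, Observation) loop described above.
# `agent` is the LLMSingleActionAgent, `tools` the usual list of Tools;
# storing/loading intermediate_steps between invocations is omitted.
from typing import List, Tuple
from langchain.schema import AgentAction, AgentFinish


def propose_next_action(agent, user_input: str,
                        intermediate_steps: List[Tuple[AgentAction, str]]):
    """Ask the agent for its next step without executing anything yet.

    Returns an AgentAction to show the user for approval, or an AgentFinish
    when the agent considers the task done. The keyword arguments must match
    the prompt variables of the custom agent (here just `input`).
    """
    return agent.plan(intermediate_steps, input=user_input)


def execute_approved_action(tools, action: AgentAction,
                            intermediate_steps: List[Tuple[AgentAction, str]]):
    """Run the tool for an action the user has already approved and append
    the resulting (Action, Observation) pair to the history."""
    tool = next(t for t in tools if t.name == action.tool)
    observation = tool.run(action.tool_input)
    intermediate_steps.append((action, observation))
    return intermediate_steps
```

Each invocation either calls `propose_next_action` and persists the pending action for the user to review, or, once feedback arrives, calls `execute_approved_action` and persists the grown `intermediate_steps` list so the next invocation can re-enter the loop with that history.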
So: is there a recommended way to get this kind of pause-and-resume, human-in-the-loop behaviour without re-implementing the `_call()` functions of the Executor Chains?

Thank you very much in advance!