I am receiving multiple outputs for the same input when making calls to a WhatsApp chatbot using FastAPI and LangChain #9922
Replies: 1 comment
-
🤖 Hello,

Based on the code and the issue you've described, it seems like your chatbot is processing multiple questions as a single input, which could be causing the repeated responses. The LangChain framework is designed to handle one action per JSON blob, so if multiple questions are asked simultaneously, the chatbot might process only the last question and ignore the rest.

To resolve this issue, you might need to modify your chatbot to handle multiple questions separately, ensuring each question is processed as a separate action. If this is not feasible, you might need to instruct users to ask one question at a time.

Additionally, if your application is using the streaming feature of LangChain (

Regarding the handling of multiple inputs in LangChain, the AgentExecutor handles multiple inputs through the

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

Best,

Sources
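The suggestion above — processing each question as its own action — can be sketched as follows. This is a minimal illustration, not LangChain API: `run_agent` stands in for your `agent_executor.run(...)` call, and the naive question splitter is an assumption about how user messages are phrased.

```python
import re

def split_questions(text: str) -> list[str]:
    """Naively split one WhatsApp message into individual questions.

    Splits after each '?' (and on blank lines) so every fragment can be
    fed to the agent as its own input. Adjust the pattern to your data.
    """
    parts = re.split(r"(?<=\?)\s+|\n+", text.strip())
    return [p.strip() for p in parts if p.strip()]

def answer_each(message: str, run_agent) -> list[str]:
    # run_agent stands in for agent_executor.run(...) in the real app;
    # calling it once per question means one AgentExecutor chain run
    # per question instead of one run over the combined text
    return [run_agent(q) for q in split_questions(message)]

# Example with a stub agent:
replies = answer_each(
    "What is SSL pinning? How do I detect a rooted device?",
    lambda q: f"[answer to: {q}]",
)
```

Each reply can then be sent back to WhatsApp individually, so no question is silently dropped.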
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
I am running a chatbot that uses an OpenAI LLM and LangChain, and I have configured it to accept and send requests via Meta Webhooks (as it runs on WhatsApp) using FastAPI. However, whenever I ask it two or three questions simultaneously, it gives me repeated answers for only the last question, running the AgentExecutor chain multiple times.
```python
from fastapi import APIRouter, Request

from services import index
from services.prompt import (
    PREFIX,
    SUFFIX,
)
from langchain.memory import ConversationBufferMemory
from langchain.agents import ConversationalChatAgent, AgentType, Tool, AgentExecutor
from langchain.chat_models import ChatOpenAI, PromptLayerChatOpenAI
from langchain.chains import LLMChain, RetrievalQAWithSourcesChain

router = APIRouter()  # router setup implied by the @router.post decorator below

tools = [
    Tool(
        name="Mobile Security Knowledge Base Search",
        func=index.query_index,
        description="useful for when you need to search for mobile security related information. the input to this should be a single term or phrase.",
    )
]

def init_agent() -> ConversationalChatAgent:
    llm = PromptLayerChatOpenAI(
        model="gpt-3.5-turbo-16k",
        temperature=0.1,
        pl_tags=["langchain"],
        return_pl_id=True,
    )
    prompt = ConversationalChatAgent.create_prompt(
        tools=tools,
        system_message=PREFIX,
        human_message=SUFFIX,
    )
    llm_chain = LLMChain(llm=llm, prompt=prompt)
    agent = ConversationalChatAgent(llm_chain=llm_chain)
    return agent

def init_chain(agent: ConversationalChatAgent, memory: ConversationBufferMemory) -> AgentExecutor:
    ...  # body elided in the original post

@router.post("/webhook")
async def webhook_events(request: Request):
    payload = await request.json()
    if "statuses" in payload["entry"][0]["changes"][0]["value"]:
        print("Statuses request dropped")
        return 0
```