How to remove the context and prompts in the answer generated by LLM and langchain? #27666
ztl58553116 asked this question in Q&A (Unanswered)
Description
I'm a beginner with LangChain, and I'm trying to use it to build a Chinese chatbot that comforts people seeking counseling. When I create an instance of ConversationalRetrievalChain and use it to generate an answer, the output always contains the retrieved context and the prompt instructions. I have tried many ways to remove them but failed. Does anyone know how to solve this problem? I sincerely appreciate your replies.
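Since the question does not include the actual code, here is a minimal sketch of the usual ConversationalRetrievalChain pattern. The OpenAI model, embeddings, and placeholder document are assumptions, not taken from the question; substitute whatever model and vector store you actually use. The point it illustrates is that the chain returns a dict, and only the "answer" field should be shown to the user.

```python
# Minimal sketch, not the asker's code. Assumes an OpenAI chat model
# (langchain-openai package) and a FAISS store built from a placeholder text.
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Placeholder document; replace with your own counseling material.
vectorstore = FAISS.from_texts(
    ["放轻松，一切都会好起来的。"],
    OpenAIEmbeddings(),
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    memory=memory,
)

result = chain.invoke({"question": "我最近压力很大，该怎么办？"})
# The chain returns a dict; surface only the "answer" key to the user,
# not the whole result (which also carries the question and chat history).
print(result["answer"])
```

If the LLM behind the chain is a local Hugging Face text-generation pipeline (common for Chinese models), the echoed prompt and context often come from the pipeline returning the full prompt plus the completion; creating the transformers pipeline with return_full_text=False (or passing pipeline_kwargs={"return_full_text": False} to HuggingFacePipeline.from_model_id) usually removes it. This is only a guess at the cause, since the question does not say which model is used.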
System Info
langchain==0.2.0
langchain-community==0.2.0
langchain-core==0.2.0
langchain-text-splitters==0.2.0
langchainplus-sdk==0.0.20
requirements.txt