🤖 Hi Luca,

The RAG implementation in LangChain adds the context from the vector DB to the system prompt rather than the user prompt because the system prompt is designed to be dynamic and to adapt to the context of the current question. In the code:

```python
from langchain.chains import StuffDocumentsChain
from langchain.prompts import PromptTemplate

# Each retrieved document is rendered with this template.
document_prompt = PromptTemplate(
    input_variables=["page_content"], template="Context:\n{page_content}"
)

# The rendered documents are concatenated and injected into the "context"
# variable of the surrounding llm_chain's prompt.
# (llm_chain and callbacks come from the enclosing chain setup.)
combine_documents_chain = StuffDocumentsChain(
    llm_chain=llm_chain,
    document_variable_name="context",
    document_prompt=document_prompt,
    callbacks=callbacks,
)
```

This design allows the system to regenerate the prompt from the context retrieved from the vector database for each question. If the context were added to a static user prompt, it would not change based on the current question, which would limit the system's ability to provide accurate, context-aware responses.

I hope this clarifies your question. If you have any more questions, feel free to ask.

Best
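For intuition, here is a minimal sketch of how the retrieved context typically ends up in the system message while the human message carries only the question. This is not LangChain's actual source; the template wording and variable names are illustrative:

```python
from langchain.prompts import ChatPromptTemplate

# Sketch only: the system message is rebuilt from freshly retrieved
# documents on every question, so it stays context-aware.
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the following context:\n\n{context}"),
    ("human", "{question}"),
])

# Re-rendering with new context produces a new system message each turn.
messages = prompt.format_messages(
    context="Context:\nRetrieved document text goes here.",
    question="Why is the context in the system prompt?",
)
```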
Hi all, I'm wondering why the RAG implementation in LangChain adds the context from the vector DB to the system prompt instead of the user prompt.
I'm trying to understand. Since the system prompt is re-initialised at each question, I wonder what the advantage of this approach is. AFAIK the system prompt is supposed to be repeated with some short instructions that are common to all the prompts, isn't it? The sketch below shows the alternative I have in mind.
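Just a rough sketch (the wording and names are made up, not actual LangChain code): static, shared instructions in the system message, with the per-question context carried in the user message instead:

```python
from langchain.prompts import ChatPromptTemplate

# Alternative design: the system message holds only the short, shared
# instructions; the retrieved context travels in the user message.
alt_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer from the given context."),
    ("human", "Context:\n{context}\n\nQuestion: {question}"),
])
```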
Thank you in advance
Luca