Replies: 3 comments 3 replies
-
Hey @nstebbins! I'm here to help you with your coding questions and support you as you work through any issues. Let me know how I can assist you!

To return the generated question, you can pass `return_generated_question=True` to `create_retrieval_chain`:

```python
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.retrievers import RetrieverOutputLike
from langchain_core.runnables import Runnable
from langchain_core.vectorstores import VectorStoreRetriever


def build_history_aware_retriever(
    retriever: VectorStoreRetriever,
    llm: Runnable,
    contextualize_question_prompt: ChatPromptTemplate,
) -> RetrieverOutputLike:
    return create_history_aware_retriever(
        llm=llm,
        retriever=retriever,
        prompt=contextualize_question_prompt,
    )


def build_rag_chain_with_history(
    retriever: VectorStoreRetriever,
    llm: Runnable,
    contextualize_question_prompt_retriever: ChatPromptTemplate,
    contextualize_question_prompt_chain: ChatPromptTemplate,
) -> Runnable:
    history_aware_retriever = build_history_aware_retriever(
        retriever=retriever,
        llm=llm,
        contextualize_question_prompt=contextualize_question_prompt_retriever,
    )
    question_answer_chain = create_stuff_documents_chain(
        llm, contextualize_question_prompt_chain
    )
    return create_retrieval_chain(
        retriever=history_aware_retriever,
        combine_docs_chain=question_answer_chain,
        return_generated_question=True,  # Add this line to return the generated question
    )


# Example usage
inputs = {
    "input": "Your user input here",
    "chat_history": [],  # Collect chat history here (a sequence of messages)
}
chain = build_rag_chain_with_history(
    retriever=your_retriever_instance,
    llm=your_llm_instance,
    contextualize_question_prompt_retriever=your_contextualize_question_prompt_retriever,
    contextualize_question_prompt_chain=your_contextualize_question_prompt_chain,
)
result = chain.invoke(inputs)

# Extract the generated question
generated_question = result.get("generated_question")
print(generated_question)
```

In this example, the key addition is setting `return_generated_question=True` in the `create_retrieval_chain` call.
-
Hi @dosu, that didn't help. I got the following exception when I added the line you mentioned.
-
Hi @nstebbins, did you solve this problem?
-
Description
Hey guys,

I followed this tutorial here basically to a tee for creating a chain that performs RAG using a history-aware retriever.

I would, however, like to display the generated question that was created out of the chat history + original question and used to retrieve the K nearest documents.

With the old LangChain paradigm you used to be able to accomplish this by setting `return_generated_question` on the `BaseConversationalRetrievalChain` class. But now that's not an option, as `ConversationalRetrievalChain` is deprecated in favor of the `create_retrieval_chain` approach, as outlined in the docs here.