-
🤖 Hi @drew-wks! I'm Dosu, an AI bot here to assist with your LangChain inquiries. I'm here to help you solve bugs, answer your questions, and guide you in becoming a contributor while we wait for a human maintainer. Let's get started!

Based on the issues and discussions I found in the LangChain repository, it seems like you can use `RetrievalQAWithSourcesChain` with `return_source_documents=True`:

```python
from langchain import OpenAI
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Create a document search object with source metadata
# (`texts` is your list of document strings)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(
    texts,
    embeddings,
    metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))],
)

# Create a chain with the document search object and specify
# that source documents should be returned
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    return_source_documents=True,
)

# Use the chain to answer a question and return the source documents
result = chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)

# The result will include the answer, the sources string, and the source documents
print(result)
```

In this example, the `return_source_documents=True` argument tells the chain to include the retrieved documents in its output alongside the answer. I hope this helps! If you have any other questions, feel free to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
I ran into the same thing, and the approach below works well for me. What helped me understand it was writing everything out explicitly and tracing what each step receives and returns, instead of packing it all into one pipeline. I'm sure this can be done in a more concise way, but this worked for me and I can understand it :)
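A minimal sketch of that explicit style (assuming `retriever`, `prompt`, and `llm` are already configured; the function and variable names are just illustrative):

```python
from langchain_core.output_parsers import StrOutputParser

def format_docs(docs):
    # Join the retrieved documents into one plain-text context block
    return "\n\n".join(doc.page_content for doc in docs)

def ask_with_sources(question: str) -> dict:
    # Retrieve the documents yourself so you keep a reference to them
    docs = retriever.get_relevant_documents(question)
    # Run only the generation part as an LCEL pipeline
    answer = (prompt | llm | StrOutputParser()).invoke(
        {"context": format_docs(docs), "question": question}
    )
    # Return the answer together with the documents it was based on
    return {"answer": answer, "source_documents": docs}

result = ask_with_sources("What did the president say about Justice Breyer?")
print(result["answer"])
print(result["source_documents"])
```

It's called like a normal function, so the retrieved documents never get lost inside the pipeline.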
-
From the documentation, this approach can return the source documents:
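This is a condensed version of the pattern from the LangChain docs on returning sources (assuming `retriever`, `prompt`, and `llm` are defined; `format_docs` is the usual helper that joins page contents):

```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Build the answer from the retrieved documents
rag_chain_from_docs = (
    {
        "context": lambda x: format_docs(x["documents"]),
        "question": itemgetter("question"),
    }
    | prompt
    | llm
    | StrOutputParser()
)

# Run retrieval, keep its raw output, then attach the generated answer
rag_chain_with_source = RunnableParallel(
    {"documents": retriever, "question": RunnablePassthrough()}
).assign(answer=rag_chain_from_docs)

result = rag_chain_with_source.invoke("What did the president say about Justice Breyer?")
# result["documents"] holds the source Documents, result["answer"] the answer
```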
-
Why does LangChain have to do this?
-
Damn. I will continue to use the legacy chain for this one. Seems overcomplicated with LCEL lol.
-
How do I get source documents to pass through the LCEL pipeline? This was an argument in LLMChain, but now I'm lost. ChatGPT and I have spent about 3 hours with Harrison's examples, but honestly they do little for instruction: each one crams in a bunch of other concepts, so I can't follow. Also there's the whole model vs. llm, PromptTemplate versus ChatPromptTemplate, Runnable this and iterable that. It's soooo confusing.
Can anyone suggest how to adapt this code to pull the sources through? If it's easier to do it with PromptTemplate, that's fine.
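The chain is essentially this shape (a trimmed-down sketch; `retriever` and `llm` stand in for the real setup):

```python
from langchain.prompts import load_prompt
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Load the prompt template from its JSON file
prompt = load_prompt("generic_prompt_template.json")

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Plain LCEL pipeline: the retrieved docs feed the prompt, but the
# Document objects themselves never make it to the output
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

answer = chain.invoke("What does the document say?")  # returns only the answer string
```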
And the generic_prompt_template.json file looks like this:
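Something in the shape LangChain's `load_prompt` expects; the template text here is just a placeholder:

```json
{
    "_type": "prompt",
    "input_variables": ["context", "question"],
    "template": "Use the following context to answer the question.\n\nContext: {context}\n\nQuestion: {question}\n\nAnswer:"
}
```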