How to make Model answer only from my docs #10036
-
Hi, I'm using LangChain to ingest a TXT database into a vector one and feed it to an LLM model.
Replies: 1 comment
-
🤖 Hi there,
Yes, you can configure LangChain to return answers only from the ingested TXT database, rather than using its pre-trained information. This can be achieved by using the QUESTION_PROMPT and COMBINE_PROMPT templates defined in the map_reduce_prompt.py file. Here is the relevant code:
from langchain.prompts import PromptTemplate
# question_prompt_template as defined in langchain/chains/qa_with_sources/map_reduce_prompt.py
question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text verbatim.
{context}
Question: {question}
Relevant text, if any:"""
QUESTION_PROMPT = PromptTemplate(
template=question_prompt_template, input_variables=["context", "question"]
)
combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""
COMBINE_PROMPT = PromptTemplate(
template=combine_prompt_template, input_variables=["summaries", "question"]
)
In this way, LangChain ensures that the language model only returns answers that are present in the ingested TXT database, rather than using its pre-trained information. You can find more details in the map_reduce_prompt.py file.
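To actually use these prompts, you would wire them into a question-answering chain. Here is a minimal sketch, assuming an OpenAI LLM; the model choice and temperature are illustrative, not something mandated above:
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI
# Build a map_reduce QA-with-sources chain that uses the custom prompts above,
# so answers are assembled only from the documents you pass in
chain = load_qa_with_sources_chain(
    OpenAI(temperature=0),
    chain_type="map_reduce",
    question_prompt=QUESTION_PROMPT,
    combine_prompt=COMBINE_PROMPT,
)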
As for the process of ingesting a TXT database into a vector store in LangChain, here is a sample code snippet:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.epsilla import Epsilla
from pyepsilla import vectordb
# Initialize the Epsilla client
client = vectordb.Client()
vectorstore = Epsilla(
client=client,
embeddings=OpenAIEmbeddings(),
db_path="/tmp/vectorstore",
db_name="langchain_store"
)
# Read the texts from the TXT database
with open('database.txt', 'r') as file:
    texts = file.readlines()
# Add the texts to the vector store
vectorstore.add_texts(texts)
In this example, the texts read from database.txt are embedded with OpenAIEmbeddings and stored in the Epsilla vector store. I hope this helps! Let me know if you have any other questions.
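With the texts ingested, a retrieval step ties everything together. The following is a rough sketch, assuming the chain built in the earlier snippet is in scope; the question string and the k value are placeholders:
question = "What does my document say about X?"  # hypothetical example question
# Fetch the chunks most similar to the question from the Epsilla store
docs = vectorstore.similarity_search(question, k=4)
# Answer only from those chunks; the combine prompt forces a SOURCES section
result = chain({"input_documents": docs, "question": question}, return_only_outputs=True)
print(result["output_text"])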
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.