Use chromadb local vectorstore in RAG chain. TypeError: OpenAIEmbeddings.embed_query() missing 1 required positional argument: 'text' #23637
Unanswered
GraceChen130 asked this question in Q&A
Replies: 1 comment
-
Hey @GraceChen130! I'm here to help you with your technical issues and questions. Let's work together to solve this bug!

The issue you're encountering is due to the incorrect instantiation of the OpenAIEmbeddings class: the class itself is passed to Chroma instead of an instance of it. Here's the corrected code snippet:

```python
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from operator import itemgetter

# Correct instantiation of OpenAIEmbeddings: note the parentheses
vectorstore = Chroma.from_documents(
    documents=splits, embedding=OpenAIEmbeddings(), persist_directory=vectorstore_dir
)
vectorstore2 = Chroma(persist_directory=vectorstore_dir, embedding_function=OpenAIEmbeddings())

r1 = vectorstore.as_retriever()
r2 = vectorstore2.as_retriever()


def rag_chain_query_fusion(self, question, retriever):
    retrieval_chain_rag_fusion = self.generate_queries | retriever.map() | self.reciprocal_rank_fusion
    context_template = """Answer the following question based on this context:

{context}

Question: {question}
"""
    prompt = ChatPromptTemplate.from_template(context_template)
    final_rag_chain = (
        {"context": retrieval_chain_rag_fusion, "question": itemgetter("question")}
        | prompt
        | llm
        | StrOutputParser()
    )
    output = final_rag_chain.invoke({"question": question})
    print(output)
```

By ensuring that OpenAIEmbeddings() is instantiated with parentheses rather than passed as a bare class, embedding_function receives an object with a bound embed_query method and the TypeError goes away.
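To see why the missing parentheses produce exactly this error, here is a minimal, self-contained sketch; FakeEmbeddings is a hypothetical stand-in for OpenAIEmbeddings, not part of any library:

```python
# FakeEmbeddings is a hypothetical stand-in for OpenAIEmbeddings,
# used only to illustrate class-vs-instance behavior.
class FakeEmbeddings:
    def embed_query(self, text: str) -> list[float]:
        return [0.0, 0.0, 0.0]

good = FakeEmbeddings()        # instance: embed_query is a bound method
print(good.embed_query("hi"))  # [0.0, 0.0, 0.0]

bad = FakeEmbeddings           # class: embed_query is a plain function
try:
    bad.embed_query("hi")      # "hi" binds to `self`, leaving `text` unfilled
except TypeError as e:
    print(e)  # ... missing 1 required positional argument: 'text'
```

This is the same failure mode the traceback below hits in chroma.py: Chroma stores whatever was passed as embedding_function and only calls .embed_query(query) on it at query time, so a bare class sneaks through construction and fails later.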
-
Description
I have a RAG pipeline for an LLM. My code works if I generate the vectorstore from Chroma inside the script, but I don't want to do that every time I ask a new question, so I tried to save it once and then load it:

```python
vectorstore2 = Chroma(persist_directory=vectorstore_dir, embedding_function=OpenAIEmbeddings)
```
However, I found that vectorstore2 is not exactly the same as vectorstore: vectorstore.embeddings is an {OpenAIEmbeddings} object, while vectorstore2.embeddings is a {ModelMetaclass}. And after I run .as_retriever():

```
r1: tags=['Chroma', 'OpenAIEmbeddings'], vectorstore = ....
r2: tags=['Chroma', 'ModelMetaclass'], vectorstore = ....
```
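The ModelMetaclass tag is the tell that a class, rather than an instance, was handed to Chroma. A quick hedged check, assuming only that langchain_openai is installed:

```python
import inspect
from langchain_openai import OpenAIEmbeddings

emb = OpenAIEmbeddings  # missing parentheses, as in the load step above
print(inspect.isclass(emb))  # True -> a class was passed, not an instance
print(type(emb).__name__)    # 'ModelMetaclass' (pydantic's metaclass), matching the tag on r2
```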
Then, when I call the RAG chain, r2 gives me a TypeError:

TypeError: OpenAIEmbeddings.embed_query() missing 1 required positional argument: 'text'

The full traceback is:
```
Traceback (most recent call last):
  File "F:\RAG_practice_101\8_query_fusion_classify_zhipuai-debug.py", line 157, in <module>
    query_processor.routed_RAG(input_2)
  File "F:\RAG_practice_101\8_query_fusion_classify_zhipuai-debug.py", line 123, in routed_RAG
    self.rag_chain_query_fusion(input['query'])
  File "F:\RAG_practice_101\8_query_fusion_classify_zhipuai-debug.py", line 108, in rag_chain_query_fusion
    output = final_rag_chain.invoke({"question": question})
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 2493, in invoke
    input = step.invoke(input, config, **kwargs)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 3140, in invoke
    output = {key: future.result() for key, future in zip(steps, futures)}
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 3140, in <dictcomp>
    output = {key: future.result() for key, future in zip(steps, futures)}
  File "C:\Users\Grace\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 458, in result
    return self.__get_result()
  File "C:\Users\Grace\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\Grace\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 2495, in invoke
    input = step.invoke(input, config)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 4275, in invoke
    return self._call_with_config(self._invoke, input, config, **kwargs)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 1596, in _call_with_config
    context.run(
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\config.py", line 380, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 4270, in _invoke
    return self.bound.batch(inputs, configs, **kwargs)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 641, in batch
    return cast(List[Output], list(executor.map(invoke, inputs, configs)))
  File "C:\Users\Grace\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "C:\Users\Grace\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "C:\Users\Grace\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "C:\Users\Grace\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\Grace\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\config.py", line 499, in _wrapped_fn
    return contexts.pop().run(fn, *args)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\runnables\base.py", line 634, in invoke
    return self.invoke(input, config, **kwargs)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\retrievers.py", line 221, in invoke
    raise e
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\retrievers.py", line 214, in invoke
    result = self._get_relevant_documents(
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_core\vectorstores.py", line 797, in _get_relevant_documents
    docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_community\vectorstores\chroma.py", line 349, in similarity_search
    docs_and_scores = self.similarity_search_with_score(
  File "F:\RAG_practice_101\.venv\lib\site-packages\langchain_community\vectorstores\chroma.py", line 438, in similarity_search_with_score
    query_embedding = self._embedding_function.embed_query(query)
TypeError: OpenAIEmbeddings.embed_query() missing 1 required positional argument: 'text'
```
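The failing frame is chroma.py line 438, where Chroma calls self._embedding_function.embed_query(query); because the class itself was stored, the query string binds to self and text goes missing. A hedged sanity check after applying the fix (vectorstore_dir is assumed from the original script, and a valid OpenAI API key is needed to actually run the query):

```python
# Hedged verification sketch; vectorstore_dir comes from the original script.
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore2 = Chroma(
    persist_directory=vectorstore_dir,
    embedding_function=OpenAIEmbeddings(),  # an instance, not the bare class
)
print(type(vectorstore2.embeddings).__name__)  # 'OpenAIEmbeddings', not 'ModelMetaclass'
docs = vectorstore2.as_retriever().invoke("test question")  # no TypeError now
```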
System Info