inputs must have less than 1000 tokens. #1681
Unanswered
rohan-uiuc asked this question in Q&A
Replies: 1 comment · 5 replies
-
Stuck with the same issue as above. I have tried many other Hugging Face models and the issue persists across all of them. Looking at `base.py` gives no clue as to how to modify the length of the document or the number of tokens fed to the Hugging Face LLM.
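For what it's worth, the limit doesn't seem to be enforced in `base.py` at all: the prompt that trips it is assembled upstream, from the retrieved chunks plus the question and the chain's template. A rough budget check makes this concrete. Everything below is an illustrative sketch, not LangChain API; whitespace splitting is a crude stand-in for the model's real tokenizer, so leave a safety margin.

```python
def estimate_tokens(text: str) -> int:
    # Crude stand-in: FLAN-UL2's SentencePiece tokenizer will count
    # differently, so treat this as a lower bound and keep headroom.
    return len(text.split())

def build_prompt(question: str, chunks: list[str]) -> str:
    # Illustrative template; the real chain's prompt adds its own overhead.
    context = "\n\n".join(chunks)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def fits_limit(question: str, chunks: list[str], limit: int = 1000) -> bool:
    # The inference API rejects the *assembled* prompt, so the check has
    # to cover chunks + question + template together.
    return estimate_tokens(build_prompt(question, chunks)) < limit
```

So the knob to turn is how much text gets retrieved and stuffed into the prompt, not anything inside `llms/base.py`.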
-
I am trying to create a QA bot where I have two options for getting the answer: OpenAI and FLAN-UL2.
I am creating the index by scraping a website and from a PDF. As is usually done, I am splitting the documents and using `load_qa_with_sources_chain` to create the chain over the created index.
Earlier I used a simple `chain()` call to get the response from the `similarity_search` results, which worked perfectly with OpenAI GPT.
However, when I use the same setup with FLAN-UL2, it throws the following error:
ValueError: Error raised by inference API: Input validation error: inputs must have less than 1000 tokens. Given: 2023
I tried the same with `VectorDBQAWithSourcesChain` and it results in the same error.
I understood that the `inputs` variable is actually the `prompts` being passed to the `_call` method in `llms/base.py`, but that is created internally. How do I change the size of that? Am I missing something or doing something wrong? Here is the link to the source: https://github.com/rohan-uiuc/makerlab-bot/tree/compare
Any inputs are appreciated.
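The error says 2023 tokens against a 1000-token cap, i.e. roughly double the budget, which suggests the retrieved chunks themselves are too large for this model. The usual fix is to split the source documents into smaller pieces at indexing time (in LangChain that would mean a smaller `chunk_size` on the text splitter, and possibly a `map_reduce` chain type instead of stuffing everything into one prompt). Here is a minimal sketch of a token-budget splitter; it uses word count as a stand-in tokenizer and all names are illustrative, not LangChain API:

```python
def split_by_token_budget(text: str, max_tokens: int = 200,
                          overlap: int = 20) -> list[str]:
    # Word-count stand-in for a real tokenizer. Pick max_tokens so that
    # (number of retrieved chunks) * max_tokens + question + template
    # stays safely under the model's 1000-token cap.
    words = text.split()
    step = max_tokens - overlap  # consecutive chunks share `overlap` words
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), step)]
```

With, say, 4 retrieved chunks of at most 200 tokens each, the assembled prompt stays around 800 tokens plus the question and template, inside the limit that FLAN-UL2's inference endpoint enforces.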