
Could not load Llama model from path #2

Open

xmagcx opened this issue Jul 7, 2023 · 3 comments

Comments

xmagcx commented Jul 7, 2023

52, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\mauri\Downloads\DocQA-main\DocQA-main\app.py", line 42, in <module>
    llm = LlamaCpp(model_path="./models/llama-7b.ggmlv3.q4_0.bin")
  File "C:\Users\mauri\Downloads\DocQA-main\DocQA-main\venv\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: ./models/llama-7b.ggmlv3.q4_0.bin. Received error Model path does not exist: ./models/llama-7b.ggmlv3.q4_0.bin (type=value_error)

What version of Python are you using?

@afaqueumer
Owner

I guess you need to edit the path or place the model in the same directory. This is a path error; the model path was hard-coded.

@unkrejativ

Hey @xmagcx, were you able to solve the problem?

@six-finger

Replace pipenv with python -m.


4 participants