local model not running - Ollama "HUGGINGFACE_API_KEY not found in .env file" #20
Comments
Hi, please provide the full command you used to run the code interpreter. If you have any logs, you can attach them.
(xy) C:\Users\X>interpreter -m local-model -md code -dc
(xy) C:\Users\x>
This is the command from the terminal; I'm using a conda env.
This issue is fixed in this commit.
pip does not have the latest version yet; just use this build from the repo.
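For anyone hitting this before the fix reaches pip, installing the repo build would look roughly like this. The repository URL is an assumption inferred from the package name open_code_interpreter, not confirmed in this thread; check the project page for the canonical one.

```
:: Sketch: replace the pip release with the current repo build.
:: The URL below is assumed from the package name; verify it first.
pip install --upgrade git+https://github.com/haseeb-heaven/open-code-interpreter.git
```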
Describe the bug
Unable to use the Ollama local model.
To Reproduce
Steps to reproduce the behavior:
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\x\.conda\envs\xy\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Users\x\.conda\envs\xy\Lib\site-packages\open_code_interpreter\interpreter.py", line 56, in main
    interpreter = Interpreter(args)
                  ^^^^^^^^^^^^^^^^^
  File "C:\Users\x\.conda\envs\xy\Lib\site-packages\open_code_interpreter\libs\interpreter_lib.py", line 49, in __init__
    self.initialize()
  File "C:\Users\x\.conda\envs\xy\Lib\site-packages\open_code_interpreter\libs\interpreter_lib.py", line 85, in initialize
    self.initialize_client()
  File "C:\Users\x\.conda\envs\xy\Lib\site-packages\open_code_interpreter\libs\interpreter_lib.py", line 127, in initialize_client
    raise Exception(f"{api_key_name} not found in .env file.")
Exception: HUGGINGFACE_API_KEY not found in .env file.
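The traceback points at initialize_client in interpreter_lib.py: it resolves an API-key name for the selected model and raises when that key is missing from .env. A minimal sketch of that pattern, assuming python-dotenv; every name here except the exception message is hypothetical, not the project's actual code:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv


def initialize_client(model_name: str) -> None:
    """Hypothetical sketch of the check that produces this traceback."""
    load_dotenv()  # reads key=value pairs from .env in the working directory
    # If model routing decides the model goes through HuggingFace, it demands
    # a key; a local Ollama model presumably should never take this branch.
    api_key_name = "HUGGINGFACE_API_KEY"
    if os.getenv(api_key_name) is None:
        raise Exception(f"{api_key_name} not found in .env file.")
```

The telling detail is that a local Ollama model still lands in the HuggingFace branch, which is presumably what the commit linked above corrects. A dummy HUGGINGFACE_API_KEY entry in .env might silence the check, but that is an untested workaround, not a fix.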
Expected behavior
The app should start.
local-model.config:
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1
# The maximum number of new tokens that the model can generate.
max_tokens = 2048
# The start separator for the generated code.
start_sep = ```
# The end separator for the generated code.
end_sep = ```
# If True, the first line of the generated text will be skipped.
skip_first_line = True
# The model used for generating the code.
#HF_MODEL = ollama/qwen2.5-coder
api_base = https://localhost:11434/api/generate
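Two things stand out in this config: HF_MODEL is commented out, and api_base uses https even though Ollama serves plain HTTP by default. A quick way to verify the endpoint independently of the interpreter is a direct request to Ollama's generate API. This is a sketch, not project code; the model name is taken from the commented HF_MODEL line and must match what `ollama list` reports on your machine:

```python
import requests  # pip install requests

# Ollama listens on plain HTTP by default, so the scheme here is http,
# not the https used in the config above.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder",  # must match a model pulled via `ollama pull`
        "prompt": "Write a hello-world in Python.",
        "stream": False,           # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If this request succeeds while the interpreter still fails, the problem is in the interpreter's model routing rather than in the Ollama setup.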