
## References

## Notes

You can spawn Ollama first and then download the desired LLM models via docker exec. Alternatively, spawn the whole stack directly and download the models from within Open WebUI in your browser.

```bash
# spawn ollama and ui
docker compose up -d

# (optional) download an llm model via docker exec
docker exec ollama ollama run llama3:8b
```
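The `docker compose up -d` step assumes a compose file that defines both services. As a rough orientation only, such a file might look like the sketch below; the image tags, service names, volumes, and port mapping here are assumptions, so defer to the actual docker-compose.yml shipped in this repository.

```yaml
# Minimal sketch of the stack (assumed names and tags; see the repository's compose file)
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama:/root/.ollama                  # persist downloaded models
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the ollama service
    ports:
      - "127.0.0.1:8080:8080"                 # expose the UI on localhost only
    volumes:
      - open-webui:/app/backend/data          # persist users, chats and settings
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama:
  open-webui:
```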

Afterwards, we can browse Open WebUI at http://127.0.0.1:8080 and register our first user account. You may want to disable open user registration later on by uncommenting the ENABLE_SIGNUP environment variable and restarting the Open WebUI container.
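For illustration, disabling registration could look like the fragment below in the Open WebUI service definition; the exact placement of the variable depends on the compose file used here.

```yaml
services:
  open-webui:
    environment:
      # disable open registration once the first (admin) account exists
      - ENABLE_SIGNUP=false
```

Then recreate the container (e.g. `docker compose up -d open-webui`) so the change takes effect.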

> [!TIP]
> You likely want to pass a GPU into the Ollama container. Please read this.
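As one common approach for NVIDIA GPUs (assuming the NVIDIA Container Toolkit is installed on the host), a compose-level device reservation could look like the sketch below; the linked instructions remain the authoritative reference.

```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all              # or a specific number of GPUs
              capabilities: [gpu]
```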