# Experimentation around LLM and MicroProfile
Running Ollama with the llama3.1 model:

```shell
CONTAINER_ENGINE=$(command -v podman || command -v docker)
# Note: --replace is Podman-specific; drop it if CONTAINER_ENGINE resolves to Docker.
$CONTAINER_ENGINE run -d --rm --name ollama --replace --pull=always -p 11434:11434 -v ollama:/root/.ollama --stop-signal=SIGKILL docker.io/ollama/ollama
# Pulls llama3.1 on first use, then opens an interactive prompt against it.
$CONTAINER_ENGINE exec -it ollama ollama run llama3.1
```
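Once the container is up, Ollama exposes an HTTP API on port 11434, which is the natural integration point for a MicroProfile application. Below is a minimal sketch of calling Ollama's `/api/generate` endpoint with the MicroProfile Rest Client; it assumes the MicroProfile Rest Client and Jakarta JSON-P APIs (with implementations) are on the classpath, and the class name, interface shape, and prompt are illustrative, not part of this project.

```java
import java.net.URI;

import jakarta.json.Json;
import jakarta.json.JsonObject;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

import org.eclipse.microprofile.rest.client.RestClientBuilder;

public class OllamaSmokeTest {

    // Type-safe view of the slice of Ollama's HTTP API used here.
    @Path("/api")
    public interface OllamaApi {
        @POST
        @Path("/generate")
        @Consumes(MediaType.APPLICATION_JSON)
        @Produces(MediaType.APPLICATION_JSON)
        JsonObject generate(JsonObject request);
    }

    public static void main(String[] args) {
        // Build a client programmatically against the container started above.
        OllamaApi ollama = RestClientBuilder.newBuilder()
                .baseUri(URI.create("http://localhost:11434"))
                .build(OllamaApi.class);

        // "stream": false asks Ollama for a single JSON document instead of chunks.
        JsonObject request = Json.createObjectBuilder()
                .add("model", "llama3.1")
                .add("prompt", "Say hello in one short sentence.")
                .add("stream", false)
                .build();

        // The generated text comes back in the "response" field.
        System.out.println(ollama.generate(request).getString("response"));
    }
}
```

Setting `"stream"` to `false` keeps the example free of chunked-response handling; a real application would more likely consume the streamed output token by token.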
If you want to contribute, please have a look at CONTRIBUTING.md.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.