llamacpp.md

File metadata and controls

24 lines (19 loc) · 633 Bytes

# LlamaCpp

Run the llama.cpp server binary to start the API server (in recent llama.cpp releases the binary is named `llama-server`). If running on a remote server, be sure to set `--host` to `0.0.0.0` so it accepts connections from other machines:

```shell
.\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models\meta\llama\codellama-7b-instruct.Q8_0.gguf
```
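Once the server is listening, you can sanity-check it from the command line with a request to llama.cpp's `/completion` endpoint. This is a minimal sketch; the prompt and `n_predict` values are arbitrary, and the host/port should match whatever you passed to the server:

```shell
# Hit the llama.cpp server's /completion endpoint to confirm it responds.
# Adjust localhost:8080 to your actual --host/--port settings.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fib(n):", "n_predict": 32}'
```

A JSON response containing a `content` field indicates the server is up and serving completions.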

After it's up and running, change `~/.continue/config.json` to look like this:

```json
{
  "models": [
    {
      "title": "Llama CPP",
      "provider": "llama.cpp",
      "model": "MODEL_NAME",
      "apiBase": "http://localhost:8080"
    }
  ]
}
```
