
# Model Configs

This directory provides example configurations for the provider plugins currently supported by the burpference tool.

**Important:** because Burp Suite cannot read environment variables from the filesystem's OS environment, you will need to explicitly define API key values in the per-provider configuration `.json` files (i.e., here). The environment-variable-style values shown below (such as `{$ANTHROPIC_API_KEY}`) are placeholders for illustration only.
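
One way to keep real keys out of tracked files is a small helper script that copies a provider config and fills in the key from your shell environment. The sketch below is hypothetical and not part of burpference: the filenames are placeholders, and it assumes your local copy is valid JSON (without the inline `<--` annotations shown in the examples below).

```python
# inject_key.py -- hypothetical helper, not part of burpference.
# Copies a provider config and fills in the API key from the shell
# environment so the real key never lands in a git-tracked file.
import json
import os

SRC = "anthropic.json"        # placeholder: tracked example config
DST = "anthropic.local.json"  # placeholder: git-ignored local copy

with open(SRC) as f:
    config = json.load(f)

# Overwrite the "{$ANTHROPIC_API_KEY}" placeholder with the real key.
config["headers"]["x-api-key"] = os.environ["ANTHROPIC_API_KEY"]

with open(DST, "w") as f:
    json.dump(config, f, indent=2)
```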

If you intend to fork or contribute to burpference, ensure that your local config files are excluded from git tracking via `.gitignore`. There's also a pre-commit hook in the repo as an additional safety net; install pre-commit hooks here.


## Ollama GGUF

Example Ollama `/chat` GGUF model configuration:

To serve inference as part of burpference, the model must be running on the API endpoint (your local host), e.g.: `ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M`.

Ensure you follow the setup steps here as a prerequisite.

```json
{
  "api_type": "ollama",
  "stream": false,
  "host": "http://localhost:11434/api/chat",
  "model": "hf.co/{username}/{repository}", <-- ensure you replace these variables
  "quantization": "Q4_K_M", <-- optional
  "max_input_size": 32000 <-- recommended: adjust based on the loaded model and Ollama restrictions
}
```
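
As a quick sanity check that the host and model in the config above are actually serving, a request like the following should return a completion. This is a minimal sketch using Python's `requests`, not part of burpference; the model name is a placeholder for whichever GGUF model you pulled.

```python
# Minimal sketch: verify the Ollama /api/chat endpoint is serving.
import requests

payload = {
    "model": "hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M",  # placeholder
    "stream": False,
    "messages": [{"role": "user", "content": "Reply with one word."}],
}

resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```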

## Ollama Inference

Example Ollama `/generate` (or `/chat`) inference model configuration:

To serve inference as part of burpference, the model must be running on the API endpoint (your local host), e.g.: `ollama run mistral-small`.

```json
{
  "api_type": "ollama",
  "format": "json",
  "stream": false,
  "host": "http://localhost:11434/api/generate", <-- adjust based on your Ollama API settings, e.g. http://localhost:11434/api/chat
  "model": "mistral-small" <-- any Ollama model available on your local machine
}
```
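
Note that `/api/generate` differs from `/api/chat` in taking a single `prompt` string rather than a `messages` list. A minimal sketch exercising the config above, again not part of burpference:

```python
# Minimal sketch: verify the Ollama /api/generate endpoint is serving.
import requests

payload = {
    "model": "mistral-small",  # any model pulled to your local machine
    "format": "json",
    "stream": False,
    "prompt": "Return a JSON object with a single key 'greeting'.",
}

resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["response"])
```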

## Anthropic Inference

Example Anthropic `/messages` inference with `claude-3-5-sonnet-20241022`:

```json
{
  "api_type": "anthropic",
  "headers": {
    "x-api-key": "{$ANTHROPIC_API_KEY}", <-- replace with your API key in the local config file
    "Content-Type": "application/json",
    "anthropic-version": "2023-06-01"
  },
  "max_tokens": 1020, <-- adjust based on your required usage
  "host": "https://api.anthropic.com/v1/messages",
  "model": "claude-3-5-sonnet-20241022" <-- adjust based on your required usage
}
```
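
To confirm the key and headers in your local config are valid before wiring them into burpference, a standalone request along these lines should succeed. A minimal sketch, not part of burpference, reading the key from the environment (the caveat about Burp Suite and environment variables applies inside Burp, not to a standalone script):

```python
# Minimal sketch: call the Anthropic /v1/messages API directly.
import os
import requests

headers = {
    "x-api-key": os.environ["ANTHROPIC_API_KEY"],
    "Content-Type": "application/json",
    "anthropic-version": "2023-06-01",
}

payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1020,
    "messages": [{"role": "user", "content": "Reply with one word."}],
}

resp = requests.post("https://api.anthropic.com/v1/messages",
                     json=payload, headers=headers, timeout=60)
resp.raise_for_status()
print(resp.json()["content"][0]["text"])
```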

## OpenAI Inference

Example OpenAI `/chat/completions` inference with `gpt-4o-mini`:

```json
{
  "api_type": "openai",
  "headers": {
    "Authorization": "Bearer {$OPENAI_API_KEY}", <-- replace with your API key in the local config file
    "Content-Type": "application/json"
  },
  "stream": false,
  "host": "https://api.openai.com/v1/chat/completions",
  "model": "gpt-4o-mini", <-- adjust based on your required usage
  "temperature": 0.1 <-- adjust based on your required usage
}
```
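
The equivalent standalone check for the OpenAI config above, again a minimal sketch rather than burpference's own code:

```python
# Minimal sketch: call the OpenAI chat completions API directly.
import os
import requests

headers = {
    "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
    "Content-Type": "application/json",
}

payload = {
    "model": "gpt-4o-mini",
    "temperature": 0.1,
    "stream": False,
    "messages": [{"role": "user", "content": "Reply with one word."}],
}

resp = requests.post("https://api.openai.com/v1/chat/completions",
                     json=payload, headers=headers, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```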

## Model System Prompts

By default, the system prompt sent as pretext to the model is defined here; feel free to edit, tune, and tweak it as you see fit.
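
For chat-style providers, a system prompt is conventionally injected as the first message of the payload. The sketch below is a hypothetical illustration of that pattern, not burpference's actual implementation; `SYSTEM_PROMPT` is a placeholder rather than the prompt shipped in the repo.

```python
# Hypothetical illustration of combining a system prompt with an
# intercepted request before inference; not burpference's own code.
SYSTEM_PROMPT = "You are a security analyst reviewing HTTP traffic."  # placeholder

def build_chat_payload(model, request_text):
    # Chat-style APIs (OpenAI, Ollama /api/chat) take the system prompt
    # as the first entry in the messages list.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request_text},
        ],
    }

print(build_chat_payload("gpt-4o-mini", "GET /login HTTP/1.1 ..."))
```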