title | description | icon
---|---|---
Configuring Onyx | How to customize your deployment environment. | screwdriver-wrench
All of the global configuration options that are not built into the UI are set via environment variables. This page contains an exhaustive list of all the options.
There are defaults set in the code, so changing/setting these values is not required to use Onyx. A few notable settings that are changed more frequently, however, are the following:
- `AUTH_TYPE` (default value is `disabled`)
- `MULTILINGUAL_QUERY_EXPANSION` (you can provide a comma-separated list of languages for query rephrasing, such as `English,French`)
- `LOG_LEVEL` (default is `info`)
- `WEB_DOMAIN` (your full URL in production, including the protocol, e.g. `https://www.onyx.app`)
There are several ways to configure environment variables for the containers. For Docker Compose, any of the standard approaches for passing environment variables to a container will work. However, the preferred approach for Onyx is to use a `.env` file. To do this, create a file called `.env` at `onyx/deployment/docker_compose/.env` and populate it with the values you want to override:
```bash
# Configures basic email/password based login
AUTH_TYPE="basic"

# Rephrasing the query into different languages to improve search recall
MULTILINGUAL_QUERY_EXPANSION="English,Spanish,German"

# Set a cheaper/faster LLM for the flows that are easier (such as translating the query etc.)
FAST_GEN_AI_MODEL_VERSION="gpt-3.5-turbo"

# Setting more verbose logging
LOG_LEVEL="debug"
LOG_ALL_MODEL_INTERACTIONS="true"
```
For Kubernetes, the deployment YAML files include an Environment ConfigMap. Simply update the values in that ConfigMap.
Below is an extensive list of the currently supported environment variables within Onyx, grouped into several classes. You can set these in your `.env` file.
These variables control authentication and user management in Onyx.
Controls the authentication method used by Onyx.
- `disabled`: No authentication is required.
- `google_oauth`: Users can log in using their Google accounts.
- `basic`: Standard username/password authentication.
- `oidc`: OpenID Connect, available in the enterprise edition.
- `saml`: Security Assertion Markup Language, available in the enterprise edition.
Defines the duration of a user's session in seconds. Default is 24 hours (86400 seconds).
A strong, unique string used for encryption purposes. Keep this value secret.
Comma-separated list of allowed email domains for authentication. Leave empty to allow all domains.
Client ID for Google OAuth authentication, obtained from Google Cloud Console.
Client Secret for Google OAuth authentication, obtained from Google Cloud Console. Keep this value secret.
When set to `true`, users must verify their email before accessing Onyx.
Hostname of the SMTP server for sending verification emails. Default is `smtp.gmail.com`.
Port used for SMTP communication. Common values are `587` (TLS) or `465` (SSL).
Username for SMTP authentication, often an email address used to send verification emails.
Password for SMTP authentication. Keep this value secret.
Email address used as the sender for verification emails.
Set to `true` to enable the forgot-password feature. Only enable this if you have configured the SMTP settings above (required for email functionality).
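A minimal sketch of an email-verification setup is shown below. The variable names (`REQUIRE_EMAIL_VERIFICATION`, `SMTP_SERVER`, `SMTP_PORT`, `SMTP_USER`, `SMTP_PASS`) are assumptions based on the descriptions above; verify the exact names against your deployment before using them.

```bash
# Assumed variable names -- check your deployment's reference before using
REQUIRE_EMAIL_VERIFICATION="true"
SMTP_SERVER="smtp.gmail.com"
SMTP_PORT="587"                        # TLS
SMTP_USER="notifications@example.com"
SMTP_PASS="<app-password>"             # keep this value secret
```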
These variables configure the generative AI capabilities of Onyx. They are covered in more depth in the generative AI configuration docs.
Specifies the provider of the generative AI model (e.g., `openai`, `anthropic`, `huggingface`).
Defines the version of the generative AI model to use (e.g., `gpt-4` for OpenAI).
Specifies a faster (usually smaller) model version for certain tasks.
API key for accessing the generative AI service. Keep this value secret.
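Putting these together, a typical LLM configuration might look like the sketch below. `FAST_GEN_AI_MODEL_VERSION` appears earlier on this page; the other variable names (`GEN_AI_MODEL_PROVIDER`, `GEN_AI_MODEL_VERSION`, `GEN_AI_API_KEY`) are assumed here for illustration.

```bash
# Assumed variable names for the primary LLM configuration
GEN_AI_MODEL_PROVIDER="openai"
GEN_AI_MODEL_VERSION="gpt-4"
FAST_GEN_AI_MODEL_VERSION="gpt-3.5-turbo"   # cheaper/faster model for simpler flows
GEN_AI_API_KEY="<your-api-key>"             # keep this value secret
```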
Specifies the type of LLM provider being used (e.g., `openai`, `anthropic`, `azure`).
Maximum number of tokens to generate in AI responses.
Timeout for question-answering operations in seconds.
Maximum number of document chunks fed into a single chat session.
Set to `true` to disable LLM-based filter extraction from queries.
Set to `true` to disable LLM-based filtering of document chunks.
Set to `true` to disable LLM-based selection of search method.
Set to `true` to disable LLM-based query rephrasing.
Set to `true` to disable all generative AI functionality.
Set to `true` to disable streaming responses when using LiteLLM.
JSON-formatted string of key-value pairs for additional headers in LiteLLM API requests.
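Because this value must be a JSON object serialized as a single string, quoting matters in a `.env` file. A hedged sketch follows; the variable name `LITELLM_EXTRA_HEADERS` is assumed, and the header names are placeholders.

```bash
# Assumed variable name; the value is a JSON object encoded as one string
LITELLM_EXTRA_HEADERS='{"X-Api-Version": "2024-01-01", "X-Request-Source": "onyx"}'
```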
Set to `true` to enable the global token budget system.
These variables are used for AWS Bedrock integration.
AWS access key ID for Bedrock access, obtained from AWS IAM.
AWS secret access key for Bedrock access, obtained from AWS IAM. Keep this value secret.
AWS region where Bedrock is deployed (e.g., `us-west-2`).
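A sketch of a Bedrock credential block is shown below. `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are the standard AWS credential names; the region variable name (`AWS_REGION_NAME`) is an assumption for illustration.

```bash
# Standard AWS credential variables (obtained from AWS IAM)
AWS_ACCESS_KEY_ID="AKIA..."
AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"   # keep this value secret

# Assumed name for the Bedrock region setting
AWS_REGION_NAME="us-west-2"
```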
These variables control various aspects of query processing and search behavior.
Controls the recency bias in search results. Higher values increase preference for newer documents.
Balances keyword vs. vector search in hybrid search. Range 0-1 (0 for pure keyword, 1 for pure vector search).
Set to `true` to enable query editing for keyword searches.
Set to `true` to enable multilingual query expansion.
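For instance, to weight hybrid search evenly between keyword and vector scores and expand queries into two languages, the settings might look like the sketch below. `MULTILINGUAL_QUERY_EXPANSION` is named earlier on this page; the hybrid-weight variable name (`HYBRID_ALPHA`) is an assumption.

```bash
# Assumed variable name; 0 = pure keyword search, 1 = pure vector search
HYBRID_ALPHA="0.5"

# Named earlier on this page
MULTILINGUAL_QUERY_EXPANSION="English,Spanish"
```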
Custom prompt text to override the default prompt used for question-answering.
Configuration for external services used by Onyx.
Hostname or IP address of the Postgres server. Default is `relational_db`.
Hostname or IP address of the Vespa server. Default is `index`.
Fully qualified domain name used for the Onyx web interface. (e.g. https://www.onyx.com)
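If you run Postgres and Vespa outside of the provided compose file, you would point Onyx at them with something like the following sketch. `WEB_DOMAIN` is named earlier on this page; the host variable names (`POSTGRES_HOST`, `VESPA_HOST`) are assumptions based on the descriptions above.

```bash
# Assumed variable names for external service hosts
POSTGRES_HOST="db.internal.example.com"
VESPA_HOST="vespa.internal.example.com"

# Named earlier on this page; full URL including the protocol
WEB_DOMAIN="https://onyx.example.com"
```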
Advanced settings for NLP models. Modify with caution.
Size of batch used when indexing documents. Overrides the default batch size for indexing operations.
Size of batch used when embedding documents during indexing and search operations. Overrides the default batch size for embedding processes.
Name or path of the encoder model used for document encoding.
Dimension of document embeddings, typically matching the chosen encoder model's output dimension.
Set to `true` to enable normalization of embeddings.
Text prepended to queries in asymmetric semantic search.
Set to `true` to enable reranking in real-time search flow.
Set to `true` to enable reranking in asynchronous search flow.
Hostname or IP address of the model server. Default is `inference_model_server`.
Port on which the model server is listening.
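A short sketch for pointing Onyx at a separately hosted model server follows; the variable names (`MODEL_SERVER_HOST`, `MODEL_SERVER_PORT`) are assumed from the descriptions above, and the port value is a placeholder.

```bash
# Assumed variable names for the inference model server location
MODEL_SERVER_HOST="inference_model_server"
MODEL_SERVER_PORT="9000"
```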
Various other configuration options.
Set to `true` to opt out of telemetry. Telemetry helps improve Onyx; no sensitive data is collected.
Sets the logging verbosity. Possible values: `debug`, `info`, `warning`, `error`, `critical`.
Set to `true` to enable logging of all prompts sent to the LLM.
Set to `true` to enable additional logging of Vespa query performance.
Set to `true` to enable logging of endpoint latency information.
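When debugging, these logging options are often enabled together. `LOG_LEVEL` and `LOG_ALL_MODEL_INTERACTIONS` are named earlier on this page; the Vespa-timing and endpoint-latency variable names (`LOG_VESPA_TIMING_INFORMATION`, `LOG_ENDPOINT_LATENCY`) are assumptions based on the descriptions above.

```bash
# Named earlier on this page
LOG_LEVEL="debug"
LOG_ALL_MODEL_INTERACTIONS="true"

# Assumed variable names for the additional timing logs
LOG_VESPA_TIMING_INFORMATION="true"
LOG_ENDPOINT_LATENCY="true"
```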