---
title: Quickstart
description: How to deploy Onyx on your local machine
icon: bolt
---
You will need the following installed:

- Git
- Docker with Compose (Docker version >= 1.13.0)
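As a quick sanity check, you can confirm both prerequisites are available on your `PATH` before continuing (this loop is just a convenience, not part of the Onyx setup itself):

```bash
# Report whether each required tool is on PATH
for cmd in git docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: missing"
  fi
done
```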
Note: This is just one way to run Onyx. Onyx can also be run on Kubernetes; Kubernetes manifests and Helm charts are provided in the `deployment` directory.
1. Clone the [Onyx](https://github.com/onyx-dot-app/onyx) repo:
```bash
git clone https://github.com/onyx-dot-app/onyx.git
```
2. Navigate to **onyx/deployment/docker_compose**
```bash
cd onyx/deployment/docker_compose
```
3. (Optional) [configure Onyx](/configuration_guide)
4. Bring up your docker engine and run:
- To pull images from DockerHub and run Onyx:
```bash
docker compose -f docker-compose.dev.yml -p onyx-stack up -d --pull always --force-recreate
```
- Alternatively, to build the containers from source and start Onyx, run:
```bash
docker compose -f docker-compose.dev.yml -p onyx-stack up -d --build --force-recreate
```
- This may take 15+ minutes depending on your internet speed.
- Additionally, once the images have been pulled / built, the initial startup of the `api_server` may take some time.
If you see `This site can’t be reached` in your browser despite all containers being up and running,
check the `api_server` logs and make sure you see `Application startup complete`.
- If you see `Killed` in the logs, you may need to increase the amount of memory given to Docker.
For recommendations, check the system requirements [here](https://docs.onyx.app/resourcing).
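To inspect container state and watch the `api_server` logs mentioned above, the standard Docker Compose subcommands can be used (the service name `api_server` is taken from the logs discussion above; adjust it if your compose file names the service differently):

```bash
# List the containers in the onyx-stack project and their status
docker compose -f docker-compose.dev.yml -p onyx-stack ps

# Follow the api_server logs until "Application startup complete" appears
docker compose -f docker-compose.dev.yml -p onyx-stack logs -f api_server
```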
5. Onyx will now be running on http://localhost:3000.
Note: On the initial visit, Onyx will prompt for a GenAI API key. For example, you can get an OpenAI API key at [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).

Onyx relies on generative AI models for parts of its functionality. You can choose any LLM provider from the admin panel, or even self-host a local LLM for a truly airgapped deployment.
6. To shut down Onyx, run:
```bash
docker compose -f docker-compose.dev.yml -p onyx-stack down
```