Dria Compute Node serves computation results within the Dria Knowledge Network.
Compute nodes can technically perform any arbitrary task, from computing the square root of a given number, to generating LLM outputs from a given prompt, to validating an LLM's output against knowledge available on the web accessed via tools.
- **Heartbeats**: Every few seconds, a heartbeat ping is published into the network, and every compute node responds with a digitally signed pong message to indicate that it is alive, along with additional information such as which models it is running and how many tasks it has completed so far.
- **Workflows**: Each task is given in the form of a workflow. Every workflow defines an agentic behavior for the chosen LLM, all captured in a single JSON file, and can represent anything from a simple LLM generation to iterative web searching and reasoning.
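The heartbeat pong described above can be sketched as a small Rust struct. Note that the field names, JSON layout, and model names here are purely illustrative assumptions, not the node's actual wire format, and the signature is a plain placeholder string rather than a real digital signature:

```rust
// Hypothetical sketch of a heartbeat "pong" payload. Field names and the
// JSON layout are illustrative, not the node's real wire format.
#[derive(Debug)]
struct HeartbeatPong {
    /// Models this node is serving (names are made-up examples).
    models: Vec<String>,
    /// Number of tasks this node has completed so far.
    completed_tasks: u64,
    /// Placeholder for the node's digital signature over the payload.
    signature: String,
}

impl HeartbeatPong {
    /// Serialize to a JSON string by hand, to keep the sketch dependency-free.
    fn to_json(&self) -> String {
        let models = self
            .models
            .iter()
            .map(|m| format!("\"{}\"", m))
            .collect::<Vec<_>>()
            .join(",");
        format!(
            "{{\"models\":[{}],\"completed_tasks\":{},\"signature\":\"{}\"}}",
            models, self.completed_tasks, self.signature
        )
    }
}

fn main() {
    let pong = HeartbeatPong {
        models: vec!["llama3.1".into(), "gpt-4o-mini".into()],
        completed_tasks: 42,
        signature: "0xplaceholder".into(),
    };
    println!("{}", pong.to_json());
}
```

In the real network this message would be signed with the node's private key so that other peers can verify its origin.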
Refer to the node guide to quickly get started and run your own node!
For production images:

- **Versioned**: With each release, a versioned image is deployed on Docker Hub with the version tag `:vX.X.X`.
- **Latest**: The latest production image is always under the `:latest` tag.
For development images:

- **Master**: On each push to the `master` branch, a new image is created with the tag `master-<commit>-<timestamp>`.
- **Unstable**: The latest development image is always under the `:unstable` tag.
You can see the list of deployed images on Docker Hub.
If you would like to add a feature (with respect to its respective issue) or contribute a bug fix, feel free to fork the repository and create a PR!
If you would like to run the node from source (which is really handy during development), you can use our shorthand scripts within the Makefile. You can see the available commands with:
```sh
make help
```
You will also need OpenSSL installed; see the shorthand commands here. With Ollama running elsewhere (if you are using it) or with an OpenAI API key provided, you can run the compute node with:
```sh
make run   # info-level logs
make debug # debug-level logs
```
If you have a valid `.env` file, you can run the latest Docker image via compose as well:
```sh
docker compose up

# Ollama without any GPUs
docker compose --profile=ollama-cpu up

# Ollama for NVIDIA GPUs
docker compose --profile=ollama-cuda up

# Ollama for AMD GPUs
docker compose --profile=ollama-rocm up
```
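For reference, a minimal `.env` might look like the fragment below. The variable names are illustrative assumptions (only an OpenAI API key is mentioned above); consult the node guide for the authoritative list:

```
# Illustrative sketch only -- see the node guide for the actual variables.
OPENAI_API_KEY=<your-openai-api-key>
```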
You can run the tests as follows:
```sh
make test
```
We also have some benchmarking and profiling scripts, see node performance for more details.
You can view the inline documentation with:
```sh
make docs
```
Lint and format with:
```sh
make lint   # clippy
make format # rustfmt
```
This project is licensed under the Apache License 2.0.