This repo provides an implementation of Bullshark. The codebase has been designed to be small, efficient, and easy to benchmark and modify. It has not been designed to run in production but uses real cryptography (dalek), networking (tokio), and storage (rocksdb).
The core protocols are written in Rust, but all benchmarking scripts are written in Python and run with Fabric. To deploy and benchmark a testbed of 4 nodes on your local machine, clone the repo and install the Python dependencies:
$ git clone https://github.com/asonnino/narwhal.git
$ cd narwhal/benchmark
$ pip install -r requirements.txt
You also need to install Clang (required by rocksdb) and tmux (which runs all nodes and clients in the background).
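For example, on Debian/Ubuntu (an assumption; use your platform's package manager otherwise):
$ sudo apt-get install clang tmux
Finally, run a local benchmark using Fabric: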
$ fab local
This command may take a long time the first time you run it (compiling Rust code in release mode may be slow), and you can customize a number of benchmark parameters in fabfile.py (see the sketch after the summary below). When the benchmark terminates, it displays a summary of the execution similar to the one below.
-----------------------------------------
SUMMARY:
-----------------------------------------
+ CONFIG:
Faults: 0 node(s)
Committee size: 4 node(s)
Worker(s) per node: 1 worker(s)
Collocate primary and workers: True
Input rate: 50,000 tx/s
Transaction size: 512 B
Execution time: 19 s
Header size: 1,000 B
Max header delay: 1,000 ms
GC depth: 50 round(s)
Sync retry delay: 10,000 ms
Sync retry nodes: 3 node(s)
Batch size: 500,000 B
Max batch delay: 100 ms
+ RESULTS:
Consensus TPS: 46,478 tx/s
Consensus BPS: 23,796,531 B/s
Consensus latency: 464 ms
End-to-end TPS: 46,149 tx/s
End-to-end BPS: 23,628,541 B/s
End-to-end latency: 557 ms
-----------------------------------------
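As a quick sanity check, the byte throughput is consistent with TPS multiplied by the transaction size (46,478 tx/s × 512 B ≈ 23.8 MB/s). The values under CONFIG correspond to the benchmark parameters set in fabfile.py; the sketch below is illustrative only, with key names inferred from the summary above, so verify them against the actual file before editing.

# Illustrative sketch of the benchmark parameters (names inferred from
# the CONFIG section above; check fabfile.py for the exact keys).
bench_params = {
    'faults': 0,         # crashed node(s)
    'nodes': 4,          # committee size
    'workers': 1,        # worker(s) per node
    'collocate': True,   # run primary and workers on the same machine
    'rate': 50_000,      # input rate (tx/s)
    'tx_size': 512,      # transaction size (B)
    'duration': 20,      # benchmark duration (s); the run above executed for 19 s
}
node_params = {
    'header_size': 1_000,        # B
    'max_header_delay': 1_000,   # ms
    'gc_depth': 50,              # round(s)
    'sync_retry_delay': 10_000,  # ms
    'sync_retry_nodes': 3,       # node(s)
    'batch_size': 500_000,       # B
    'max_batch_delay': 100,      # ms
}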
The next step is to read the Bullshark paper. It is then recommended to have a look at the README files of the worker and primary crates. An additional resource for understanding the Bullshark consensus protocol is the Narwhal and Tusk paper, which describes the main systems aspects behind the protocol.
The README file of the benchmark folder explains how to benchmark the codebase and interpret the benchmarks' results. It also provides a step-by-step tutorial to run benchmarks on Amazon Web Services (AWS) across multiple data centers (WAN).
This software is licensed under Apache 2.0.