The tests shown below measure the latency from the point at which the appender writes a packet of 64, 128, 1024, or 1500 bytes to the point at which that data is received by the tailer.
The data includes an embedded timestamp, which is extracted by the tailer and used to determine the end-to-end latency.
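To illustrate the embedded-timestamp approach, the sketch below writes a nanosecond timestamp at the start of each message on the appender side and recovers it on the tailer side to derive the end-to-end latency. It uses the public Chronicle Queue appender/tailer API, but it is a simplified sketch rather than the benchmark code used to produce the plots; the class name, queue path, and payload size are illustrative.

```java
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;

public class EmbeddedTimestampSketch {
    public static void main(String[] args) {
        // Queue path and payload size are illustrative; the real tests use 64, 128, 1024 and 1500 bytes.
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("latency-sketch").build()) {
            ExcerptAppender appender = queue.acquireAppender();
            ExcerptTailer tailer = queue.createTailer();

            byte[] padding = new byte[1024 - Long.BYTES];

            // Appender side: embed a nanosecond timestamp at the start of the packet.
            appender.writeBytes(b -> {
                b.writeLong(System.nanoTime());
                b.write(padding);
            });

            // Tailer side: extract the timestamp and derive the end-to-end latency.
            tailer.readBytes(b -> {
                long sentNanos = b.readLong();
                long latencyNanos = System.nanoTime() - sentNanos;
                System.out.println("end-to-end latency: " + latencyNanos + " ns");
            });
        }
    }
}
```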
Note that the C++ plot uses a linear scale for the latency axis, while the Java plot uses a logarithmic scale.
Chronicle Queue Zero (abbreviated to Queue Zero) is a high-performance queue that trades some of Chronicle Queue's functionality for significantly higher performance, while remaining applicable across a wide range of applications. Plots for the equivalent tests using Queue Zero are shown below; they demonstrate the substantial performance benefit of Queue Zero over Queue (around 10x), as well as better scaling at high percentiles.
Further details of Queue Zero are available by contacting [email protected]
To make the performance comparison between Chronicle Queue (abbreviated to Queue) and Queue Zero even clearer, the plots below show the end-to-end latencies for sending a 1024-byte message using Queue and Queue Zero, for C++ and Java respectively. These plots again show both the much lower latency achieved with Queue Zero and the reduction of outliers at higher percentiles.
- These results were generated using `net.openhft.chronicle.queue.bench.QueueMultiThreadedJLBHBenchmark` for Java, and `c++/test/queue/bench/QueueMultiThreadedJLBHBenchmark.cpp` for C++.
- The platform used to generate the results is a 24-core E5-2650 v4 running at 2.2GHz (turbo to 2.9GHz).
- The tests are run across two threads (one for the appender, one for the tailer), with each thread pinned to a separate core; a simplified sketch of this setup is shown after the list.
- The tests show the write-to-read latency, from the point just before the data is written to the queue, to the point the data is available to read from the tailer.
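The benchmark classes listed above live in the Chronicle Queue source tree; the sketch below is not that code, but shows the same shape of test in Java: an appender thread and a tailer thread, each pinned to its own core using the Java-Thread-Affinity library, with the tailer busy-spinning on the queue and deriving the write-to-read latency from the embedded timestamp. The class name, queue path, message size, and iteration count are illustrative, and the single "worst latency" figure stands in for the percentile histograms JLBH produces.

```java
import net.openhft.affinity.AffinityLock;
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;

public class PinnedWriteToReadSketch {
    static final int MESSAGES = 100_000; // illustrative; real runs are much longer

    public static void main(String[] args) throws InterruptedException {
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("pinned-sketch").build()) {

            // Appender thread, pinned to its own core.
            Thread writer = new Thread(() -> {
                AffinityLock lock = AffinityLock.acquireLock();
                try {
                    ExcerptAppender appender = queue.acquireAppender();
                    for (int i = 0; i < MESSAGES; i++)
                        appender.writeBytes(b -> b.writeLong(System.nanoTime()));
                } finally {
                    lock.release();
                }
            }, "appender");

            // Tailer thread, pinned to a different core; busy-spins until each message is readable.
            Thread reader = new Thread(() -> {
                AffinityLock lock = AffinityLock.acquireLock();
                try {
                    ExcerptTailer tailer = queue.createTailer();
                    long[] worst = {0};
                    int read = 0;
                    while (read < MESSAGES) {
                        if (tailer.readBytes(b -> {
                            long latency = System.nanoTime() - b.readLong();
                            if (latency > worst[0]) worst[0] = latency;
                        }))
                            read++;
                    }
                    System.out.println("worst write-to-read latency: " + worst[0] + " ns");
                } finally {
                    lock.release();
                }
            }, "tailer");

            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }
}
```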