This is the official webpage for Quaint (QUality-of-service-Aware Intelligent Network digital Twin).
Quaint is a highly efficient parallel and distributed simulator for QoS-aware DiffServ networks. It is built on ROSS and utilises optimistic parallel discrete event simulation (PDES) to achieve fast execution.
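For readers unfamiliar with optimistic PDES, the toy sketch below (plain Python, not Quaint or ROSS code) illustrates the core idea: a logical process executes events speculatively and, when a "straggler" event arrives with a timestamp in its past, rolls its state back with a reverse handler and re-executes in timestamp order.

```python
class LP:
    """A toy logical process: forward handlers advance state, reverse handlers undo it."""

    def __init__(self, name):
        self.name = name
        self.now = 0.0
        self.count = 0            # model state: e.g. number of packets seen
        self.processed = []       # event history kept so that rollback is possible

    def forward(self, ev_time):
        # forward event handler: update state and advance local virtual time
        self.count += 1
        self.now = ev_time
        self.processed.append(ev_time)

    def rollback_to(self, t):
        # undo, in reverse order, every event processed at or after time t
        while self.processed and self.processed[-1] >= t:
            self.processed.pop()
            self.count -= 1       # reverse handler undoes the forward handler
        self.now = self.processed[-1] if self.processed else 0.0


lp = LP("router0")
for t in [1.0, 2.0, 5.0]:         # speculative execution of whatever has arrived so far
    lp.forward(t)

straggler = 3.0                   # an event arriving "in the past" of this LP
if straggler < lp.now:
    lp.rollback_to(straggler)     # roll back the speculative work beyond t = 3.0 ...
lp.forward(straggler)             # ... then re-execute in timestamp order
lp.forward(5.0)
print(lp.count, lp.now)           # 4 events processed in total, virtual time 5.0
```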
A paper describing the design and performance characterisation of Quaint has been submitted to Euro-Par 2024.
This page contains all the source code, input data and results of the paper's experiments. The accuracy and efficiency of Quaint have been evaluated against OMNeT++ with the INET model library. In particular:
- The experiments related to Quaint can be found in the repo ROSS-Network-Model
- The experiments related to OMNeT++/INET can be found in the repo omnet-bench
- The scripts for data cleaning, visualisation, etc. are in the repo experiment-utils. In particular, the plotting scripts for all figures can be found in src/plots.ipynb on the experiments-metis branch.
The network topology used in the evaluation is shown in the figure below. There are 5,237 nodes (routers/switches) and 6,067 links: 5,149 access nodes, 70 mixed nodes, and 18 kernel nodes. The black dots are “access” routers, the blue dots are “mixed” routers, and the red dots are “kernel” routers. An access router usually has 1 to 3 ports with 25 Gbps bandwidth, while a mixed or kernel router can have over 10 ports with 10-100 Gbps bandwidth.
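As a quick sanity check on these figures (all numbers are taken from the text above, not measured from the topology input itself), the per-role node counts sum to the total, and the link count implies an average node degree of roughly 2.3:

```python
# Role breakdown and per-role bandwidth ranges as stated above (assumed, not parsed
# from the actual topology file).
roles = {
    "access": {"count": 5149, "bandwidth_gbps": (25, 25)},    # usually 1-3 ports each
    "mixed":  {"count": 70,   "bandwidth_gbps": (10, 100)},   # can exceed 10 ports
    "kernel": {"count": 18,   "bandwidth_gbps": (10, 100)},   # can exceed 10 ports
}
n_nodes, n_links = 5237, 6067

assert sum(r["count"] for r in roles.values()) == n_nodes       # 5149 + 70 + 18 = 5237
print("average node degree:", round(2 * n_links / n_nodes, 2))  # ~2.32
```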
The QoS configuration used in the evaluation is as follows:

- Propagation delay: 4 ms.
- Packet size: 1400 B.
- One bit of a packet consumes one token.
- srTCM meter:
  - CBS = 1400 * 50 * 8 tokens
  - EBS = 1400 * 500 * 8 tokens
  - CIR = egress_port_bandwidth / number_of_priority_levels
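The sketch below is an illustrative, colour-blind single-rate three-colour marker built from the CBS/EBS/CIR values above (one token per bit, 1400 B packets, and three priority levels as listed further down). It is a sketch under those assumptions, not Quaint's actual meter implementation.

```python
PACKET_TOKENS = 1400 * 8                  # one token per bit of a 1400 B packet
CBS = 1400 * 50 * 8                       # committed burst size, in tokens
EBS = 1400 * 500 * 8                      # excess burst size, in tokens
PRIORITY_LEVELS = 3                       # priorities 0-2, as in the queue list below


class SrTcmMeter:
    """Colour-blind srTCM: committed and excess buckets, both refilled from the CIR."""

    def __init__(self, egress_port_bandwidth_bps):
        # CIR = egress port bandwidth divided by the number of priority levels
        self.cir = egress_port_bandwidth_bps / PRIORITY_LEVELS
        self.tc, self.te = CBS, EBS        # start with full buckets
        self.last = 0.0

    def mark(self, now):
        # refill the committed bucket first; overflow spills into the excess bucket
        candidate = self.tc + (now - self.last) * self.cir
        self.last = now
        overflow = max(0.0, candidate - CBS)
        self.tc = min(CBS, candidate)
        self.te = min(EBS, self.te + overflow)
        # colour the packet by which bucket can pay for it
        if self.tc >= PACKET_TOKENS:
            self.tc -= PACKET_TOKENS
            return "green"
        if self.te >= PACKET_TOKENS:
            self.te -= PACKET_TOKENS
            return "yellow"
        return "red"


meter = SrTcmMeter(egress_port_bandwidth_bps=25e9)   # e.g. a 25 Gbps access port
print(meter.mark(0.0))                               # "green" while the committed burst lasts
```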
- Shaper:
  - Token bucket capacity: 2 * 1400 * 8 tokens
  - Token generation rate: equal to the egress port bandwidth
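A minimal sketch of such a shaper (again an illustration of the stated parameters, not Quaint's code): the bucket holds at most two packets' worth of tokens and is refilled at the egress line rate, so a packet's departure may be delayed until enough tokens have accumulated.

```python
PACKET_BITS = 1400 * 8            # 11200 tokens per 1400 B packet
BUCKET_CAP  = 2 * PACKET_BITS     # 22400 tokens, i.e. a burst of two packets


class Shaper:
    def __init__(self, rate_bps):
        self.rate = rate_bps      # token generation rate = egress port bandwidth
        self.tokens = BUCKET_CAP
        self.last = 0.0

    def earliest_departure(self, now):
        """Return the earliest time a 1400 B packet may leave the port."""
        # accumulate tokens since the last update, capped at the bucket capacity
        self.tokens = min(BUCKET_CAP, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= PACKET_BITS:
            self.tokens -= PACKET_BITS
            return now
        wait = (PACKET_BITS - self.tokens) / self.rate
        self.tokens = 0.0
        self.last = now + wait
        return now + wait


shaper = Shaper(rate_bps=25e9)           # e.g. a 25 Gbps access port
print(shaper.earliest_departure(0.0))    # 0.0: first packet uses the initial burst
print(shaper.earliest_departure(0.0))    # 0.0: second packet still fits the burst
print(shaper.earliest_departure(0.0))    # delayed by 11200 / 25e9 ~ 0.448 microseconds
```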
- Queue sizes:
  - Priority 0: 5 MB (~3571 packets)
  - Priority 1: 20 MB (~14285 packets)
  - Priority 2: 20 MB (~14285 packets)
- RED dropper:
  - YELLOW_DROPPER_MAXTH = capacity_of_its_attached_queue_in_bytes / 1400 * 0.6
  - GREEN_DROPPER_MAXTH = capacity_of_its_attached_queue_in_bytes / 1400 * 0.9
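For reference, the derived values quoted above can be reproduced as follows (a short sketch assuming decimal megabytes, which matches the quoted ~3571 and ~14285 packet counts; the 1400 B packet is the unit throughout):

```python
PACKET_BYTES = 1400
MB = 1_000_000                                      # decimal megabytes

queue_bytes = {0: 5 * MB, 1: 20 * MB, 2: 20 * MB}   # per-priority queue capacities

for prio, cap in queue_bytes.items():
    packets      = cap // PACKET_BYTES              # 5 MB -> 3571, 20 MB -> 14285 packets
    yellow_maxth = cap / PACKET_BYTES * 0.6         # YELLOW_DROPPER_MAXTH
    green_maxth  = cap / PACKET_BYTES * 0.9         # GREEN_DROPPER_MAXTH
    print(f"priority {prio}: {packets} packets, "
          f"yellow maxth ~{yellow_maxth:.0f}, green maxth ~{green_maxth:.0f}")
```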
All experiments in the paper were run on a cluster of four Dell PowerEdge T640 servers. Each server has 40 CPU cores (two Intel Xeon Gold 6230 CPUs at 2.1 GHz), 256 GB RAM, and a 1.92 TB SSD. All servers are connected to a Mellanox SX6036 switch via 56 Gbps InfiniBand FDR. Each server runs Ubuntu 20.04.6 LTS, OpenMPI 4.1.6 and UCX 1.15.0. The MPI processes were distributed over the servers as follows:
| No. of processes | No. of servers | No. of processes per server |
|---|---|---|
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 4 | 1 | 4 |
| 8 | 1 | 8 |
| 16 | 1 | 16 |
| 20 | 1 | 20 |
| 29 | 2 | 14 and 15 |
| 32 | 2 | 16 |
| 56 | 4 | 14 |
| 57 | 4 | 14, 14, 14 and 15 |
| 60 | 4 | 15 |
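The per-server counts in the table follow an as-even-as-possible split of the MPI ranks over the servers; one way to reproduce them (an assumed placement rule for illustration, not the actual launcher configuration) is:

```python
def placement(n_processes, n_servers):
    # split n_processes as evenly as possible, giving any remainder to the last server(s)
    base, rem = divmod(n_processes, n_servers)
    return [base + (1 if i >= n_servers - rem else 0) for i in range(n_servers)]

for n_proc, n_srv in [(29, 2), (57, 4), (60, 4)]:
    print(n_proc, n_srv, placement(n_proc, n_srv))
# 29 2 [14, 15]
# 57 4 [14, 14, 14, 15]
# 60 4 [15, 15, 15, 15]
```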
This research is supported by ZTE Communication Technology Service Co., Ltd., China.