Measuring enqueue and dequeue queue depth on the FABRIC testbed platform.
We use servers at the WASH and UCSD sites as server1 and server2. Our server at DALL functions as a bmv2 programmable switch with two Basic NICs. These three sites are connected by physical links as shown in the figure below:
Limiting the rate with:
sudo tc qdisc add dev enp7s0 root netem rate 1Gbit delay 100ms
throttles the traffic, but we cannot see the queue build up.
In order to see the queue build up, limit the rate with:
sudo tc qdisc add dev enp7s0 root handle 1:0 netem delay 1ms
sudo tc qdisc add dev enp7s0 parent 1:1 handle 10: tbf rate 1gbit buffer 160000 limit 300000
Then we started to see the queue build up, as shown in the figure below:
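As a side check that is not part of the original setup, we can also confirm that the rate limiter itself is queuing packets by polling the qdisc statistics on the host. The sketch below is a minimal Python example under our own assumptions (interface enp7s0, and the backlog counters printed by tc -s qdisc show); it is separate from the enq_qdepth telemetry reported by the switch.

#!/usr/bin/env python3
# Poll the qdisc statistics and print the backlog so the queue
# build-up on the rate-limited interface can be watched over time.
import re
import subprocess
import time

IFACE = "enp7s0"                      # rate-limited interface (assumption)
BACKLOG_RE = re.compile(r"backlog (\d+)b (\d+)p")

while True:
    out = subprocess.run(
        ["tc", "-s", "qdisc", "show", "dev", IFACE],
        capture_output=True, text=True, check=True,
    ).stdout
    # One (bytes, packets) pair per qdisc: the netem root and the tbf child.
    backlogs = [(int(b), int(p)) for b, p in BACKLOG_RE.findall(out)]
    print(f"{time.time():.2f} {backlogs}")
    time.sleep(0.1)                   # sample every 100 ms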
In our experiment we create congestion and analyze the enq_qdepth and deq_qdepth while the switch is congested.
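To create the congestion, any traffic generator that exceeds the 1 Gbit/s limit on the switch egress will do. The sketch below is a hedged example, not necessarily the tool used in the experiment: it assumes iperf3 is installed, that an iperf3 server is already running on server2 (iperf3 -s), and that server2 is reachable from server1 at the hypothetical address 10.0.0.2 through the DALL switch.

#!/usr/bin/env python3
# Launch four parallel TCP streams for 30 seconds from server1 towards
# server2 so that the offered load exceeds the 1 Gbit/s tbf rate and a
# queue builds up in the switch.
import subprocess

SERVER2 = "10.0.0.2"   # hypothetical address of server2 (UCSD)

subprocess.run(["iperf3", "-c", SERVER2, "-t", "30", "-P", "4"], check=True)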
Every 0.05 seconds we send a probing packet. Each time the switch receives a probing packet, it copies the current sum and packet_count register values into the packet header and then resets both registers, as shown in the figure below.
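A minimal sketch of this probing loop on the server side is shown below. The telemetry header layout is an assumption on our part (the P4 program defines the authoritative format): we suppose the switch carries the sum of enq_qdepth, the packet count, and the deq_qdepth seen by the probe itself behind a hypothetical EtherType 0x1234.

#!/usr/bin/env python3
# Send a probe every 0.05 s and print the telemetry the switch writes
# back into the probe header (average enq_qdepth and deq_qdepth).
import time
from scapy.all import (AsyncSniffer, Ether, IntField, Packet, bind_layers,
                       sendp)

PROBE_ETHERTYPE = 0x1234    # hypothetical EtherType for probe packets
IFACE = "enp7s0"            # interface facing the switch

class Probe(Packet):
    name = "Probe"
    fields_desc = [
        IntField("qdepth_sum", 0),    # sum of enq_qdepth since last probe
        IntField("packet_count", 0),  # packets counted since last probe
        IntField("deq_qdepth", 0),    # deq_qdepth seen by this probe
    ]

bind_layers(Ether, Probe, type=PROBE_ETHERTYPE)

def handle_reply(pkt):
    # Outgoing probes have packet_count == 0, so only replies are printed.
    p = pkt[Probe]
    if p.packet_count:
        avg_enq = p.qdepth_sum / p.packet_count
        print(f"avg enq_qdepth={avg_enq:.1f}  deq_qdepth={p.deq_qdepth}")

# Sniff replies in the background while probes are sent.
sniffer = AsyncSniffer(iface=IFACE, prn=handle_reply, store=False,
                       lfilter=lambda pkt: Probe in pkt)
sniffer.start()

probe = Ether(type=PROBE_ETHERTYPE) / Probe()
while True:
    sendp(probe, iface=IFACE, verbose=False)
    time.sleep(0.05)        # one probe every 50 ms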
In our experiments on the P4-programmable hardware switch, we measured an average per-packet latency of about 360 ns.
We tried the same experiment on our bmv2 switch.
As shown below, the maximum per-packet latency we observe is 1600 ns, which is quite high; on average, however, we see 500-600 ns of latency over a run that includes two congestion periods.
In order to analyze the discrepancies in our graph, I decided to plot the maximum and the minimum of the enq_qdepth.
The main question raised by the graph is: why do we not observe the 64-packet maximum during the second congestion?
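For completeness, the max/min graph can be reproduced along the lines of the sketch below. It assumes the telemetry has been exported to a hypothetical CSV file enq_qdepth_minmax.csv with one row per probe interval: a timestamp in seconds, the maximum enq_qdepth, and the minimum enq_qdepth observed in that interval.

#!/usr/bin/env python3
# Plot the per-interval maximum and minimum enq_qdepth over time.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("enq_qdepth_minmax.csv",
                 names=["time", "enq_max", "enq_min"])

plt.step(df["time"], df["enq_max"], where="post", label="max enq_qdepth")
plt.step(df["time"], df["enq_min"], where="post", label="min enq_qdepth")
plt.xlabel("time (s)")
plt.ylabel("enq_qdepth (packets)")
plt.legend()
plt.savefig("enq_qdepth_max_min.png")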