Routing_enq_deq_depth

Routing enqueue and dequeue depth on the FABRIC testbed platform.

Architecture

We use servers at the WASH and UCSD sites as server1 and server2. Our server at DALL functions as a bmv2 programmable switch with two Basic NICs. The three sites are connected by physical links as shown in the figure below:

(Figure: physical topology connecting the WASH, DALL, and UCSD sites.)

Limiting the rate to build up the queue

Limiting the rate with:

```
sudo tc qdisc add dev enp7s0 root netem rate 1Gbit delay 100ms
```

throttles the traffic, but we can't see the queue build up.

To see the queue build up, limit the rate instead with:

```
sudo tc qdisc add dev enp7s0 root handle 1:0 netem delay 1ms
sudo tc qdisc add dev enp7s0 parent 1:1 handle 10: tbf rate 1gbit buffer 160000 limit 300000
```
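With this setup, the queue occupancy can be checked directly with `tc -s qdisc show dev enp7s0`, which reports the backlog and drop counters for each qdisc.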

Then we started to see the queue build up as shown below:

(Figure: observed queue build-up after applying the netem + tbf configuration.)

Average enq_qdepth and deq_qdepth for two congestion cases

In our experiment we create congestion and analyze enq_qdepth and deq_qdepth while the switch is congested.

Every 0.05 seconds we send a probing packet. Each time the switch receives a probing packet, it patches the current sum and packet_count values from the registers into the packet header, then resets both registers, as shown in the figure below.

(Figure: a probing packet collecting the sum and packet_count register values from the switch.)
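For illustration, here is a minimal P4_16 / v1model sketch of this read-patch-reset step. The probe header layout, the EtherType, and the register names are assumptions made for the sketch, not the exact code in this repository.

```p4
// Minimal v1model sketch of the probing mechanism (names are illustrative).
#include <core.p4>
#include <v1model.p4>

const bit<16> TYPE_PROBE = 0x1234;   // assumed EtherType for probe packets

header ethernet_t {
    bit<48> dstAddr;
    bit<48> srcAddr;
    bit<16> etherType;
}

header probe_t {
    bit<32> sum;        // accumulated enq_qdepth since the last probe
    bit<32> pkt_count;  // packets seen since the last probe
}

struct headers {
    ethernet_t ethernet;
    probe_t    probe;
}

struct metadata { }

parser MyParser(packet_in packet, out headers hdr, inout metadata meta,
                inout standard_metadata_t standard_metadata) {
    state start {
        packet.extract(hdr.ethernet);
        transition select(hdr.ethernet.etherType) {
            TYPE_PROBE: parse_probe;
            default:    accept;
        }
    }
    state parse_probe {
        packet.extract(hdr.probe);
        transition accept;
    }
}

control MyVerifyChecksum(inout headers hdr, inout metadata meta) { apply { } }

control MyIngress(inout headers hdr, inout metadata meta,
                  inout standard_metadata_t standard_metadata) {
    apply {
        // Forwarding omitted; the real program routes between servers here.
    }
}

control MyEgress(inout headers hdr, inout metadata meta,
                 inout standard_metadata_t standard_metadata) {
    register<bit<32>>(1) qdepth_sum;
    register<bit<32>>(1) pkt_count;

    apply {
        if (hdr.probe.isValid()) {
            // Probe packet: patch the accumulated values into the header,
            // then reset both registers for the next 0.05 s interval.
            qdepth_sum.read(hdr.probe.sum, 0);
            pkt_count.read(hdr.probe.pkt_count, 0);
            qdepth_sum.write(0, 0);
            pkt_count.write(0, 0);
        } else {
            // Regular packet: accumulate the observed enqueue depth.
            bit<32> s;
            bit<32> c;
            qdepth_sum.read(s, 0);
            pkt_count.read(c, 0);
            qdepth_sum.write(0, s + (bit<32>)standard_metadata.enq_qdepth);
            pkt_count.write(0, c + 1);
        }
    }
}

control MyComputeChecksum(inout headers hdr, inout metadata meta) { apply { } }

control MyDeparser(packet_out packet, in headers hdr) {
    apply {
        packet.emit(hdr.ethernet);
        packet.emit(hdr.probe);
    }
}

V1Switch(MyParser(), MyVerifyChecksum(), MyIngress(),
         MyEgress(), MyComputeChecksum(), MyDeparser()) main;
```

Accumulating enq_qdepth per packet and draining it on each probe is what lets a 0.05 s probing interval recover the average depth as sum / packet_count on the receiver.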

Does the bmv2 switch support line rate?

In our experiments on a P4-programmable hardware switch, we observed an average per-packet latency of about 360 ns (back to back, that is roughly 2.8 million packets per second).

We repeated the same experiment on our bmv2 switch.

As shown below, the maximum per-packet latency we observe is 1600 ns, which is quite high, but on average we observe 500-600 ns of latency, including two congestion periods.

(Figure: per-packet latency on the bmv2 switch, including two congestion periods.)

Min and max enq_qdepth graph

To analyze the discrepancies in our graph, we plotted the maximum and the minimum of enq_qdepth.

(Figure: minimum and maximum enq_qdepth over time for the two congestion periods.)

The main question raised by this graph: why don't we observe the 64-packet maximum during the second congestion? (bmv2's simple_switch uses a default queue depth of 64 packets, so 64 is the expected ceiling under congestion.)
