Assuring Absolute QoS Guarantees for Heterogeneous Services in RINA Networks with ΔQ
- primary contact Author1, institution,
- Author2, institution
- Author3, institution
- link to the paper
- link to the RINASim snapshot used in article
In this tutorial, we show how to configure and use basic ΔQ scheduling policies to provide differentiated treatment to services according to their QoS. In addition, a first approach to the interaction between ΔQ policies and congestion control is shown, allowing for a reasonable overbooking of resources while maintaining QoS requirements (with the dynamic data-rate reduction of flows based on their requirements).
The scenario used in this tutorial can be found under the folder “/examples/Tutorials/DeltaQ_Scheduling” and is composed of the following files:
- net.ned: Network description.
- omnet.ini: Network and overall configuration.
- QTA.xml: Configuration of the QTAMux used for the ΔQ policies.
- shimqoscube.xml: QoSCube definitions for the shim-DIFs.
- {cong/free}_qoscube.xml: QoSCube definitions for the upper DIFs in the congestion-controlled and free scenarios.
- connection{shim/set3/set9}.xml: Definition of preconfigured flows.
- qosreq.xml: QoS requirements of the preconfigured flows.
- data{0/1x3/1x9/10x3}.xml: Data flow definitions for the different configurations.
- directory.xml: Configuration of IPCP locations.
The network described in “net.ned” is a 6-node network containing the subset of datacenter nodes shown in figure X. In this network, the main flows to consider are those departing from node A towards nodes B and C, these being full ToR-to-ToR flows. In addition, to emulate the bandwidth usage of other flows that could collide with them, multiple flows between nodes on that path are allocated.
While in all scenarios the ΔQ policies inform of congestion in the form of ECN marking, the QoSs in the Free* scenarios are configured to ignore it, resulting in high losses for low-cherished flows under periods of high load. In the Cong* scenarios, instead, the QoSs are configured to reduce the data-rate of the aggregated flows according to the arrival of ECN-marked PDUs, resulting in low losses even in periods of high load of high-priority flows.

In terms of flows and QoS, in this scenario we considered a 3x3 Cherish/Urgency matrix. For each position, we define the QoS identifier as A*, B* and C* from most urgent to least, and *1, *2 and *3 from most cherished to least. Given the urgency of flows, we considered 3 different types of applications:
- QoSs A*: Real-time voice traffic. ON/OFF traffic with small PDUs and without retransmission.
- QoSs B*: Video on demand and web browsing. ON/OFF traffic with MTU-sized PDUs and retransmission of losses.
- QoSs C*: File transfer. Constant traffic with MTU-sized PDUs and retransmission of losses.
With those QoSs, we considered 3 configurations for each scenario type, using flows of either 1 or 10 Mbps and either the full Cherish/Urgency matrix or only the triad A2, B1 and C3. In total, for this tutorial we consider 6 different configurations of the scenario (a sketch of how they could map to omnet.ini sections is given after the list):
- Without congestion control:
  - Free1Mbps3QoS
  - Free1Mbps9QoS
  - Free10Mbps3QoS
- With congestion control:
  - Cong1Mbps3QoS
  - Cong1Mbps9QoS
  - Cong10Mbps3QoS
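As a rough orientation, these six configurations would typically map to `[Config]` sections in omnet.ini that select the matching QoSCube and connection-set XML files. The sketch below only illustrates that idea: the parameter key names and the network name are assumptions, not the literal contents of the tutorial’s omnet.ini.

```ini
# Illustrative sketch only: key names and the network name are assumed, not copied from omnet.ini.
[General]
network = net                    # the network described in net.ned (actual module name may differ)

[Config Free1Mbps3QoS]
# QoSCubes that ignore ECN marks, 1 Mbps flows, triad A2/B1/C3
**.qosCubesData    = xmldoc("free_qoscube.xml")      # assumed parameter name
**.connectionsData = xmldoc("connectionset3.xml")    # assumed parameter name

[Config Cong10Mbps3QoS]
# QoSCubes that reduce the data-rate on ECN-marked PDUs, 10 Mbps flows, triad A2/B1/C3
**.qosCubesData    = xmldoc("cong_qoscube.xml")      # assumed parameter name
**.connectionsData = xmldoc("connectionset3.xml")    # assumed parameter name
```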
###Net.ned
The configured network is rather simple, with 6 nodes partially connected using DatarateChannels of 200Mbps and 1Gbps with low delay. Even so, a small difference with respect to other examples can be seen in the use of “Inf_Router” nodes with the “VDT” injector and the addition of the “VDT_Listener”. These two modules are the basis for the injection of data into the upper DIF and for the harvesting of statistics of that traffic, respectively. More information on them follows.
###Omnet.ini
First of all, for this scenario, PDU tracing and all kinds of statistics recording other than those provided by our modules have been disabled, and it is recommended to leave them like that given the duration of the simulations and the large number of PDU messages generated.
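For reference, turning off the standard OMNeT++ result recording is typically done with the per-object options below (PDU tracing is disabled through its own RINASim parameter, not shown here):

```ini
# Disable built-in OMNeT++ statistics recording; the ΔQ modules record their own results.
**.scalar-recording = false
**.vector-recording = false
```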
Now let’s explain the basic configuration of the network. For node addressing, we used a simple A to F naming of nodes. From those, each shim-DIF (those at ipcProcess0[*]) receives its name as the concatenation of the addresses of both endpoints. For each of those shims, a basic (uncontrolled) flow is allocated at t=0.
In the middle layer, we find the DIF “Fabric”, containing all nodes. That is where the ΔQ policies will be located. There, we will use the QoSs from either "free_qoscube.xml" or "cong_qoscube.xml", depending on the scenario, and the aggregated flows configured in either "connectionset3.xml" or "connectionset9.xml" will be pre-allocated after t=100 and before t=200.
In the top layer, we find the DIF “Net”, where data will be injected and then forwarded using the flows at Fabric. In this layer, we configure the same QoSs as for the lower one, but no flow is allocated, as we are going to inject its PDUs directly into the RMT. Instead, data will be injected after t=200.
####Routing and forwarding
In order to correctly relay PDUs between the different DIF levels, routing and forwarding policies are configured as follows (an illustrative sketch is given after the list):
- At the shims, nothing is needed: “on wire”.
- At the Fabric DIF we use the simple forwarding policy generator “SimpleGenerator”, with the link-state routing algorithm “SimpleLS”, and the “MiniTable” forwarding policy (exact match + QoS).
- At the Net DIF we don’t use routing (the “DummyRouting” policy set) and use the “OFStaticGenerator” forwarding policy generator with the “SimpleTable” forwarding policy to simply have direct access to the N-1 flows established through the Fabric DIF.
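As an illustration, selecting these policies from omnet.ini could look roughly like the lines below; the module paths and key names are assumptions, while the policy names are the ones listed above.

```ini
# Sketch only: module paths and key names are assumed; policy names come from the list above.
# Fabric DIF: link-state routing, MiniTable forwarding (exact match + QoS).
**.fabricIpc.routingPolicyName    = "SimpleLS"
**.fabricIpc.fwdGeneratorName     = "SimpleGenerator"
**.fabricIpc.forwardingPolicyName = "MiniTable"

# Net DIF: no routing, static forwarding straight onto the N-1 flows.
**.netIpc.routingPolicyName    = "DummyRouting"
**.netIpc.fwdGeneratorName     = "OFStaticGenerator"
**.netIpc.forwardingPolicyName = "SimpleTable"
```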
####Queues and scheduling
The next step is the configuration of the scheduling policies and queue-related parameters. First, the default queue thresholds are configured as an arbitrarily large number: the ΔQ policies do not use the MaxQueue hook, so we avoid any possibility of it being executed. Now, we are interested in the configuration of both the shim-DIFs and the Fabric DIF (the Net DIF uses the default best-effort configuration, as there we have only one queue per RMT port).
- Shim-DIFs configuration
Shim-DIFs in this scenario mimic the operation of real shim-DIFs, with minimal policies and small buffers that block themselves when full. We configure one queue per requested flow (“QueuePerFlow” and “IDPerFlow”) with the pair of monitor and scheduling policies “IterativeStopMonitor” and “IterativeScheduling”.
In this case, as we only have one working flow per shim-DIF, what this policy does is signal “full” to upper flows after having more than “stopAt” PDUs in queue, and “not full” when going below a second threshold, “restartAt”, here configured at the minimum values of 1 and 0 respectively (a combined ini sketch for the shim and Fabric policies is given after the Fabric configuration below).
- Fabric configuration
The Fabric DIF is where the ΔQ policies are configured. As ΔQ policies are configured per queue, we have multiple options available. Here we considered the configuration of ΔQ per QoS, therefore allocating one queue for each QoS in use (“QueuePerNQoS” and “IDPerNQoS”).
The scheduling policy for ΔQ is the simple “QTASch” policy, which basically works by querying the monitor policy for the next queue to serve. Then, in the “QTAMonitor” monitor policy, we have all the logic of ΔQ within a configurable module. For this module we need to configure “shaperData” and “muxData”, with the configuration of the queue shapers and the Cherish/Urgency multiplexer respectively. We will examine their configuration later.
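The following is a minimal sketch of how these queue and scheduling policies could be selected from omnet.ini; the submodule paths and key names are assumptions, while the policy names and the stopAt/restartAt values are the ones described above.

```ini
# Sketch only: module paths and key names are assumed; policy names and values come from the text.
# Shim-DIFs: one queue per flow, stop/restart signalling towards the upper flows.
**.ipcProcess0[*].relayAndMux.queueAllocPolicyName = "QueuePerFlow"
**.ipcProcess0[*].relayAndMux.queueIdGenName       = "IDPerFlow"
**.ipcProcess0[*].relayAndMux.qMonitorPolicyName   = "IterativeStopMonitor"
**.ipcProcess0[*].relayAndMux.schedPolicyName      = "IterativeScheduling"
**.ipcProcess0[*].relayAndMux.queueMonitor.stopAt    = 1   # signal "full" above 1 queued PDU
**.ipcProcess0[*].relayAndMux.queueMonitor.restartAt = 0   # signal "not full" at 0 queued PDUs

# Fabric DIF: one queue per QoS, QTA scheduling driven by the QTAMonitor and QTA.xml.
**.ipcProcess1.relayAndMux.queueAllocPolicyName = "QueuePerNQoS"
**.ipcProcess1.relayAndMux.queueIdGenName       = "IDPerNQoS"
**.ipcProcess1.relayAndMux.schedPolicyName      = "QTASch"
**.ipcProcess1.relayAndMux.qMonitorPolicyName   = "QTAMonitor"
**.ipcProcess1.relayAndMux.queueMonitor.shaperData = xmldoc("QTA.xml")  # per-queue shapers (element selection elided)
**.ipcProcess1.relayAndMux.queueMonitor.muxData    = xmldoc("QTA.xml")  # Cherish/Urgency mux (element selection elided)
```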
####Data injection
Finally, with the network configured, we are going to configure the data injection and the collection of statistics. First of all, as stated when configuring the “net.ned” file, in this scenario data is injected directly into the RMTs of the Net DIF IPCPs. This means that there are no upper applications nor EFCP instances to retrieve the PDUs. Instead, we use the “Inj” module to generate large amounts of data and the “Inj_Comparator” to retrieve it at the end-point.
The first step will be to configure the duration of the simulation. The starting point of data injection is configured in “Inj.ini”; in the same way, the “stop” moment is configured in “Inj.fin”. It has to be noted that “Inj.fin” only sets the moment at which flows will stop requesting more data, so new data can still be created to complete old requests after that moment. In order to really stop the simulation at a given moment, we should also configure “sim-time-limit”.
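A minimal sketch of this timing, assuming the injector is reachable as “inj” and that these parameters take plain values in seconds; only the injection start at t=200 comes from the scenario description, the other two values are placeholders:

```ini
# Sketch only: module path and the stop/limit values are assumptions; injection starts at t=200 per the text.
**.inj.ini = 200       # start of data injection ("Inj.ini"), in seconds
**.inj.fin = 400       # flows stop requesting new data ("Inj.fin"); placeholder value
sim-time-limit = 450s  # hard stop of the simulation; placeholder value
```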
Next comes the configuration of the injected flows. While most of it is done via the XML files configured as “Inj.data”, explained later, there are also some parameters configurable in the ini file. First, two parameters shared between all flows: the length of the headers that lower DIFs will add, “headers_size” (in this case 22 bytes), and the value of the ACK timer, “ackT” (here left at its default value of 0.1s).
Then the different flow generators can be configured. In this case, we set the average duration of the ON and OFF periods of voice flows, “V_ON_Duration_AVG” and “V_OFF_Duration_AVG”, at 1/3s and 2/3s respectively, with PDUs from 100 to 400 bytes. We also configure the average data-rate in Kbps for each configuration (“V_AVG_FlowRate” for voice (A*), “D_AVG_FlowRate” for video (B*) and “T_AVG_FlowRate” for data (C*)) and the data-rate of video flows during requests (“D_ON_FlowRate”).
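Put together, these injector parameters could be set roughly as below; the module path and the exact value syntax (plain numbers vs. OMNeT++ unit suffixes) are assumptions, and the flow-rate values are placeholders for one of the 1 Mbps configurations:

```ini
# Sketch only: module path and value syntax are assumed; durations and sizes follow the text above.
**.inj.headers_size = 22           # bytes added by lower DIFs
**.inj.ackT = 0.1                  # ACK timer, seconds (default)
**.inj.V_ON_Duration_AVG  = 0.333  # average ON period of voice flows, ~1/3 s
**.inj.V_OFF_Duration_AVG = 0.667  # average OFF period of voice flows, ~2/3 s
**.inj.V_AVG_FlowRate = 1000       # Kbps, voice (A*) -- placeholder for the 1 Mbps configurations
**.inj.D_AVG_FlowRate = 1000       # Kbps, video/web (B*) -- placeholder
**.inj.T_AVG_FlowRate = 1000       # Kbps, file transfer (C*) -- placeholder
**.inj.D_ON_FlowRate  = 10000      # Kbps, video data-rate during requests -- placeholder
```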
Finally, we capture statistics. By default, the VDT_Listener saves its results into “stats/{CONFIG_NAME}{RUN}.results”. In addition, we can print them to the standard output if the “printAtEnd” parameter is set. Also, if the parameter “recordTrace” is set, it will generate a trace following all PDUs generated and received by the different flows, producing the binary file “stats/{CONFIG_NAME}{RUN}.trace” and the index of flows “stats/{CONFIG_NAME}_{RUN}.traceinfo”.
* As traces can become quite big quite fast, it is recommended to keep them turned off. Their usage is not considered in this tutorial and is not explained here (as a note, they are sequences of “trace_t” structs, as described in “src/Addons/DataInjectors/FlowsSimulation/Implementations/VDT/VDT_Listener.h”).
In addition to the end-to-end statistics collected at the VDT_Listener, we can also configure the QTAMonitor to extract some information on incoming and outgoing data. For this, we first have to set the parameter “recordStats” to true and optionally give a nodeName to the IPCP (otherwise the module path is used). Then, there are 4 types of data that can be recorded per port, depending on which parameters are turned on:
- pdu_IO: Number of PDUs arriving at an out port, dropped and served.
- data_IO: Amount of data arriving at an out port, dropped and served.
- pdu_IOi: Number of PDUs arriving at an in port.
- data_IOi: Amount of data arriving at an in port.
After deciding which data to record, we also have to configure the interval between recorded frames (“record_interval”, by default 0.1s) and the start and end of the recording (“first_interval” and “last_interval”). Finally, if we set the parameter “saveStats” to true, “stats” and “in.stats” files will be generated for the different ports.
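Taken together, the statistics-related switches described above could be set roughly as follows; the module paths (“vdtListener”, the QTAMonitor inside the Fabric IPCPs) are assumptions, while the parameter names are the ones quoted in the text:

```ini
# Sketch only: module paths are assumed; parameter names come from the text above.
**.vdtListener.printAtEnd  = true    # also print the end-to-end results to stdout
**.vdtListener.recordTrace = false   # traces grow quickly; keep off unless needed

**.qtaMonitor.recordStats = true     # enable per-port recording in the QTAMonitor
**.qtaMonitor.nodeName    = "A"      # optional label; otherwise the module path is used
**.qtaMonitor.pdu_IO      = true     # PDUs arriving at / dropped / served on out ports
**.qtaMonitor.data_IO     = true     # bytes arriving at / dropped / served on out ports
**.qtaMonitor.record_interval = 0.1  # seconds between recorded frames (default)
**.qtaMonitor.first_interval  = 200  # start of recording (placeholder: with data injection)
**.qtaMonitor.last_interval   = 400  # end of recording (placeholder)
**.qtaMonitor.saveStats = true       # write the "stats" and "in.stats" files per port
```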
###XML files
####QTA.xml
####data*.xml
- explanation of omnetpp.ini content
- explanation of config.xml content
- how to run the scenario in order to reproduce same results as in the paper
- result analysis including how to interpret them
- fingerprint check