In ICN terminology, a face is a generalization of the network interface that can represent a physical network interface, an overlay inter-node channel, or an intra-node inter-process communication channel to an application. NDN-DPDK supports the latter two categories, where each face represents an adjacency that communicates with one peer entity. This page explains how to create faces in NDN-DPDK.
Face creation parameters are described with a locator, a JSON document that conforms to the JSON schema locator.schema.json (installed in /usr/local/share/ndn-dpdk and available online).
A locator contains the transport protocol, local and remote endpoint addresses, and other related parameters.
After activating the NDN-DPDK service, each role offers a different API that accepts a locator for face creation:
- forwarder: ndndpdk-ctrl create-face command or createFace mutation.
- traffic generator: ndndpdk-ctrl start-trafficgen command or startTrafficGen mutation.
- file server: ndndpdk-ctrl activate-fileserver command or activate mutation.
In any role, you can retrieve the list of existing faces with the ndndpdk-ctrl list-face command or programmatically via the GraphQL faces query. The response contains the locator of each existing face.
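For example, the following prints the locator of each face, assuming ndndpdk-ctrl list-face emits one JSON object per face (the exact output format may differ; jq is a separate tool, not part of NDN-DPDK):

```bash
# list existing faces and extract their locators
ndndpdk-ctrl list-face | jq -c '.locator'
```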
An Ethernet-based face communicates with a remote node on an Ethernet adapter using a DPDK networking driver. It supports the Ethernet (with optional VLAN header), UDP, VXLAN, and GTP-U protocols. Its implementation is in package ethface.
There are two steps in creating an Ethernet-based face:
- Create an Ethernet port on the desired Ethernet adapter.
- Create an Ethernet-based face on the Ethernet port.
Each Ethernet adapter can have multiple Ethernet-based faces. The Ethernet port organizes those faces on the same adapter. During port creation, sufficient hardware resources are reserved to accommodate anticipated faces on the adapter, and the adapter becomes ready for face creation.
There are three kinds of drivers for Ethernet port creation. The following table gives a basic comparison:
| driver kind | speed | supported hardware | Ethernet | VLAN | UDP | VXLAN | GTP-U | main limitation |
|---|---|---|---|---|---|---|---|---|
| PCI | fastest | some | yes | yes | yes | yes | yes | exclusive NIC control |
| XDP | fast | all | yes | yes | yes | yes | yes | MTU ≤ 3300 |
| AF_PACKET | slow | all | yes | no | no | no | yes | slow |
The most suitable port creation command is hardware-dependent, and some trial and error may be necessary. Due to limitations in DPDK drivers, a failed port creation command may leave DPDK in an inconsistent state. Therefore, before trying a different port creation command, it is recommended to restart the NDN-DPDK service and redo the activation step.
DPDK offers user space drivers for a range of network interface cards. If you have a supported NIC, you should create the Ethernet port with the PCI driver, which enables the best performance.
Generally, to create an Ethernet port with the PCI driver, you should:
- Determine the PCI address of the Ethernet adapter.
- Bind the PCI device to the proper kernel driver as expected by the DPDK driver.
- Run the ndndpdk-ctrl create-eth-port command with the --pci flag.
Some Ethernet adapters support the rte_flow API, which enables a hardware-accelerated receive path called RxFlow. See the "Receive Path" section of package ethport for a detailed explanation. You may enable this feature with the --rx-flow flag, which substantially improves performance.
The specified number of queues is the maximum number of faces you can create on the Ethernet port.
Enabling RxFlow on a NIC that does not support it causes either port creation failure or face creation failure.
Example commands:
```bash
# list PCI addresses
dpdk-devbind.py --status-dev net

# change kernel driver binding (only needed for some NICs, see DPDK docs on what driver to use)
sudo dpdk-devbind.py -b uio_pci_generic 04:00.0

# create an Ethernet port with PCI driver, enable RxFlow with 16 queues
ndndpdk-ctrl create-eth-port --pci 04:00.0 --mtu 1500 --rx-flow 16

# or, create an Ethernet port with PCI driver, disable RxFlow
ndndpdk-ctrl create-eth-port --pci 04:00.0 --mtu 1500
```
See the Ethernet adapters known to work page for instructions and examples for specific NIC models.
Creating an Ethernet port with PCI driver causes DPDK to assume exclusive control over the PCI device. After that, it is not possible to run other traffic (such as IP) over the same Ethernet adapter. If you are connected over SSH, please ensure the SSH session does not rely on this network interface.
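As a quick sanity check before handing the adapter to DPDK, you can look up which interface your SSH session uses with standard kernel tools; this sketch assumes the SSH_CLIENT environment variable is set, as it usually is in an SSH session:

```bash
# the "dev" field in the output is the interface carrying the SSH session;
# ${SSH_CLIENT%% *} strips everything after the first space, leaving the client IP
ip route get ${SSH_CLIENT%% *}
```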
If DPDK does not have a PCI driver for your NIC, or you encounter errors while using the PCI driver, you may create the Ethernet port with the XDP driver. This driver communicates with the Ethernet adapter via an AF_XDP socket, which is optimized for high-performance packet processing.
To create an Ethernet port with the XDP driver, you should:
- If NDN-DPDK is running in a container, move the network interface into the container's network namespace.
- Run the ndndpdk-ctrl create-eth-port command with the --netif and --xdp flags.
Example commands:
```bash
NETIF=eth1
# if NDN-DPDK is running in a Docker container, move the network interface into the container's network namespace
CTPID=$(docker inspect -f '{{.State.Pid}}' ndndpdk-svc)
sudo ip link set $NETIF netns $CTPID
# create an Ethernet port with XDP driver
ndndpdk-ctrl create-eth-port --netif $NETIF --xdp --mtu 1500
```
The XDP driver is available only if libbpf and libxdp (part of xdp-tools) are installed before building DPDK. If you installed dependencies with the ndndpdk-depends.sh script, these libraries are installed automatically.
Due to a kernel limitation, the MTU is limited to about 3300 octets. Setting an unacceptable MTU causes port creation failure.
During XDP driver activation, the Ethernet adapter is configured to have only one RX channel, RX-VLAN offload is disabled, and then an XDP program is loaded. The NDN-DPDK service presents face locators to the XDP program via a BPF hash map. The XDP program performs header matching on each incoming packet and redirects matching packets to the NDN-DPDK service process; all other traffic continues to be processed by the kernel.
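You can verify this configuration with standard kernel tools (not part of NDN-DPDK), run in the network namespace that owns the interface; NETIF is assumed to hold the interface name as in the earlier example:

```bash
# the link output should show an attached XDP program
ip -details link show $NETIF
# channel counts should show a single RX (or combined) channel
ethtool -l $NETIF
```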
If neither the PCI driver nor the XDP driver can be used, as a last resort you may use the AF_PACKET driver. This driver communicates with the Ethernet adapter via an AF_PACKET socket, which is substantially slower than the other two options. To create an Ethernet port with the AF_PACKET driver, you should:
- If NDN-DPDK is running in a container, move the network interface into the container's network namespace.
- Run the ndndpdk-ctrl create-eth-port command with the --netif flag.
Example commands:
```bash
NETIF=eth1
# if NDN-DPDK is running in a Docker container, move the network interface into the container's network namespace
CTPID=$(docker inspect -f '{{.State.Pid}}' ndndpdk-svc)
sudo ip link set $NETIF netns $CTPID
# create an Ethernet port with AF_PACKET driver
ndndpdk-ctrl create-eth-port --netif $NETIF --mtu 9000
```
An Ethernet port using the AF_PACKET driver only works reliably for NDN over Ethernet (without a VLAN header). While it is possible to create VLAN, UDP, or VXLAN faces on such a port, they may trigger undesirable reactions from the kernel network stack (e.g. ICMP port unreachable packets or UFW drop logs), because the kernel is unaware of NDN-DPDK's UDP endpoint.
By default, the DPDK AF_PACKET driver sets the PACKET_QDISC_BYPASS socket option, so that outgoing packets do not pass through the kernel's qdisc (traffic control) layer. If you need to use kernel traffic shaping features (typically with the tc command), you can pass the qdisc_bypass=0 argument to the DPDK driver, which disables the PACKET_QDISC_BYPASS socket option.
Example GraphQL mutation:
```graphql
mutation {
  createEthPort(
    driver: AF_PACKET
    netif: "eth1"
    devargs: { qdisc_bypass: 0 }
  ) {
    id
  }
}
```
After creating an Ethernet port, you can create Ethernet-based faces on the adapter.
Locator of an Ethernet face has the following fields:
- scheme is set to "ether".
- local and remote are MAC-48 addresses written in six groups of two lower-case hexadecimal digits separated by colons.
- local must be a unicast address.
- remote may be unicast or multicast. Every face is assumed to be point-to-point, even when using a multicast remote address.
- vlan (optional) is a VLAN identifier in the range 0x001-0xFFE. If omitted, the packets do not have a VLAN header.
- port (optional) is the EthDev ID as returned by the ndndpdk-ctrl create-eth-port command. If omitted, local is used to search for a suitable port; if specified, port takes priority over local.
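As an illustration, a minimal Ethernet face locator can be built with jq and piped into the face creation command. This sketch assumes ndndpdk-ctrl create-face accepts the locator JSON on standard input; the local address is a placeholder that should normally be the MAC address of the Ethernet adapter, and the remote address shown is the NDN Ethernet multicast group:

```bash
# create an Ethernet face; "port" is omitted, so the port is located via the "local" MAC address
jq -n '{
  scheme: "ether",
  local: "02:00:00:00:00:01",
  remote: "01:00:5e:00:17:aa"
}' | ndndpdk-ctrl create-face
```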
Locator of a UDP tunnel face has the following fields:
- scheme is set to "udpe".
- All fields in "ether" locator are inherited.
- Both local and remote MAC addresses must be unicast.
- localIP and remoteIP are local and remote IP addresses. They may be either IPv4 or IPv6, and must be unicast.
- localUDP and remoteUDP are local and remote UDP port numbers.
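Under the same assumptions, a UDP tunnel locator adds the IP addresses and UDP ports; all values below are placeholders (6363 is the UDP port registered for NDN, but any ports agreed with the peer can be used):

```bash
# create a UDP tunnel face on the Ethernet port
jq -n '{
  scheme: "udpe",
  local: "02:00:00:00:00:01",
  remote: "02:00:00:00:00:02",
  localIP: "192.0.2.1",
  remoteIP: "192.0.2.2",
  localUDP: 6363,
  remoteUDP: 6363
}' | ndndpdk-ctrl create-face
```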
Locator of a VXLAN tunnel face has the following fields:
- scheme is set to "vxlan".
- All fields in "udpe" locator, except localUDP and remoteUDP, are inherited.
- UDP destination port number is fixed to 4789; source port is random.
- vxlan is the VXLAN Network Identifier.
- innerLocal and innerRemote are unicast MAC addresses for inner Ethernet header.
- nRxQueues (optional) is the number of RX queues. When the Ethernet port uses the PCI driver and has RxFlow enabled, setting this to a value greater than 1 can alleviate the bottleneck in the forwarder's input thread. However, it consumes multiple RX queues out of the total specified in the --rx-flow flag during port creation.
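A VXLAN locator follows the same pattern, with placeholder VNI, MAC, and IP addresses; localUDP and remoteUDP are omitted because the destination port is fixed:

```bash
# create a VXLAN tunnel face
jq -n '{
  scheme: "vxlan",
  local: "02:00:00:00:00:01",
  remote: "02:00:00:00:00:02",
  localIP: "192.0.2.1",
  remoteIP: "192.0.2.2",
  vxlan: 0,
  innerLocal: "02:00:00:01:00:01",
  innerRemote: "02:00:00:01:00:02"
}' | ndndpdk-ctrl create-face
```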
Locator of a GTP-U tunnel face has the following fields:
- scheme is set to "gtp".
- All fields in "udpe" locator, except localUDP and remoteUDP, are inherited.
- UDP source and destination port numbers are fixed to 2152.
- ulTEID and ulQFI are the Tunnel Endpoint Identifier and QoS Flow Identifier in the uplink direction, received by NDN-DPDK.
- dlTEID and dlQFI are the Tunnel Endpoint Identifier and QoS Flow Identifier in the downlink direction, transmitted by NDN-DPDK.
- innerLocalIP and innerRemoteIP are unicast IP addresses for inner IPv4 header.
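Likewise, a sketch of a GTP-U locator with placeholder identifiers and addresses; consult locator.schema.json for the exact value types and ranges:

```bash
# create a GTP-U tunnel face
jq -n '{
  scheme: "gtp",
  local: "02:00:00:00:00:01",
  remote: "02:00:00:00:00:02",
  localIP: "192.0.2.1",
  remoteIP: "192.0.2.2",
  ulTEID: 257,
  ulQFI: 1,
  dlTEID: 258,
  dlQFI: 1,
  innerLocalIP: "198.51.100.1",
  innerRemoteIP: "198.51.100.2"
}' | ndndpdk-ctrl create-face
```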
See the "UDP, VXLAN, GTP-U tunnel face" section of package ethface for caveats, limitations, and which faces can coexist on the same port.
A memif face communicates with a local application via shared memory packet interface (memif). Its implementation is in package memifface. Although memif is implemented as an Ethernet device, you do not need to create an Ethernet port for the memif device.
Locator of a memif face has the following fields:
- scheme is set to "memif".
- role is either "server" or "client". It's recommended to use "server" role on NDN-DPDK side and "client" role on application side.
- socketName is the control socket filename. It must be an absolute path not exceeding 108 characters.
- id is the interface identifier in the range 0x00000000-0xFFFFFFFF.
- socketOwner may be set to a tuple [uid, gid] to change the owner uid:gid of the control socket. This allows applications to connect to NDN-DPDK without running as root.
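A sketch of memif face creation, again assuming the locator JSON is piped into ndndpdk-ctrl create-face; the socket name and id are placeholders that the application must use on its side as well:

```bash
# create a memif face in server role; the application connects as client with the same socketName and id
jq -n '{
  scheme: "memif",
  role: "server",
  socketName: "/run/ndn/example.sock",
  id: 0
}' | ndndpdk-ctrl create-face
```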
A socket face communicates with either a local application or a remote entity via TCP/IP sockets. It supports UDP, TCP, and Unix stream. Its implementation is in package socketface.
Locator of a socket face has the following fields:
- scheme is one of "udp", "tcp", "unix".
- remote is an address string acceptable to Go net.Dial function.
- local (optional) has the same format as remote, and is accepted only with "udp" scheme.
Currently, NDN-DPDK only supports outgoing connections. It cannot open a listening socket and accept incoming connections.
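For instance, a UDP socket face toward a remote host could be created as follows, under the same jq-on-stdin assumption and with a placeholder address:

```bash
# create a UDP socket face; traffic passes through the kernel network stack
jq -n '{
  scheme: "udp",
  remote: "192.0.2.2:6363"
}' | ndndpdk-ctrl create-face
```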
You may have noticed that UDP is supported both as an Ethernet-based face and as a socket face. The differences are:
- Ethernet-based UDP face ("udpe" scheme) runs on a DPDK Ethernet device without going through the kernel network stack. It is fast but does not follow normal IP routing and cannot communicate with local applications.
- Socket UDP face ("udp" scheme) goes through the kernel network stack. It behaves like a normal IP application but is much slower.
If the command or GraphQL mutation for creating an Ethernet port or a face returns an error, detailed error messages are often available in the NDN-DPDK service logs. See the installation guide "usage" section and the Docker container "control the service container" section for how to access NDN-DPDK service logs.
Common mistakes include:
- Trying to create a face before activating NDN-DPDK service.
- NDN-DPDK is running in Docker but the Docker container was started with insufficient privileges and bind mounts.
- Requesting a higher MTU than what's allowed by the .mempool.DIRECT.dataroom activation parameter.
- Requesting a driver kind or parameter that the Ethernet adapter does not support.
- Requesting more RX queues on an Ethernet port than what's supported by the driver.
- Creating too many faces or requesting too many RX queues on the same Ethernet port, exceeding the --rx-flow setting.
DPDK supports up to 32 Ethernet devices by default. Both Ethernet ports and memif faces count toward this limit.
If necessary, you can increase this limit at DPDK compile time. Example command:
```bash
# build DPDK manually
meson \
  -Dmax_ethports=64 \
  [other arguments]

# install DPDK with ndndpdk-depends.sh
docs/ndndpdk-depends.sh \
  --dpdk-opts='{"max_ethports":64}' \
  [other arguments]

# build NDN-DPDK Docker image
docker build \
  --build-arg DEPENDS_ARGS='--dpdk-opts={"max_ethports":64}' \
  [other arguments]
```
The problem scenario is similar to the ndnping sample of the NDN-DPDK forwarder:
- You have two forwarders A and B connected over a network link.
- The consumer program is connected to forwarder A over memif.
- The producer program is connected to forwarder B over memif.
- Faces were created without immediate errors.
- You expect the consumer program to receive Data packets, but it does not.
```text
|--------|  memif  |-----------|  Ethernet  |-----------|  memif  |--------|
|consumer|---------|forwarder A|------------|forwarder B|---------|producer|
|--------|         |-----------|            |-----------|         |--------|
  faces (1)      (2)          (3)          (4)         (5)      (6)
```
The general steps to troubleshoot this issue are:
- Confirm that all the faces exist.
  Run the ndndpdk-ctrl list-face command on both forwarders. You should see faces (2) (3) (4) (5).
  Ethernet faces (3) and (4) are meant to be created manually. Some applications, such as ndndpdk-godemo, can automatically create memif faces (2) and (5); other applications, such as the NDN-DPDK fileserver, require you to manually create the memif faces.
- Confirm that the FIB entries exist and point to the correct nexthops.
  Run the ndndpdk-ctrl list-fib command on both forwarders. You should see a FIB entry on each forwarder for the producer's name prefix, pointing to faces (3) and (5) respectively.
- Locate where packet loss starts by examining face counters.
  Run the ndndpdk-ctrl get-face --cnt --id FACE-ID command to retrieve the counters of a face. Wait a few seconds and run the command again to observe which counters are increasing.
  Follow the traffic flow in the order below, and locate the first counter that is not increasing:
  - (1) TX-Interest, (2) RX-Interest, (3) TX-Interest, (4) RX-Interest, (5) TX-Interest, (6) RX-Interest.
  - (6) TX-Data, (5) RX-Data, (4) TX-Data, (3) RX-Data, (2) TX-Data, (1) RX-Data.
If the loss starts at an RX counter, some possible causes are:
- Mismatched face locator.
- Incorrect MTU configuration, such as MTU larger than what the link supports.
- Insufficient DIRECT mempool capacity: if the DIRECT mempool is full, DPDK silently drops all incoming packets. See performance tuning "memory usage insights" for how to see mempool utilization.
- Packet corruption along the network link.
If this occurs on face (4) RX-Interest, you can stop NDN-DPDK, capture traffic with tcpdump, and analyze the packet trace.
If the loss starts at a TX counter, some possible causes are:
- Missing FIB entry.
- For face (1) and (6): The consumer/producer is not sending Interests/Data as you expected.
- For face (4): the Data sent by the producer do not satisfy the Interests according to the NDN protocol.
In case an application does not publish face counters, you can skip to the next counter, but then you need to consider the possible causes for the previous step. For example, if you cannot retrieve face (1) TX-Interest counter from the consumer, and you see face (2) RX-Interest not increasing, you should consider "consumer not sending Interests" as a possible cause.
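To observe the counters repeatedly without retyping the command, the standard watch utility can help; FACE-ID is a placeholder for the actual face ID:

```bash
# refresh the face counters every 2 seconds
watch -n 2 "ndndpdk-ctrl get-face --cnt --id FACE-ID"
```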
Problem scenario:
- You want to create two or more Ethernet-based faces on the same Ethernet port.
- When you have only one face on the Ethernet port, it works perfectly.
- After you create the second (or third, etc) face on the same Ethernet port, face creation fails with error, or some faces cannot send or receive traffic.
This is probably a DPDK driver limitation. It most frequently occurs when using the PCI driver and enabling RxFlow.
You can try these options, one at a time (restart NDN-DPDK before trying another):
- Increase number of queues for RxFlow.
- Decrease number of RX queues in VXLAN locator.
- Disable RxFlow.
- Switch to XDP or AF_PACKET driver.