The 16.07 release of DPDK introduces a new packet capture framework, which will allow users to capture traffic from existing devices/ports/queues and dump the packets to a pcap file...
The need for compact 100G monitoring solutions
Over the last few years, the growth in internet bandwidth has forced carriers to invest in equipment that can handle the increasing network traffic. The rapid adoption of 100G network speeds has led to the deployment of new high-volume network equipment. The first phase involves upgrading or replacing switch and router equipment, which naturally creates a need for network monitoring and subsequent investments in 100G network monitoring equipment.
I have discussed 100G technology in more detail in my previous blog post: The Challenges of 100G Network Analysis.
All this new or upgraded equipment introduces a new challenge when it comes to rack space, power consumption and cooling. A common approach for building a 100G network monitoring solution is combining a Commercial Off-The-Shelf (COTS) server with a 100G network adapter. A small 1U COTS server is the best fit for the limited rack space in data centers or telco central offices. The 100G network adapter must comply with the half-length PCIe form factor in order to fit into a compact 1U COTS server.
But there are some challenges in using compact PCIe Gen3 based 100G solutions.
Challenges of compact PCIe Gen3 based 100G monitoring solutions
In order to capture the full 100G traffic and transfer it to the server host memory, 16-lane PCIe Gen3 throughput is required. The FPGA technologies available today do not support native 16-lane PCIe Gen3, but they do support a 2 x 8-lane PCIe Gen3 interface. Supporting native 16-lane PCIe Gen3 would require a PCIe switch component mounted on the network adapter. Furthermore, gigabytes of on-board memory are also needed to ensure packet delivery in case the server system gets overloaded at peak conditions. After installing the major components (FPGA, SDRAM, network ports and power supplies) on a half-length PCIe form factor card, there is usually no space left for a PCIe switch component, which has consequences for the server platform.
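A quick back-of-the-envelope calculation shows why 8 lanes are not enough and 16 lanes are. PCIe Gen3 runs at 8 GT/s per lane with 128b/130b line encoding; the sketch below uses only these raw link rates and ignores TLP/protocol overhead, so real-world DMA throughput will be somewhat lower:

```python
# Rough PCIe Gen3 line-rate check for 100G capture.
# Assumption: raw link rates only; actual DMA efficiency is lower
# due to TLP headers and other protocol overhead.

GT_PER_LANE = 8.0        # PCIe Gen3 signalling rate: 8 GT/s per lane
ENCODING = 128 / 130     # 128b/130b line encoding efficiency

def pcie_gen3_gbps(lanes: int) -> float:
    """Effective line rate in Gbit/s for a given lane count."""
    return lanes * GT_PER_LANE * ENCODING

for lanes in (8, 16):
    rate = pcie_gen3_gbps(lanes)
    verdict = "sufficient" if rate >= 100 else "insufficient"
    print(f"x{lanes}: {rate:.1f} Gbit/s -> {verdict} for 100G capture")
```

A single 8-lane interface tops out around 63 Gbit/s, while 16 lanes provide roughly 126 Gbit/s, which is why the full lane count (whether native or as 2 x 8 lanes) is needed for line-rate 100G capture.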
To enable the throughput of 16-lane PCIe Gen3, the COTS server must support PCIe bifurcation.
What is PCIe bifurcation?
PCI Express bifurcation means splitting the PCI Express bus into smaller buses; in this case, a 16-lane PCIe Gen3 slot is split into two 8-lane PCIe Gen3 interfaces. Servers supporting PCIe bifurcation have a BIOS setting for enabling the feature. When PCIe bifurcation is enabled, the data that would usually be transferred over a single 16-lane PCIe Gen3 interface is instead carried over two parallel 8-lane PCIe Gen3 interfaces. A network adapter supporting PCIe bifurcation uses the same data transfer scheme over its 2 x 8-lane PCIe Gen3 interface.
Server support for bifurcation
One of the major benefits of building network monitoring solutions on COTS servers is the freedom to select the server vendor. Not all server vendors support bifurcation today, but there is increasing pressure from the industry due to the need for compact solutions based on the most commonly available FPGA technology. HP and Supermicro already offer servers that support PCIe bifurcation, and I believe that other major vendors will follow suit soon.
As FPGA technologies evolve, more and more features are getting integrated into the hardened part of the FPGA. Hardened 16-lane PCIe Gen3 will most likely appear on the roadmaps of the leading FPGA vendors and become available in the market in 2017-2018. Consequently, it will be possible to build compact NICs with full 16-lane PCIe Gen3 support, and the bifurcation issue will no longer be relevant.
However, when stepping up to PCIe Gen4, we will probably see the same scenario again, followed by yet another FPGA roadmap to solve the problem.