200G Performance Solution
Built for performance on an epic scale with two
PCI-SIG certified NT100E3-1-PTP accelerators.
For any link speed at any time
PLUG & PLAY
Out of the box solution
Accelerate your application
Multiple accelerators in one server
Synchronize multiple servers
Multiple speeds in one server
More powerful server usage
200G Performance Features
Full line-rate packet capture
Multi-port packet sequence and merge
Intelligent Multi-CPU distribution
- Distribution per port: all frames captured on a physical port are transferred to the same CPU or a range of CPU cores for processing
- Distribution per traffic type: frames of the same protocol type are transferred to the same CPU or a range of CPU cores for processing
- Distribution by flows: frames with the same hash value are sent to the same CPU or a range of CPU cores for processing
- Combinations of the above
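Flow-based distribution can be illustrated with a short sketch. The hash function, field names, and core count below are assumptions for illustration only; the accelerator's actual hash algorithm is not specified here. The key idea is that every frame of a flow, in both directions, maps to the same CPU core.

```python
import zlib

NUM_CORES = 8  # assumed size of the target CPU-core range


def core_for_flow(src_ip, dst_ip, src_port, dst_port, proto):
    """Map a 5-tuple to a CPU core; frames of one flow always land on the same core.

    Sorting the two endpoints makes the hash symmetric, so both directions
    of a flow reach the same core (a common design choice, assumed here).
    """
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{proto}".encode()
    return zlib.crc32(key) % NUM_CORES


# Both directions of the same flow map to the same core:
fwd = core_for_flow("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
rev = core_for_flow("10.0.0.2", "10.0.0.1", 80, 1234, "tcp")
assert fwd == rev
```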
Hardware Time Stamp
- 2 Windows formats with 10-ns or 100-ns resolution
- Native UNIX format with 10-ns resolution
- 2 PCAP formats with 1-ns or 1000-ns resolution
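The listed resolutions differ only in the tick size of the hardware counter. A minimal sketch of the conversion, assuming a simple free-running tick counter (the actual on-wire timestamp layout is not specified here):

```python
def ticks_to_ns(ticks, resolution_ns):
    """Convert a hardware timestamp counter value to absolute nanoseconds."""
    return ticks * resolution_ns


def split_ns(total_ns):
    """Split nanoseconds into the (seconds, nanoseconds) pair used by
    UNIX-style timestamp structures."""
    return divmod(total_ns, 1_000_000_000)


# A counter value of 150_000_000_123 ticks at 10-ns resolution:
total = ticks_to_ns(150_000_000_123, 10)   # 1_500_000_001_230 ns
sec, ns = split_ns(total)                  # (1500, 1230)
```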
Optimum Cache Utilization
On-Board Packet Buffering
Napatech accelerators provide on-board memory for buffering Ethernet frames. Buffering guarantees delivery of data even when delivery to the application is congested. There are three potential sources of congestion: the PCI interface, the server platform, and the analysis application.
PCI interfaces provide a fixed bandwidth for transferring data from the accelerator to the application, which limits how much data can be continuously transferred from the network. For example, a 16-lane PCIe Gen3 interface can transfer up to 115 Gbps to the application. If the network speed is 2×100 Gbps, a burst of data cannot be transferred over the PCIe Gen3 interface in real time, since the incoming rate is almost twice what the interface can sustain. In this case, the on-board packet buffer on the Napatech accelerator absorbs the burst so that no data is lost, and the buffered frames are transferred once the burst has passed.
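The burst-absorption headroom follows directly from the rate mismatch. A short sketch of the arithmetic, using the 115 Gbps PCIe figure cited above and an assumed on-board buffer size (the real buffer capacity is not stated in this document):

```python
def burst_absorb_seconds(buffer_bytes, ingress_gbps, drain_gbps):
    """How long an on-board buffer can absorb a burst when the ingress
    rate exceeds the host-interface drain rate."""
    fill_rate_bps = (ingress_gbps - drain_gbps) * 1e9  # net fill rate, bits/s
    if fill_rate_bps <= 0:
        return float("inf")  # drain keeps up; the buffer never fills
    return (buffer_bytes * 8) / fill_rate_bps


# Assumed 12 GB buffer, 2x100 Gbps ingress, ~115 Gbps usable PCIe Gen3 x16:
t = burst_absorb_seconds(12 * 10**9, 200, 115)  # roughly 1.13 s of full-rate burst
```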
Servers and applications can be configured in ways that cause congestion in the server infrastructure or in the application itself. The CPU cores may be busy processing, or retrieving data from remote caches and memory locations, so new Ethernet frames cannot be transferred from the accelerator.
Likewise, an application configured with only one or a few processing threads can become overloaded and stop accepting new Ethernet frames. With on-board packet buffering, the Ethernet frames are simply delayed until the server or the application is ready to accept them. This ensures that no Ethernet frames are lost and that all the data is available for analysis when needed.
IP fragment handling
Accelerate your time-to-market and reduce risk
Napatech Software Suite provides an efficient migration path by allowing you to mix and match ports and speeds. An advanced cooling design assures the required airflow while sensors monitor voltage, power, and temperature.
A common API is provided for all Napatech accelerators allowing plug-and-play operation. An intuitive, easy-to-learn, yet powerful programming language is also provided to allow dynamic, on-the-fly configuration of filtering and intelligent multi-CPU distribution on Napatech accelerators.
Used across industries
Telecom network management
Quality of experience optimization
Application performance management
Security data collection
Ultimate tech specs.
Tech specs: Napatech 200G Performance Solution
- Hardware Time Stamp
- On-Board IEEE 1588-2008 (PTP V2)
- Pluggable Options for Time Synchronization
- Host Interface and Memory
- Environment for NT200C01-2
- Regulatory Approvals and Compliances