FREE TRIAL
Test form, fit, and function. Our engineers will help you all the way.

1x100G Solution

100% Packet Capture and Uncompromised Analysis

The PCI-SIG® certified NT100E3-1-PTP accelerator can be used for packet capture and analysis of Ethernet LAN data at 100 Gbps with zero packet loss for all frame sizes. Flexible time synchronization support is included with a dedicated PTP port.

PERFECT PERFORMANCE

For any link speed at any time

COMPLETE PORTFOLIO

From 1-200G

PLUG & PLAY

Out-of-the-box solution

POWERFUL

Accelerate your application

SCALE OUTSIDE

Synchronize multiple servers

SCALE INSIDE

Multiple accelerators in one server

IN-LINE

Full throughput with zero packet loss

MIX SPEEDS

Multiple speeds in one server

BUNDLE APPLICATIONS

More powerful server usage

1x100G Solution Features

Full line-rate packet capture

Napatech accelerators are highly optimized to capture network traffic at full line-rate, with almost no CPU load on the host server, for all frame sizes. Zero-loss packet capture is critical for applications that need to analyze all the network traffic. If anything needs to be discarded, it is a matter of choice by the application, not a limitation of the accelerator. 
 
Standard network interface cards (NICs) are not designed for analysis applications where all traffic on a connection or link needs to be analyzed. NICs are designed for communication, where data not addressed to the host is simply discarded. They therefore lack the capacity to handle the amount of data that is regularly transmitted in bursts on Ethernet connections. In these burst situations, all of the bandwidth of a connection is used, requiring the capacity to analyze all Ethernet frames. Napatech accelerators are designed specifically for this task and provide the maximum theoretical packet capture capacity.
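
As a rough illustration of what full line rate implies, the short calculation below (plain C; figures assume standard Ethernet framing, where even a minimum 64-byte frame occupies 84 bytes on the wire including preamble and inter-frame gap) derives the worst-case frame rate and inter-frame time an application must keep up with at 10 Gbps and 100 Gbps.

    #include <stdio.h>

    /* Worst-case Ethernet frame rate: smallest frame (64 B) plus
     * 8 B preamble and 12 B inter-frame gap = 84 B on the wire. */
    int main(void)
    {
        const double wire_bytes = 64.0 + 8.0 + 12.0;  /* bytes per minimum frame on the wire */
        const double wire_bits  = wire_bytes * 8.0;
        const double link_bps[] = { 10e9, 100e9 };    /* 10 Gbps and 100 Gbps links */

        for (int i = 0; i < 2; i++) {
            double frames_per_sec = link_bps[i] / wire_bits;
            double ns_per_frame   = 1e9 / frames_per_sec;
            printf("%.0f Gbps: %.1f Mpps worst case, one frame every %.2f ns\n",
                   link_bps[i] / 1e9, frames_per_sec / 1e6, ns_per_frame);
        }
        return 0;
    }

At 100 Gbps this works out to roughly 148.8 million minimum-size frames per second, which is the load a zero-loss capture path has to sustain.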

Multi-port packet sequence and merge

Napatech accelerators typically provide multiple ports. Ports are usually paired, with one port receiving upstream packets and another port receiving downstream packets. Since these two flows going in different directions need to be analyzed as one, packets from both ports must be merged into a single analysis stream. Napatech accelerators can sequence and merge packets received on multiple ports in hardware using the precise time stamps of each Ethernet frame. This is highly efficient and offloads a significant and costly task from the analysis application.

There is a growing need for analysis appliances that are able to monitor and analyze multiple points in the network, and even provide a network-wide view of what is happening. Not only does this require multiple accelerators to be installed in a single appliance, but it also requires that the analysis data from all ports on every accelerator be correlated.

With the Napatech Software Suite, it is possible to sequence and merge the analysis data from multiple accelerators into a single analysis stream. The merging is based on the nanosecond precision time stamps of each Ethernet frame, allowing a time-ordered merge of individual data streams.
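
The sketch below illustrates the kind of time-ordered merge this describes, assuming each captured frame already carries a nanosecond time stamp. The structure and function names are illustrative only, not the Napatech API; the accelerator performs the equivalent operation in hardware.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative captured-frame record: a nanosecond time stamp plus payload info. */
    struct frame {
        uint64_t ts_ns;          /* hardware time stamp, 1 ns resolution */
        const uint8_t *data;
        size_t len;
    };

    /* Merge two already time-ordered port streams (e.g. upstream and downstream)
     * into one analysis stream, ordered by time stamp (classic two-way merge). */
    size_t merge_by_timestamp(const struct frame *up,   size_t n_up,
                              const struct frame *down, size_t n_down,
                              struct frame *out)
    {
        size_t i = 0, j = 0, k = 0;
        while (i < n_up && j < n_down)
            out[k++] = (up[i].ts_ns <= down[j].ts_ns) ? up[i++] : down[j++];
        while (i < n_up)   out[k++] = up[i++];
        while (j < n_down) out[k++] = down[j++];
        return k;
    }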

Intelligent Multi-CPU distribution

Modern servers provide unprecedented processing power with multi-core CPU implementations. This makes standard servers an ideal platform for appliance development. But, to fully harness the processing power of modern servers, it is important that the analysis application is multi-threaded and that the right Ethernet frames are provided to the right CPU core for processing. Not only that, but the frames must be provided at the right time to ensure that analysis can be performed in real time.

Napatech Multi-CPU distribution is built and optimized from our extensive knowledge of server architecture, as well as real-life experience from our customers.

Napatech accelerators ensure that identified flows of related Ethernet frames are distributed in an optimal way to the available CPU cores. This ensures that the processing load is balanced across the available processing resources, and that the right frames are being processed by the right CPU cores.

With flow distribution to multiple CPU cores, the throughput performance of the analysis application can be increased linearly with the number of cores, up to 128. Not only that, but the performance can also be scaled by faster processing cores. This highly flexible mechanism enables many different ways of designing a solution and provides the ability to optimize for cost and/or performance.

Napatech accelerators support different distribution schemes that are fully configurable:

  • Distribution per port: all frames captured on a physical port are transferred to the same CPU or a range of CPU cores for processing
  • Distribution per traffic type: frames of the same protocol type are transferred to the same CPU or a range of CPU cores for processing
  • Distribution by flows: frames with the same hash value are sent to the same CPU core or a range of CPU cores for processing (illustrated in the sketch after this list)
  • Combinations of the above
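
As a minimal illustration of the flow-based scheme above, the sketch below hashes a 5-tuple flow key and maps it to a CPU core. The key layout and hash function are assumptions made for illustration; the accelerator computes a configurable hash in hardware.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative 5-tuple flow key (IPv4). For direction-independent
     * distribution, source and destination could be sorted before hashing. */
    struct flow_key {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  protocol;
    };

    /* FNV-1a over a byte range; stands in for the accelerator's hardware hash. */
    static uint32_t fnv1a(const void *data, size_t len, uint32_t h)
    {
        const uint8_t *p = data;
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 16777619u;
        }
        return h;
    }

    /* All frames of a flow produce the same hash and therefore the same core. */
    static unsigned pick_core(const struct flow_key *k, unsigned n_cores)
    {
        uint32_t h = 2166136261u;
        h = fnv1a(&k->src_ip,   sizeof k->src_ip,   h);
        h = fnv1a(&k->dst_ip,   sizeof k->dst_ip,   h);
        h = fnv1a(&k->src_port, sizeof k->src_port, h);
        h = fnv1a(&k->dst_port, sizeof k->dst_port, h);
        h = fnv1a(&k->protocol, sizeof k->protocol, h);
        return h % n_cores;
    }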

Hardware Time Stamp

The ability to establish the precise time when frames have been captured is critical to many applications.

To achieve this, all Napatech accelerators are capable of providing a high-precision time stamp, sampled with 1 nanosecond resolution, for every frame captured and transmitted.

At 10 Gbps, an Ethernet frame can be received and transmitted every 67 nanoseconds. At 100 Gbps, this time is reduced to 6.7 nanoseconds. This makes nanosecond-precision time-stamping essential for uniquely identifying when a frame is received. This incredible precision also enables you to sequence and merge frames from multiple ports on multiple accelerators into a single, time-ordered analysis stream.

To work smoothly with the different operating systems supported, Napatech accelerators provide a range of industry-standard time stamp formats and offer a choice of resolutions to suit different types of applications.

64-bit time stamp formats:

  • 2 Windows formats with 10-ns or 100-ns resolution
  • Native UNIX format with 10-ns resolution
  • 2 PCAP formats with 1-ns or 1000-ns resolution
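
The sketch below only illustrates how a single nanosecond time stamp scales into the resolutions listed above, assuming the counter is nanoseconds since the UNIX epoch. The exact epochs and bit layouts of the real formats are defined by the respective standards and APIs and are not reproduced here.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t ts_ns = 1700000000123456789ULL;  /* example capture time, in ns */

        uint64_t pcap_ns = ts_ns;                 /* PCAP, 1 ns ticks       */
        uint64_t pcap_us = ts_ns / 1000;          /* PCAP, 1000 ns ticks    */
        uint64_t unix_10 = ts_ns / 10;            /* UNIX, 10 ns ticks      */
        uint64_t win_10  = ts_ns / 10;            /* Windows, 10 ns ticks   */
        uint64_t win_100 = ts_ns / 100;           /* Windows, 100 ns ticks  */

        printf("pcap-ns=%llu pcap-us=%llu unix-10ns=%llu win-10ns=%llu win-100ns=%llu\n",
               (unsigned long long)pcap_ns, (unsigned long long)pcap_us,
               (unsigned long long)unix_10, (unsigned long long)win_10,
               (unsigned long long)win_100);
        return 0;
    }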

Optimum Cache Utilization

Napatech accelerators use a buffering strategy that allocates a number of large memory buffers where as many packets as possible are placed back-to-back in each buffer.  Using this implementation, only the first access to a packet in the buffer is affected by the access time to external memory. Thanks to cache pre-fetch, the subsequent packets are already in the level 1 cache before the CPU needs them. As hundreds or even thousands of packets can be placed in a buffer, a very high CPU cache performance can be achieved leading to application acceleration.

Buffer configuration can have a dramatic effect on the performance of analysis applications. Different applications have different requirements when it comes to latency or processing. It is therefore extremely important that the number and size of buffers can be optimized for the given application. Napatech accelerators make this possible.

The flexible server buffer structure supported by Napatech accelerators can be optimized for different application requirements. For example, applications needing short latency can have frames delivered in small chunks, optionally with a fixed maximum latency. Applications without latency requirements can benefit from data delivered in large chunks, which makes server CPU processing more efficient. Applications that need to correlate information distributed across packets can configure larger server buffers (up to 128 GB).

Up to 128 buffers can be configured and combined with Napatech multi-CPU distribution (see “Multi-CPU distribution”).
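
A minimal sketch of how an application might walk packets placed back-to-back in such a buffer is shown below. The per-packet descriptor layout is assumed for illustration (and alignment padding between packets is omitted); it is not the actual accelerator descriptor format.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative per-packet descriptor placed in front of each packet in the
     * host buffer; the real descriptor layout is defined by the accelerator. */
    struct pkt_desc {
        uint64_t ts_ns;        /* hardware time stamp */
        uint16_t stored_len;   /* bytes of packet data following the descriptor */
        uint16_t pad;          /* alignment */
    };

    /* Walk a chunk of packets laid out back-to-back in one large buffer.
     * Because access is strictly sequential, cache pre-fetch keeps the next
     * packets in cache before the CPU touches them. */
    static void process_chunk(const uint8_t *buf, size_t chunk_len,
                              void (*handle)(const struct pkt_desc *, const uint8_t *))
    {
        size_t off = 0;
        while (off + sizeof(struct pkt_desc) <= chunk_len) {
            const struct pkt_desc *d = (const struct pkt_desc *)(buf + off);
            const uint8_t *payload = buf + off + sizeof(*d);
            handle(d, payload);
            off += sizeof(*d) + d->stored_len;   /* next packet starts right after */
        }
    }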

On-Board Packet Buffering

Napatech accelerators provide on-board memory for buffering of Ethernet frames. Buffering guarantees delivery of data, even when there is congestion in the delivery of data to the application. There are three potential sources of congestion: the PCI interface, the server platform, and the analysis application.

PCI interfaces provide a fixed bandwidth for transfer of data from the accelerator to the application. This limits the amount of data that can be continuously transferred from the network to the application. For example, a 16-lane PCIe Gen3 interface can transfer up to 115 Gbps of data to the application. If the network speed is 2×100 Gbps, a burst of data cannot be transferred over the PCIe Gen3 interface in real time, since the data rate is twice the maximum PCIe bandwidth. In this case, the onboard packet buffering on the Napatech accelerator can absorb the burst and ensure that none of the data is lost, allowing the frames to be transferred once the burst has passed.
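
Using the figures from this example (a 2 x 100 Gbps ingress burst, roughly 115 Gbps of usable PCIe bandwidth, and 8 GB of onboard memory, all treated here as round numbers), a back-of-the-envelope calculation shows how long such a burst can be absorbed before the onboard buffer fills.

    #include <stdio.h>

    int main(void)
    {
        const double ingress_bps = 200e9;      /* 2 x 100 Gbps burst             */
        const double pcie_bps    = 115e9;      /* usable PCIe Gen3 x16 bandwidth */
        const double buffer_bits = 8.0 * 8e9;  /* 8 GB onboard memory, in bits   */

        /* The buffer only has to absorb the part of the burst that exceeds
         * what the PCIe interface can drain in real time. */
        double excess_bps = ingress_bps - pcie_bps;
        printf("Burst absorbed for up to %.2f s before the buffer fills\n",
               buffer_bits / excess_bps);
        return 0;
    }

With these assumed figures, the excess is 85 Gbps and 8 GB of memory absorbs it for roughly three quarters of a second, after which the drained data is transferred once the burst has passed.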

Servers and applications can be configured in such a way that congestion can occur in the server infrastructure or in the application itself. The CPU cores can be busy processing or retrieving data from remote caches and memory locations, which means that new Ethernet frames cannot be transferred from the accelerator.

In addition, the application can be configured with only one or a few processing threads, which can result in the application being overloaded, meaning that new Ethernet frames cannot be transferred. With onboard packet buffering, the Ethernet frames can be delayed until the server or the application is ready to accept them. This ensures that no Ethernet frames are lost and that all the data is made available for analysis when needed.

Tunneling Support

In mobile networks, all subscriber Internet traffic is carried in GTP (GPRS Tunneling Protocol) or IP-in-IP tunnels between nodes in the mobile core.  IP-in-IP tunnels are also used in enterprise networks. Monitoring traffic over interfaces between these nodes is crucial for assuring Quality of Service (QoS).

Napatech accelerators decode these tunnels, providing the ability to correlate and load balance based on flows inside the tunnels. Analysis applications can use this capability to test, secure, and optimize mobile networks and services. To effectively analyze the multiple services associated with each subscriber, it is important to separate them and analyze each one individually. Napatech accelerators have the capability to identify the contents of tunnels, allowing for analysis of each service used by a subscriber. This quickly provides the needed information to the application, and allows for efficient analysis of network and application traffic. The Napatech features for frame classification, flow identification, filtering, coloring, slicing, and intelligent multi-CPU distribution can thus be applied to the contents of the tunnel rather than the tunnel itself, leading to a more balanced processing and a more efficient analysis.
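
As a simplified illustration of the tunnel decoding involved, the sketch below locates the inner IP packet in a GTP-U encapsulated frame. It assumes GTPv1-U (UDP destination port 2152), ignores the optional extension, sequence and N-PDU fields, and is not the accelerator's hardware implementation.

    #include <stdint.h>
    #include <stddef.h>

    #define GTPU_UDP_PORT 2152

    /* Very simplified GTP-U decode: given a pointer to the start of the GTP
     * header (i.e. right after the outer UDP header), return a pointer to the
     * inner IP packet, or NULL if this is not a plain G-PDU. */
    static const uint8_t *gtpu_inner_ip(const uint8_t *gtp, size_t len)
    {
        if (len < 8)
            return NULL;
        uint8_t version    = gtp[0] >> 5;     /* top 3 bits: GTP version       */
        uint8_t flags_espn = gtp[0] & 0x07;   /* E, S, PN optional-field flags */
        uint8_t msg_type   = gtp[1];          /* 0xFF = G-PDU (user payload)   */

        if (version != 1 || msg_type != 0xFF)
            return NULL;
        if (flags_espn != 0)
            return NULL;                      /* optional fields not handled here */
        return gtp + 8;                       /* mandatory header is 8 bytes      */
    }

The inner IP header found this way can then feed the same flow hashing shown earlier, so load balancing follows the subscriber flows inside the tunnel rather than the tunnel endpoints.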

GTP and IP-in-IP tunneling are powerful features for telecom equipment vendors who need to build mobile network monitoring products. With this feature, Napatech can off-load and accelerate data analysis, allowing customers to focus on optimizing the application, and thereby maximizing the processing resources in standard servers.

IP fragment handling

IP fragmentation occurs when larger Ethernet frames need to be broken into several fragments in order to be transmitted across the network. This can be due to limitations in certain parts of the network, typically when GTP tunneling protocols are used. Fragmented frames are a challenge for analysis applications, as all fragments must be identified and potentially reassembled before analysis can be performed. Napatech accelerators can identify fragments of the same frame and ensure that these are associated and sent to the same CPU core for processing. This significantly reduces the processing burden for analysis applications.
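
A minimal sketch of the association principle, assuming IPv4: all fragments of one original datagram share the source address, destination address, protocol and IP Identification field, so hashing those four fields keeps them on the same core. The key layout and hash below are illustrative only.

    #include <stdint.h>

    /* Fragments of the same original IPv4 datagram share these four fields. */
    struct frag_key {
        uint32_t src_ip, dst_ip;
        uint16_t ip_id;        /* IP Identification field */
        uint8_t  protocol;
    };

    /* Simple mix; the accelerator's hardware hash serves the same purpose. */
    static unsigned frag_core(const struct frag_key *k, unsigned n_cores)
    {
        uint32_t h = k->src_ip ^ (k->dst_ip * 2654435761u)
                   ^ ((uint32_t)k->ip_id << 16) ^ k->protocol;
        h ^= h >> 16;
        return h % n_cores;
    }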

In-line application support

The Napatech accelerator family supports 100 Gbps in-line applications, enabling customers to create powerful yet flexible in-line solutions on standard servers. The more CPU-demanding the application and the higher the link speed, the greater the value of this solution. Features include (a generic forwarding sketch follows the list):

  • Full throughput bidirectional Rx/Tx up to 100G link speed for any packet size
  • Multi-core processing support with up to 128 Rx/Tx streams per accelerator
  • Customizable hash-based load distribution
  • Efficient zero copy roundtrip from Rx to Tx
  • Single bit flip selection to discard or forward each individual packet
  • Typical 50 μs round-trip latency from Rx to Tx fiber
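
The sketch below outlines the per-packet forward-or-drop decision such an in-line application makes on one of its Rx/Tx streams. The rx_next(), tx_forward(), drop() and inspect() calls are placeholders for illustration, not the Napatech API.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    struct pkt { const uint8_t *data; size_t len; };

    /* Placeholders provided elsewhere; not the Napatech API. */
    extern bool rx_next(unsigned stream_id, struct pkt *p);    /* next received packet  */
    extern void tx_forward(unsigned stream_id, struct pkt *p); /* forward without copy  */
    extern void drop(struct pkt *p);                           /* discard the packet    */
    extern bool inspect(const struct pkt *p);                  /* application decision  */

    /* In-line worker for one of up to 128 Rx/Tx streams: each packet gets a
     * single forward-or-drop verdict and is otherwise passed through unchanged. */
    void inline_worker(unsigned stream_id)
    {
        struct pkt p;
        while (rx_next(stream_id, &p)) {
            if (inspect(&p))
                tx_forward(stream_id, &p);
            else
                drop(&p);
        }
    }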

NT100E3-1-PTP accelerator

NT100E3-1-PTP-NEBS accelerator

Napatech Software Suite

Napatech Software Suite provides a well-defined application programming interface as well as support for the well-known, open-source libpcap interface and its Windows variant, WinPcap. This allows programmers to quickly integrate Napatech accelerators into their network monitoring and security applications.
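
As an example of the libpcap route, the minimal capture loop below uses only standard libpcap calls, so the same code works against any libpcap-compatible capture device. The device name "eth0" is only a placeholder and depends on the installation; libpcap development headers are assumed to be available.

    #include <pcap/pcap.h>
    #include <stdio.h>

    /* Print the capture length and time stamp of every packet received. */
    static void on_packet(u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
    {
        (void)user; (void)bytes;
        printf("captured %u bytes at %ld.%06ld\n",
               hdr->caplen, (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        /* Replace "eth0" with the capture device name of the accelerator port. */
        pcap_t *h = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (!h) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }
        pcap_loop(h, -1, on_packet, NULL);   /* run until interrupted */
        pcap_close(h);
        return 0;
    }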

A common API is provided for all Napatech accelerators allowing plug-and-play operation. An intuitive, easy-to-learn, yet powerful programming language is also provided to allow dynamic, on-the-fly configuration of filtering and intelligent multi-CPU distribution on Napatech accelerators.

Used across industries

Telecom network management

Our solutions deliver data to applications that monitor critical Ethernet/IP connections in real time, as a supplement to information provided by traditional network nodes and interfaces. This improves underlying network performance and availability. 

Customer experience analysis

Our solutions deliver data to applications that analyze streaming quality and transaction performance. These applications enable an agile, data-driven approach to improving services and the quality of customer experience.

Revenue and services optimization

Our solutions deliver data to applications that can analyze subscriber behavior as well as specific app usage, enabling operators to adjust their services and business models to maximize value.

Network performance management

Our solutions deliver data to applications that monitor and troubleshoot all network activity in real time, enabling analysis of network performance metrics from multiple locations in the network. This helps network managers to optimize infrastructure efficiency.

Ultimate tech specs

NT100E3-1-PTP & NT100E3-1-PTP-NEBS

Network Interfaces
  • Standard: IEEE 802.3 100 Gbps Ethernet LAN
  • Physical interface: 1 x CFP4 port

Supported Modules
  • 100GBASE-LR4 (Singlemode, 1310 nm)

Performance
  • Capture rate burst: 1 x 100 Gbps
  • Capture rate sustained: 1 x 40 Gbps
  • CPU load: < 5%

Hardware Time Stamp
  • Resolution: 1 ns
  • Stratum 3 compliant TCXO

On-Board IEEE 1588-2008 (PTP v2)
  • Full IEEE 1588-2008 stack
  • Packet Delay Variation (PDV) filter
  • Master and slave in IEEE 1588-2008 default profile
  • PTP slave in IEEE 1588-2008 telecom and power profiles

Time Formats
  • PCAP-ns/-μs
  • NDIS 10 ns/100 ns
  • UNIX 10 ns

Time Synchronization
  • External connectors: Dedicated pluggable
  • Internal connectors: 2 for daisy-chain support

Pluggable Options for Time Synchronization
  • PPS for GPS and CDMA
  • IEEE 1588-2008 (PTP v2)
  • NT-TS for accelerator-to-accelerator time sync

Host Interface and Memory
  • Bus type: 16-lane 8 GT/s PCIe Gen3
  • Onboard RAM: 8 GB DDR3
  • Flash: Support for 2 boot images

Statistics
  • RMON1 counters plus jumbo frame counters per port
  • Frame and byte counters per color (filter) and per host buffer
  • Counter sets always delivered as a consistent time-stamped snapshot

Environment for NT100E3-1-PTP
  • Power consumption: 75 W including CFP4 module
  • Operating temperature: 0 °C to 45 °C (32 °F to 113 °F)
  • Operating humidity: 20% to 80%
  • MTBF: 289,880 hours according to UTE C 80-810

Environment for NT100E3-1-PTP-NEBS
  • Operating temperature: –5 °C to 55 °C (23 °F to 131 °F), measured around the accelerator
  • Operating humidity: 5% to 85%
  • Altitude: < 1,800 m
  • Airflow: >= 2.5 m/s

Sensors
  • Temperature
  • Power

OS Support
  • Linux
  • FreeBSD
  • Windows

Software
  • Easy-to-integrate NT-API
  • libpcap support
  • WinPcap support
  • Software PTP stack

Physical Dimensions
  • 3/4-length PCIe
  • Full-height PCIe

Regulatory Approvals and Compliances
  • PCI-SIG®
  • NEBS Level 3
  • CE
  • CB
  • RoHS
  • REACH
  • cURus (UL)
  • FCC
  • ICES
  • VCCI
  • C-TICK

Resources and downloads

Try a free 30-day trial

Find out if the Napatech 1x100G solution is right for you
– our engineers will help you all the way.