
All in Good Time


According to the dictionary of idioms, “all in good time” expresses the idea that good things come to those who wait patiently. But in the world of network visibility and monitoring, the phrase should read “all in accurate time”, where the ability to see everything accurately and with high fidelity is extremely important.

The importance of seeing everything cannot be overstated. If everything can’t be seen, how can you know exactly what, or who, is using your network? And if the activity is malicious, what is happening, and how is it being done? If the network monitoring tools in use cannot see everything, an accurate view cannot be guaranteed. Many of these tools, whether commercial or open-source software running on whatever hardware is available, or a turnkey appliance, may not be seeing everything on the network being analyzed. This is mainly because the underlying hardware devices physically connected to the network were never designed to see it all.

These network interface cards (NICs) were designed for high-performance endpoint communications, meaning a NIC only has to see every message (packet) within the conversations it is engaged in, not every packet from every conversation on the network. Some NICs are very good at seeing most traffic, but when it comes to microbursts of activity (when network utilization hits 100% for very short periods of time), especially with small packet sizes, they typically fall short. NICs specifically designed for capturing all traffic have dedicated buffer memory, plus the ability to use large buffers in host memory, which can essentially eliminate packet loss.
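As an illustration, mainstream capture stacks already expose knobs for host-side buffering. Below is a minimal sketch using libpcap’s pcap_set_buffer_size() to enlarge the kernel capture buffer and pcap_stats() to check how many packets the kernel dropped. The interface name eth0 and the 256MB figure are placeholders, and a larger host buffer only mitigates loss on a commodity NIC; it does not replace dedicated capture hardware.

```c
/* Minimal libpcap capture setup that enlarges the host-side buffer
 * and reports kernel drops. Build with: cc capdemo.c -lpcap
 * "eth0" is a placeholder interface name. */
#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_create("eth0", errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

    pcap_set_snaplen(p, 65535);                  /* capture full packets   */
    pcap_set_promisc(p, 1);                      /* see all segment traffic */
    pcap_set_buffer_size(p, 256 * 1024 * 1024);  /* 256 MB host buffer     */
    if (pcap_activate(p) < 0) { pcap_perror(p, "activate"); return 1; }

    /* ... capture loop elided ... */

    struct pcap_stat st;
    if (pcap_stats(p, &st) == 0)
        printf("received=%u dropped_by_kernel=%u\n", st.ps_recv, st.ps_drop);
    pcap_close(p);
    return 0;
}
```

A rising ps_drop counter during microbursts is exactly the symptom described above: the NIC and kernel path could not keep up, so the monitoring tool never saw those packets.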

When it comes to timing in network monitoring, good time is accurate time. When something bad occurs on the network, knowing when it happened, and in what order, is extremely important. Accurate timing falls into three main areas: packet timestamping, packet time ordering, and timestamp clock synchronization.

For precision network and/or application performance analysis, where inter-packet timing is important, a hardware-derived timestamp is a requirement, because it allows precise measurement of the relative latency between packets. Kernel (software) derived timestamps keep improving as processor cores get faster and the kernel clock is further optimized, but network speeds are increasing at the same time. 40Gbps and 100Gbps networks are becoming more common, and at these speeds many of the captured packets will carry identical timestamps because the resolution of the timestamp clock isn’t precise enough. And if monitoring is done with standard NICs, packets from a given network segment will arrive on different interfaces with different timing, making it impossible to know which packet came before another.
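To make the resolution problem concrete: at 100Gbps, minimum-size (64-byte) Ethernet frames can arrive at roughly 148.8 million packets per second, one every ~6.7 nanoseconds, so a microsecond-resolution clock stamps many consecutive packets identically. The sketch below shows how an application can at least request adapter-sourced, nanosecond-resolution timestamps through libpcap; whether the request succeeds depends entirely on the NIC and driver, and eth0 is again a placeholder.

```c
/* Sketch: request adapter (hardware) timestamps at nanosecond precision
 * via libpcap, falling back gracefully if the NIC/driver can't supply
 * them. Build with: cc tsdemo.c -lpcap */
#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_create("eth0", errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

    /* Ask for timestamps taken by the adapter itself, not the kernel. */
    if (pcap_set_tstamp_type(p, PCAP_TSTAMP_ADAPTER) != 0)
        fprintf(stderr, "adapter timestamps unavailable, using host clock\n");

    /* Nanosecond resolution; microseconds are too coarse at 40/100Gbps. */
    if (pcap_set_tstamp_precision(p, PCAP_TSTAMP_PRECISION_NANO) != 0)
        fprintf(stderr, "nanosecond precision unavailable\n");

    if (pcap_activate(p) < 0) { pcap_perror(p, "activate"); return 1; }
    /* ... capture loop elided ... */
    pcap_close(p);
    return 0;
}
```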

Many specialized packet capture NICs can merge packets from many ports into a single stream in hardware, which guarantees that application software receives packets in the exact order they traversed the network. If packets are not delivered to the application in order, it is nearly impossible to sort them into time order using software alone. When applications like Suricata (an open-source IDS) start receiving packets in the wrong order, those packets and flows are dropped by default and not analyzed any further: if the packet ordering doesn’t make sense, these kinds of cybersecurity applications do not continue processing to a deeper level. Suspicious activity on the network can therefore go undetected, because the malicious flows are dropped before any serious threat-detection processing is completed.
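The merge itself is conceptually simple, which is why it can be done in hardware at line rate. The sketch below is purely illustrative and not any vendor’s API: it merges two per-port packet streams into one time-ordered stream by comparing timestamps, and it only works if each port’s stream is already ordered and both ports stamp packets from the same synchronized clock, which is exactly what the specialized NICs guarantee.

```c
/* Illustrative sketch: merging per-port packet streams into one
 * time-ordered stream, as capture NICs do in hardware. Assumes each
 * input stream is already sorted and timestamped by the same clock. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct pkt { uint64_t ts_ns; int port; };

/* Two-way merge; hardware merges N ports with the same comparison. */
static void merge(const struct pkt *a, size_t na,
                  const struct pkt *b, size_t nb,
                  struct pkt *out)
{
    size_t i = 0, j = 0, k = 0;
    while (i < na && j < nb)
        out[k++] = (a[i].ts_ns <= b[j].ts_ns) ? a[i++] : b[j++];
    while (i < na) out[k++] = a[i++];
    while (j < nb) out[k++] = b[j++];
}

int main(void)
{
    struct pkt port0[] = { {100, 0}, {250, 0}, {900, 0} };
    struct pkt port1[] = { {175, 1}, {300, 1} };
    struct pkt merged[5];

    merge(port0, 3, port1, 2, merged);
    for (size_t k = 0; k < 5; k++)
        printf("t=%llu ns from port %d\n",
               (unsigned long long)merged[k].ts_ns, merged[k].port);
    return 0;
}
```

Note the precondition: if either port’s timestamps are coarse or unsynchronized, the comparison in the merge is meaningless, which is why ordering and timestamping quality are inseparable.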

Latency in today’s high-speed networks continues to shrink; even ISPs providing internet service to homes can deliver sub-millisecond latency. To accurately measure the time it takes for a packet to travel from host A to host B, both hosts must be precisely synchronized to the same master clock. The accuracy limitations of a kernel-derived timestamp clock also limit how accurately that kernel clock can be synchronized to an external source, making it impossible to know, even to the microsecond, exactly when something happened or how long it took. This can affect the accuracy of the overall picture being taken of the network at any point in time.
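On Linux, the gap between a NIC’s hardware clock and the kernel clock can be inspected directly. The sketch below assumes the NIC exposes a PTP hardware clock at /dev/ptp0 (the device path varies by system); it reads the hardware clock and CLOCK_REALTIME back-to-back and prints the apparent offset. The measurement is rough, since the two reads are not atomic, but it illustrates why synchronization tools such as phc2sys exist to discipline one clock to the other.

```c
/* Sketch, assuming a Linux host with a PTP hardware clock at /dev/ptp0.
 * FD_TO_CLOCKID is the kernel's fd-to-clockid mapping for dynamic
 * POSIX clocks (as used in the kernel's own testptp.c). */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

int main(void)
{
    int fd = open("/dev/ptp0", O_RDONLY);
    if (fd < 0) { perror("/dev/ptp0"); return 1; }

    struct timespec phc, sys;
    clock_gettime(FD_TO_CLOCKID(fd), &phc);  /* NIC's hardware clock */
    clock_gettime(CLOCK_REALTIME, &sys);     /* kernel system clock  */

    double off = (phc.tv_sec - sys.tv_sec) +
                 (phc.tv_nsec - sys.tv_nsec) / 1e9;
    printf("PHC - system offset: %.9f s\n", off);

    close(fd);
    return 0;
}
```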


Article written by: Peter Sanders, VP Global Sales Engineering at Napatech.
