
Packet Capture As a Service

I previously wrote a blog called "Packet Capture the Complete Source to the Truth" because I truly believe that packet capture is the ultimate source of truth: all the details are saved and can be inspected and analyzed over and over again until any issues have been clarified or solved.

Virtualization

These days the world is rapidly being virtualized, with applications and services moving into cloud environments, be they private, public or hybrid clouds. Because of this, the need for packet capture has only grown. With this trend, the packet capture service is not only going virtual, but is also being fitted into the cloud infrastructure to offer the same flexibility and scalability as the applications it monitors. Furthermore, it must be able to monitor all the data flowing between the virtual applications in the cloud: both the packets that travel in and out of the servers and onto the network interconnecting them (known as North-South traffic) and the packets that travel between the virtual machines inside the individual servers (known as East-West traffic). The conclusion is that we need Packet-Capture-as-a-Service, instantiated directly in the same cloud environment as the applications, and we need it now.
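To make the North-South versus East-West distinction concrete, here is a minimal sketch (not Napatech code) of how captured packets could be tagged by direction, assuming the virtual network's address ranges are known. The prefixes and function names below are hypothetical examples.

from ipaddress import ip_address, ip_network

# Hypothetical address ranges used by the virtual machines in the cloud.
INTERNAL_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    """True if the address belongs to the virtual/internal address space."""
    ip = ip_address(addr)
    return any(ip in net for net in INTERNAL_NETWORKS)

def classify_direction(src: str, dst: str) -> str:
    """East-West if both endpoints are internal, otherwise North-South."""
    return "east-west" if is_internal(src) and is_internal(dst) else "north-south"

print(classify_direction("10.0.1.5", "10.0.2.7"))  # east-west
print(classify_direction("10.0.1.5", "8.8.8.8"))   # north-south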

In a cloud with a huge number of virtual applications running, a significant number of them packet capture instances, how they are managed becomes very important. A key feature is being able to get a group of packet capture applications to "work together" as one service, with all the captured data presented as one database that can be accessed through a single common entity. It is important to remember that we are talking about a huge database here, one that will be measured in petabytes for large customers.

This sort of scalability will be a key differentiator, as fast and easy access to data matters most when you need it. It calls for a distributed "database" scheme that makes it possible to access and search the huge number of individual databases, one from each packet capture instance, in a fast and effective way. At Napatech, we already provide this type of support in our current packet capture applications and are researching the possibility of providing it as part of a virtual implementation. If we can offer this to our customers, it will be key to utilizing captured data easily and effectively.
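The sketch below illustrates the general idea of such a distributed scheme; it is not Napatech's implementation. A single "common entity" fans a search out to many per-instance capture databases in parallel and merges the results. CaptureInstance and its search() method are hypothetical stand-ins for whatever API each instance would expose.

from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CaptureInstance:
    name: str
    records: List[Dict]  # stand-in for an instance's local capture database

    def search(self, query: Dict) -> List[Dict]:
        """Return locally stored records matching the given field filter."""
        return [r for r in self.records
                if all(r.get(k) == v for k, v in query.items())]

def federated_search(instances: List[CaptureInstance], query: Dict) -> List[Dict]:
    """Query every capture instance concurrently and merge results by timestamp."""
    with ThreadPoolExecutor(max_workers=len(instances)) as pool:
        partials = pool.map(lambda inst: inst.search(query), instances)
    merged = [record for partial in partials for record in partial]
    return sorted(merged, key=lambda r: r["timestamp"])

# Example: two capture instances, one query through the single access point.
a = CaptureInstance("capture-a", [{"timestamp": 2, "src": "10.0.1.5", "dst": "10.0.2.7"}])
b = CaptureInstance("capture-b", [{"timestamp": 1, "src": "10.0.1.5", "dst": "10.0.2.7"}])
print(federated_search([a, b], {"src": "10.0.1.5"}))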

To make the data even easier and faster to digest, it would be beneficial to have access not only to the raw packet data but also to a smaller flow database. The flow database is quicker to access and search, and once the correct flow has been identified, the corresponding raw data packets can easily be retrieved from the packet database.
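A minimal sketch of that two-step lookup follows, under the assumption that each capture instance keeps a compact flow table alongside its raw packet store. The data layout and names are hypothetical; the point is that the small flow database is searched first, and only the matching raw packets are then pulled from the large packet store.

from typing import Dict, List, Tuple

FlowKey = Tuple[str, str, int, int, str]  # src, dst, sport, dport, proto

# Small flow database: one record per flow, pointing at its packet indices.
flow_db: Dict[FlowKey, List[int]] = {
    ("10.0.1.5", "10.0.2.7", 44321, 443, "tcp"): [0, 2],
}

# Large raw packet store (here just a list; in practice a PCAP or packet store).
packet_store: List[bytes] = [b"pkt0", b"pkt1", b"pkt2"]

def packets_for_flow(key: FlowKey) -> List[bytes]:
    """Find the flow in the compact index, then fetch only its raw packets."""
    return [packet_store[i] for i in flow_db.get(key, [])]

print(packets_for_flow(("10.0.1.5", "10.0.2.7", 44321, 443, "tcp")))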

Conclusion

Packet capture is still the complete source to the truth, perhaps even more so than before, as applications are now coming, going and moving around, while we are also subject to attacks that happen locally inside a single server, between applications. Such attacks can only be detected by monitoring the East-West traffic, which is also why we need to capture this data for any post-incident analysis.

Another important point about virtual environments is that by the time a cyberattack is detected, the applications involved may be long gone, and the packet capture, along with any logs from the orchestration layer, is the only trace that remains.
