I recently attended the FloCon 2017 conference in San Diego. FloCon is a network security conference focused on large-scale network flow analytics, and every year it draws a large number of enthusiasts who share their experiences in solving different network problems using different aspects of flow analytics. The focus, of course, is on detecting any type of unwanted or dangerous traffic in their networks and ultimately preventing any loss of intellectual property.
One of the reasons I always find FloCon interesting is that, unlike so many other industry events, the speakers here are usually people with hands-on experience in implementing and analyzing the solutions they present. The audience actually gets to hear from engineers and researchers who have spent a lot of time working on these solutions, and in my opinion that takes everything to a whole new level. The presentations are usually technical enough to initiate interesting discussions and offer useful details. Plus, it is easy to approach the speakers during a break to discuss the topics further.
In other words, this is a conference for those who want to share knowledge and experience, and who are looking for as much feedback as possible. The event also draws visitors, like myself, who want to learn so that we can build solutions that help these experts with their analytics and implementation work and enable them to achieve the results they are striving for.
Packet vs. Flow – abstraction is needed
A general opinion among the attendees was that a further level of abstraction above flows is needed. Raw packets create a considerably large volume of data to analyze, and flows, while more compact, pose a similar problem at scale, so a next level of abstraction would be highly beneficial for almost any form of analysis.
For instance, connecting your browser to a homepage “xyz” on the internet involves several flows: first one for DNS, and thereafter several flows for downloading different parts of the homepage. It would be much more convenient to have a single abstraction saying “downloading ‘xyz’ homepage”, at least as long as it is a “standard” download sequence. This makes it much easier to do high-level analysis: all the “nice” or “normal” flows can be set aside, so the focus stays on the potentially problematic or malicious flows that need attention.
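To illustrate the idea, here is a minimal sketch of that kind of abstraction: collapsing a DNS lookup plus the subsequent download flows from the same client into one high-level “downloading homepage” event. The `Flow` record and the time-window heuristic are my own simplified assumptions for illustration, not any particular product's data model.

```python
from dataclasses import dataclass

# Hypothetical flow record; in practice this would come from a flow
# collector (e.g. a NetFlow/IPFIX export), with many more fields.
@dataclass
class Flow:
    client: str     # client IP address
    server: str     # server hostname or IP
    protocol: str   # e.g. "DNS", "HTTPS"
    start: float    # flow start time, seconds

def summarize_page_loads(flows, window=2.0):
    """Collapse a DNS lookup followed by download flows from the same
    client within `window` seconds into one abstract event string."""
    events = []
    for i, f in enumerate(flows):
        if f.protocol != "DNS":
            continue
        # Non-DNS flows from the same client shortly after the lookup
        # are assumed to belong to the same page-load sequence.
        related = [g for g in flows[i + 1:]
                   if g.client == f.client
                   and g.protocol != "DNS"
                   and 0 <= g.start - f.start <= window]
        if related:
            events.append(f"{f.client} downloading '{f.server}' homepage "
                          f"({len(related)} flows)")
    return events
```

With an abstraction like this, an analyst sees one event per page load instead of a handful of raw flows, and anything that does not fit a recognized sequence stands out for closer inspection.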
Another great experience for me as a Napatech employee was to hear several presenters mention how they used our products in their solutions to achieve their goals. And they were all success stories with only positive feedback about our products and the support we have provided. That is particularly gratifying when you are out on the event floor, trying to find out what our end-user customers think of our products. Honestly, we couldn’t have received much better feedback than at FloCon 2017.
Diverse attendee group
The attendees and speakers came from everywhere and represented organizations and companies from all over the industry: from university professionals from Utah and Stanford, to telcos like AT&T and Verizon, to companies like Northrop Grumman, Cisco, Facebook and Deloitte & Touche, to government and military, represented by DISA and SPAWAR, and of course the CERT organization itself.
The diversity of the attending group, as well as the broad spectrum of user scenarios presented, gave an interesting overview of the threat landscape we are all subject to in the internet era. We are all potential targets of a huge and ever-expanding hacking industry.
So, what is my take-away and conclusion from this conference?
A lot of hacking and attacking is taking place on the internet and within organizations today. This will be an ongoing problem that we will be tackling for years to come, and we need to be prepared for it. Offloading and acceleration are very much needed to help analysis applications focus on the problematic flows or packets and scale to the even faster networks of tomorrow. This is where Napatech enables applications to be smarter than the networks they need to manage and protect, and helps our customers stay ahead of the data growth curve.