Following some recent work to run VXLAN using HW offload, the Napatech OVS offload initiative now explores another important feature: mirror offload. Learn how this enables monitoring without affecting overall system performance.
I visited the NFV World Congress for the first time last year. It was the year when the ideology of having ‘everything’ in software (SW) was the main topic, and if one dared to ask whether hardware (HW) acceleration should be used somewhere, you were booed. The year 2015 was also very much about orchestration, with little on actual performance or infrastructure, except for OPNFV, which was actually paying attention to having a solid infrastructure before moving further. It was with great enthusiasm that I went to the congress this time: would they at this point have realized the need for a good infrastructure and for HW acceleration?
Interestingly enough, it seems that either the idealistic SW-only people were missing from the conference or times have changed and people are now friendlier towards acceleration, because I didn’t hear any booing when acceleration was mentioned. On the contrary, there seemed to be quite some interest around it. The reason, I believe, is 5G and its throughput/latency requirements, which I do not see being handled without acceleration at some level. I especially liked the presentation from Ramki Krishnan, the Co-Chair of the NFVRG, where he talked about some of the next things the NFV Research Group will be looking at: “Infrastructure Service Assurance – Road to 5G”.
Now, some might argue that this isn’t HW acceleration, but I would argue that it is: it is just the CPUs providing the extra tweaks needed to ensure predictable performance. The next step, in my mind, will be to have the same level of configuration on the NICs to ensure proper QoS.
SR-IOV – SHOULD I STAY OR SHOULD I GO?
One of the things on my agenda for the NFV World Congress was to figure out the need for SR-IOV. My take on the matter is that the general idea is still that SR-IOV solves the performance/latency issues. But I would argue that the same can be achieved by introducing the correct level of HW acceleration and OVS configuration.
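To illustrate the OVS side of that argument, an accelerated userspace datapath can be wired up roughly like this (a sketch assuming an OVS build with DPDK support; the bridge name, port names and PCI address are placeholders):

```shell
# Enable the DPDK datapath in OVS (assumes OVS was built with DPDK support)
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Create a bridge using the userspace (netdev) datapath
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Attach a physical port by PCI address (placeholder address)
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 \
    type=dpdk options:dpdk-devargs=0000:01:00.0

# Attach a vhost-user port for the VNF instead of passing an SR-IOV VF through
ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 \
    type=dpdkvhostuser
```

With a setup along these lines, the VNF talks to OVS over vhost-user rather than owning a VF directly, so the switch keeps visibility of the traffic while the accelerated datapath provides the throughput SR-IOV is usually chosen for.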
THE WAR OF MANO
After attending a few sessions and keynotes, I felt like this year’s NFV World Congress would be about MANO (again), but this time with more intensity, and it seems like the war of MANO is about to start. One of the problems NFV is facing right now is the diversity of choices, which is both good and bad. It is good from an innovation perspective, because everyone is competing to have the edge, but one could also argue that from a quality and usability perspective it would be nice to have something solid, especially when moving towards deployment.

One of my favourite MANO projects is Telefónica’s OpenMANO. I have not worked with any MANO, so my choice is purely based on the presentations I have seen and the talks I have attended. OpenMANO seems to take into account the things I would have taken into account if I had developed the system myself, such as ensuring that the cache, CPU and IO are all local to the VNF to ensure optimal performance. Accessing remote IO across the QPI not only gives poor latency, it also pollutes the cache of the other CPU because of Data Direct I/O (DDIO). That cache pollution will affect the VNFs running on that CPU, so the noisy-neighbour problem exists not only between cores on the same CPU but also across sockets when IO devices are involved.
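To make the locality point concrete: on Linux, the NUMA node a NIC’s PCI device is attached to can be read from sysfs and compared against the node the VNF’s vCPUs are pinned to. A minimal sketch (the helper name is mine; the path layout follows the standard Linux sysfs convention):

```python
import os

def nic_numa_node(iface, sysfs_root="/sys/class/net"):
    """Return the NUMA node of a NIC's PCI device, or None if the
    kernel does not report one (file missing, or node reported as -1).

    Reads /sys/class/net/<iface>/device/numa_node, the standard
    Linux sysfs location for a PCI device's NUMA node."""
    path = os.path.join(sysfs_root, iface, "device", "numa_node")
    try:
        with open(path) as f:
            node = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return None
    # The kernel reports -1 when NUMA information is unavailable
    return node if node >= 0 else None
```

If the reported node differs from the socket hosting the VNF, every packet crosses the inter-socket link (QPI) and, via DDIO, lands in the remote socket’s last-level cache, which is exactly the cross-socket noisy-neighbour effect described above.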