What a difference a year makes! At Mobile World Congress 2015, NFV was one of the catchwords of the conference. Every self-respecting vendor had something to say about NFV. Ambitions were dizzyingly high and everyone was giddy with expectation.
Mobile World Congress 2016 was most memorable for the muted mentions of NFV. There were many vendors with NFV solutions, but it didn’t capture the imagination to the same extent as the year before. One would be forgiven for feeling that the industry had lost interest and moved on to more exciting topics like 5G and IoT.
So, what is the status of NFV? Are we quietly delivering on the promise of NFV, so eagerly embraced in 2015, or have we entered the trough of disillusionment?
From Puritanism to pragmatism
2015 was the year when NFV progressed from proof-of-concept to practical solutions. Inevitably, the gaps and shortcomings began to appear.
This should surprise nobody, as every technology needs to go through these teething issues. What has been interesting is the sudden switch from a puritan adherence to the guiding principles of NFV, as first articulated in the famous NFV white papers and further elaborated by the work of the ETSI NFV Industry Specification Group (ISG), to an extreme pragmatism in solving performance gaps and other issues in NFV.
From 2012 to 2015, it was generally agreed that NFV solutions had to be open, non-proprietary and not lead to vendor lock-in. They should support the NFV vision of virtual functions that could be service-chained and freely deployed in a multi-server, virtualized environment on a common, standard hardware platform.
Yet, if we look at the solutions that have actually been adopted, even here in 2016, we see a different story. SR-IOV has been accepted as a solution to NFV performance issues even though it relies on vendor-specific NIC features and drivers, and is a hindrance to virtual function mobility. Whereas hardware acceleration and hardware offload were held in disregard up to 2015, they are now de rigueur, with OVS fully offloaded to NPU-based network adapters!
Solving micro-problems and creating macro-problems
This pragmatic approach is a welcome and necessary development as it would be hard to envisage any progress being made otherwise. The question is whether we are solving micro-problems and creating macro-problems.
SR-IOV is a good case in point. The reason SR-IOV was adopted was that virtual switches, Open vSwitch (OVS) in particular, did not appear to provide the performance needed. If one concludes that OVS is the bottleneck, the obvious answer is to bypass OVS, which is exactly what has been done with SR-IOV. Problem solved, right? Wrong!
One of the consequences of SR-IOV is that virtual functions are now tied directly to the NIC. In other words, they cannot be moved, or at least not without complication and difficulty. One of the key advantages of NFV has thus been lost: the ability to move virtual functions, as needed, in order to optimize resource usage.
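To make the binding concrete, here is a hedged sketch of how SR-IOV virtual functions are typically created and handed to a guest on a Linux host. The interface name, PCI address, domain name and device XML file are all illustrative, not taken from any specific deployment:

```shell
# Create 4 virtual functions (VFs) on a hypothetical SR-IOV-capable NIC.
# The interface name enp3s0f0 is illustrative; actual names vary per host.
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

# Each VF appears as its own PCI device on this particular host
lspci | grep "Virtual Function"

# The VF is then passed straight through to the guest, bypassing the
# virtual switch entirely. The guest is now bound to this host's physical
# NIC: migrating it elsewhere requires an equivalent VF on the target and
# a detach/re-attach, which is why VF-attached workloads are hard to move.
# (my-vnf and vf-hostdev.xml are hypothetical names.)
virsh attach-device my-vnf vf-hostdev.xml --persistent
```

The last step is precisely where mobility is lost: the attached device is a slice of one server's physical NIC, not an abstract network resource the orchestrator can relocate.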
Remember that virtualization was originally introduced to optimize the use of compute resources in data centers and thereby reduce the operational cost of running a data center. This focus on cost, space and power has not diminished and is still the main concern of any data center manager. Removing the ability to optimize cost by hindering virtual function mobility would therefore gravely undermine the overall NFV business case!
SR-IOV is, therefore, a great example of solving one problem, but creating another. But, it doesn’t stop there. It is also an example of jumping to conclusions.
Focus on the right problems in the context of the bigger picture
The assumption underlying the adoption of SR-IOV was that OVS was the bottleneck and it had to be bypassed in order to achieve performance. This assumption is not true. In fact, OVS can run at full throughput and use very few server CPU resources in doing so. Napatech demonstrated this at Mobile World Congress this year.
The bottleneck was not OVS, but the standard NICs being used to get data in and out of the server. By replacing the standard NIC with a NIC designed for NFV, supporting full theoretical throughput and efficient transfer of data to OVS and onwards to the virtual function, it is possible to get OVS with DPDK to run, in software, at 40 Gbps (60 Mpps) using less than one CPU core. This was achieved without bypassing OVS and without running the entire OVS on the NIC, a solution also being proposed right now.
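For readers unfamiliar with this setup, the following is a minimal sketch of an OVS-with-DPDK configuration of the kind described above, where OVS stays in the data path and its poll-mode driver is pinned to a single core. Port names and the PCI address are illustrative assumptions, and the specific NIC and achievable throughput depend entirely on the hardware:

```shell
# Enable DPDK support in Open vSwitch and restrict its poll-mode
# driver (PMD) threads to one core (CPU mask 0x2 = core 1)
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x2

# Create a userspace (netdev) bridge and add a DPDK-bound physical port;
# the PCI address 0000:03:00.0 is a placeholder for the NFV NIC
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
    options:dpdk-devargs=0000:03:00.0

# Add a vhost-user port carrying traffic onwards to the virtual function
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 \
    type=dpdkvhostuser
```

The point of this arrangement is that OVS remains a software switch under the orchestrator's control, so virtual function mobility is preserved; the performance comes from efficient packet delivery into OVS, not from bypassing it.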
Napatech NFV NIC presented at Mobile World Congress 2016
Another aspect of this problem that needs to be taken into account is the impact on Management and Orchestration (MANO). To say that there are challenges at the MANO level right now would be putting things lightly. There appears to be no consensus on the way forward. In addition, MANO, as a concept, is very new to the telecom environment and will take time for everyone to understand, let alone adopt and trust to take care of day-to-day operations.
We should therefore be careful about introducing solutions at the NFV infrastructure level that have a significant complicating impact at the MANO level. What is called for right now is a simple MANO solution that can be trusted to get the job done, and the NFV infrastructure should adapt to and support this as much as possible. MANO will mature and become more sophisticated, and even now there may well be MANO implementations that are far advanced. But, at an industry level, it would be foolhardy to introduce NFV infrastructure solutions that require sophisticated MANO workflows, as then we are unlikely to ever get off the ground with NFV.
So, while pragmatism is laudable, it needs to be focused on the real problems and must also consider the bigger picture. Only then can we provide solutions that accelerate NFV adoption and begin to deliver on the promise of NFV that we all hoped for less than a year ago.