Stage 3: Orchestrated Reconfigurable Computing
In my previous blogs, I described the first two stages of reconfigurable computing: solution-driven reconfigurable computing and customer-invoked reconfigurable computing. I discussed how the behavior of a system can be changed through software thanks to the ability to reconfigure FPGAs on the fly. In this last part of the 3-part series, I would like to focus on orchestrated reconfigurable computing, with virtualization and virtual functions as a key deployment scenario.
In part 2 of this series, I discussed the software frameworks required for customer-invoked reconfigurable computing. Once these frameworks are in place to enable direct programming and partial reconfiguration of FPGAs, the final hurdle to be crossed is the orchestration of FPGA-based functionality. Whether this is P4 software or FPGA images and blocks for partial reconfiguration, a mechanism is required to orchestrate the deployment of FPGA-based software and firmware onto the available FPGA resources in a data center.
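To make the deployment step concrete, here is a minimal sketch of what such an orchestration primitive might look like. All names here (`FpgaSlot`, `ImageCatalog`, `deploy`) are hypothetical illustrations, not a real API; the "load" is a placeholder for a real partial-reconfiguration call such as one a framework like OPAE would expose.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FpgaSlot:
    """A partial-reconfiguration region on some FPGA in the data center."""
    host: str
    region: str
    loaded_image: Optional[str] = None

@dataclass
class ImageCatalog:
    """Maps a function name (e.g. 'ipsec-encrypt') to a bitstream file."""
    images: dict = field(default_factory=dict)

def deploy(function: str, catalog: ImageCatalog, slots: list) -> FpgaSlot:
    """Find a free slot and record the matching image as loaded.

    In a real orchestrator this is where the bitstream would be pushed
    to the device via the vendor's partial-reconfiguration interface.
    """
    if function not in catalog.images:
        raise LookupError(f"no FPGA image published for {function!r}")
    for slot in slots:
        if slot.loaded_image is None:
            slot.loaded_image = catalog.images[function]
            return slot
    raise RuntimeError("no free FPGA slot available")
```

The key design point is the separation between the catalog of published images and the inventory of reconfigurable slots: the orchestrator's job is to bind one to the other on demand.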
There is currently no standard mechanism for achieving this, but one will soon be required as FPGAs are increasingly used to accelerate workloads and functions. Both Intel and Xilinx are addressing these challenges as part of their OPAE and SDAccel software frameworks, respectively, but it is still early days.
The scenario to consider is the automated deployment and reconfiguration of data center infrastructure. In a virtual environment, for example, reconfigurable computing can be used to accelerate functions such as encryption, compression or transcoding. However, the need for this acceleration only becomes known when a service function chain requiring it is created. It is at this point that orchestration must identify where in the data center FPGA-based reconfigurable computing resources are located, determine which FPGA-based functionality is available to provide the required acceleration, match the available FPGA resources with that functionality, and finally deploy the FPGA functionality on those resources and assure that it is working.
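The matching step above can be sketched as a simple placement function. This is an illustrative toy, not any vendor's orchestration API: `plan_service_chain` and the inventory format are assumptions, and the fallback label `"software"` stands in for running the function on a CPU when no FPGA image matches.

```python
def plan_service_chain(chain, fpga_inventory):
    """Match each function in a service function chain against the FPGA
    images advertised by hosts in the data center.

    chain          -- ordered list of function names in the chain
    fpga_inventory -- dict mapping host name to the set of FPGA images
                      that host can load
    Returns a list of (function, placement) pairs; functions with no
    matching FPGA image fall back to a software implementation.
    """
    plan = []
    for fn in chain:
        host = next(
            (h for h, images in fpga_inventory.items() if fn in images),
            None,
        )
        plan.append((fn, host if host is not None else "software"))
    return plan
```

A real orchestrator would also weigh slot availability, locality to the rest of the chain, and reconfiguration cost, but the shape of the problem (inventory lookup, match, placement decision) is the same.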
Today, orchestration focuses on virtual functions and containers for processor-based functionality, but it will soon need to be extended to support FPGA-based functionality as well.
As we move from virtualization and containers to serverless computing and lambda (anonymous) functions, it is no stretch of the imagination to expect that some of these functions will be instantiated on processors while others are instantiated on FPGAs, depending on the nature of the function.
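Such a per-function placement decision could look like the toy policy below. Everything here is a hypothetical sketch: the threshold, the set of offloads and the function name are invented for illustration, and a production runtime would use far richer signals (queue depth, device utilization, data locality) than payload size alone.

```python
def place_function(name, payload_bytes, fpga_offloads, threshold=64 * 1024):
    """Toy placement policy for a serverless runtime.

    Run the invocation on an FPGA only when an offload image exists for
    the function AND the payload is large enough to amortize the
    host-to-FPGA transfer; otherwise run it on a CPU.
    """
    if name in fpga_offloads and payload_bytes >= threshold:
        return "fpga"
    return "cpu"
```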
I have full confidence that the challenges mentioned above will be addressed, as the power, performance and flexibility of reconfigurable computing are a perfect match for the highly dynamic data centers of the near future.
Nevertheless, while the required frameworks evolve, IT organizations do not have to wait and see: they can engage with qualified and experienced reconfigurable computing solution providers such as Napatech and begin benefiting from reconfigurable computing immediately.
Here you can download the Napatech Reconfigurable Computing flyer