In this blog, Chief Product Architect Alex Agerholm looks at the challenges of cloudification and explores the potential of configurable, FPGA-based hardware acceleration in cloud infrastructure.
Data processing and computing power are moving into the cloud, which is mainly a virtualized environment whose huge flexibility and scalability come from a massive infrastructure of uniform computing platforms working in parallel. This means the cloud uses the same servers everywhere: standard server platforms stacked in huge numbers. It scales fantastically because you can simply add more servers to the pool to make more computing power available.
The flexibility is achieved through a fully virtualized environment where jobs can be scattered across the cloud and individual tasks can be moved around as needed, even during execution.
But a solution with such high scalability and flexibility does not come for free. Virtualization has a cost, and so does moving tasks and data around: there is always an overhead, even if it is small compared to the jobs themselves. Similarly, the orchestration needed to control this infrastructure is neither trivial nor free.
ISSUES IN CLOUDIFICATION
One of the Achilles heels of the uniform cloud approach is that it is very difficult to add hardware acceleration: doing so breaks the uniformity of the cloud and makes it harder to scale, because it is hard to predict how many servers will need the acceleration hardware. This can of course be solved by adding hardware acceleration to some servers and using a more sophisticated orchestration layer that schedules jobs to the right servers based on job type.
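As a minimal sketch of that scheduling idea, the following illustrates how an orchestrator might prefer servers that carry the accelerator a job type needs, falling back to plain servers otherwise. The `Server` class and `schedule` function are purely illustrative and not a real orchestrator API.

```python
# Illustrative sketch: schedule jobs onto the least-loaded server that
# offers the accelerator a job requires; fall back to any server if
# none has it. Not a real orchestrator API.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    accelerators: set = field(default_factory=set)  # e.g. {"fpga"}
    load: int = 0                                   # running job count

def schedule(job_type, required_accel, servers):
    """Pick a server for the job, preferring accelerator-equipped ones."""
    capable = [s for s in servers
               if required_accel is None or required_accel in s.accelerators]
    pool = capable or servers          # fall back to software-only servers
    chosen = min(pool, key=lambda s: s.load)
    chosen.load += 1
    return chosen.name
```

A real scheduler would of course weigh many more factors (memory, locality, reconfiguration cost), but the core matching of job type to hardware capability is the same.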
Another factor to consider is the green wave hitting organizations as they strive to reduce their carbon footprint. Data centers are working to lower their power consumption by using lower-power servers that take up less rack space, reducing both the space and the cooling needed to run them. But how does that fit with the cloud approach?
There is no doubt that a dedicated, purpose-built solution can be architected to scale to perfection, ensuring it has exactly the computing power required and uses only the power needed for the job at hand. Virtual environments, despite their overhead, can often be utilized better because they can run multiple jobs at different times: when one job or job type is not running, the server can run something else.
So on one hand we want more computing power with less rack space, power, and cooling; on the other, we want the flexibility and scalability to utilize the servers better. This is undoubtedly a tall order, so the question is: how can it be solved?
The solution could be to build a compact, scalable, and flexible platform that offers hardware acceleration when needed and reduces power consumption when it is not. And if the hardware acceleration can be made configurable, so it can serve almost any type of job and simply be configured by the orchestration layer, then we have our solution.
But how can we build such a platform? The answer lies in FPGAs, a technology that has been available for years and can provide purpose-built hardware acceleration. If we fitted some servers in a data center with an FPGA, either as a dedicated PCI adapter with optional Ethernet connectivity, as an FPGA connected directly to the CPU, or as an FPGA mounted directly on the motherboard, then we would have configurable hardware acceleration available in a pool of servers, if not in every server (something to think about). And with a sophisticated orchestration layer able to dynamically load each FPGA with the appropriate acceleration, purpose-built for the jobs running on that server, we could accelerate the jobs and thereby reduce execution time and resource use. That would be a major achievement.
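To make the dynamic-loading idea concrete, here is a hedged sketch of an orchestrator reconfiguring a server's FPGA with the image matching the next job's type before dispatching it. The bitstream names, job types, and `load_bitstream` call are assumptions for illustration, not any vendor's real FPGA runtime API.

```python
# Hypothetical sketch: load the FPGA image matching a job's type before
# running the job. Bitstream names and the load mechanism are invented
# for illustration; real systems use a vendor reconfiguration runtime.

BITSTREAMS = {                       # job type -> FPGA image (assumed names)
    "ml-inference": "ml_accel.bit",
    "packet-capture": "pktproc.bit",
    "vswitch": "vswitch_offload.bit",
}

class FpgaServer:
    def __init__(self):
        self.loaded = None           # currently loaded FPGA image, if any

    def load_bitstream(self, image):
        # In practice this would trigger (partial) reconfiguration via
        # the vendor's tooling; here we only record the state.
        self.loaded = image

def dispatch(server, job_type):
    """Reconfigure the FPGA only when the job needs a different image."""
    image = BITSTREAMS.get(job_type)
    if image and server.loaded != image:
        server.load_bitstream(image)
    return f"running {job_type} with {server.loaded or 'software only'}"
```

Skipping the reload when the right image is already resident matters, because reconfiguration takes time; a smart orchestrator would batch jobs of the same type onto the same server to amortize it.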
Ultimately, this solution gives cloud service providers the uniformity that enables flexibility and scalability, while at the same time offering a platform with heterogeneous hardware acceleration, purpose-built for the different jobs.
To understand the potential of such a solution, with configurable hardware acceleration available in the cloud infrastructure, simply follow the current trends: FPGAs are already being used to accelerate a wide range of jobs, from artificial intelligence and machine learning to ultra-fast packet processing for monitoring and analysis.
And very soon we will also see FPGAs used for vSwitch acceleration. This will speed up virtual environments and at the same time enable real-time monitoring and analysis not only of north-south traffic but also of east-west traffic, which is constantly increasing yet is not monitored to the same level, and is thereby actually becoming a security threat in some situations.
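The north-south/east-west distinction above can be sketched in a few lines. This toy classifier treats the RFC 1918 private ranges as a stand-in for "inside the data center", which is an assumption for illustration; a real deployment would use its own address plan.

```python
# Minimal sketch: classify a flow as east-west (both endpoints internal)
# or north-south (one endpoint external). Using RFC 1918 private ranges
# as a stand-in for "internal" is an illustrative assumption.
import ipaddress

INTERNAL = [ipaddress.ip_network(n) for n in
            ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL)

def flow_direction(src, dst):
    if is_internal(src) and is_internal(dst):
        return "east-west"
    return "north-south"
```

The point is that east-west flows never cross the data center perimeter, which is exactly why perimeter-only monitoring misses them.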
- cloud architecture