
In the Age of FPGA

When the Apollo program started back in 1961, it also started a technology revolution that has brought us everything from calculators to computers and much of the other technology we take for granted today.

A few years later, in 1965 to be exact, Gordon Moore, the co-founder of Intel, made a projection based on his observations: that the number of components per integrated circuit would double every year. Ten years later, in 1975, he revised the forecast to a doubling every two years. The period is often cited as 18 months, however, owing to Intel executive David House's prediction that chip performance would double every 18 months as a result of the combined increase in the number and speed of transistors.
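
To put the two doubling periods in perspective, the difference compounds quickly. A simple exponential (my own back-of-the-envelope framing, not a formula from the article) makes this concrete:

```latex
N(t) = N_0 \cdot 2^{t/T}
\qquad\Longrightarrow\qquad
\frac{N(10\,\text{yr})}{N_0} =
\begin{cases}
2^{10/2} = 32 & T = 2\ \text{years (Moore, 1975)}\\
2^{10/1.5} \approx 101 & T = 1.5\ \text{years (House)}
\end{cases}
```

In other words, the gap between a two-year and an 18-month doubling period is the difference between roughly 32x and 100x growth over a decade.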

Figure 1: Moore's Law

As seen in the figure above, this prediction has held remarkably well to this day, and the principle is known as Moore's Law.

But where does the FPGA come into the picture? The history of the FPGA starts in the 1980s, when the two biggest FPGA vendors of today, Altera (now Intel) and Xilinx, were both founded. FPGA technology has thus been around for more than 30 years and has been used to accelerate the development of chip-based solutions, offering a much faster development cycle than ASICs at a much lower investment and risk thanks to its re-programmability. The FPGA in fact brings the software development methodology to hardware: you program, build, download and test, and then you can do it all over again. This flexibility reduces time to market significantly, while adding the option to do updates in the field afterwards.

What brings it all together?

The amount of data we produce and handle is exploding: it almost doubles every two years, and as you can see from the figure below, this development is predicted to continue over the years to come.

Figure 2: The annual size of the global data sphere

Because of this massive data growth, the increasing need to extract insights from the data with low latency and to use it in "real time", the requirement for compute power has not slowed down but keeps increasing. With CPUs struggling just to keep up with Moore's Law, keeping pace with the data explosion is even more challenging, and service providers have therefore been searching for alternative ways to cope with this challenge.

Graphics Processing Units (GPUs) have been used to offload the CPU for some compute-intensive work. A GPU has a huge number of relatively simple cores that can all operate in parallel, which is very well suited to certain types of compute-intensive work such as machine learning and artificial intelligence.
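
To make that parallelism concrete, here is a minimal CUDA sketch (my own illustrative example, not taken from the article): a vector addition in which each of roughly a million lightweight threads handles a single element, the many-simple-cores execution model that suits this class of workloads.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread processes exactly one element; the parallelism comes from
// launching very many of these lightweight threads at once.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // ~1 million elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; a production pipeline would
    // typically manage host/device copies explicitly.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch roughly one thread per element, 256 threads per block.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);          // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```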

Different types of Network Processing Units (NPUs) have been in the game as well. These are purpose-built processors focused on packet processing in networks. They are very fast at packet- and flow-based processing, but not very general-purpose, and are therefore mainly used in the networking equipment they were built for.

A few years back, the FPGA was identified as another alternative: very flexible, highly re-programmable and power-efficient.

The figure below shows how the CPU, GPU, FPGA and ASIC technologies differ: from the CPU, which is highly flexible and quickly re-programmable but not that efficient in compute power per watt, to the ASIC, which is highly efficient in compute power per watt but not flexible at all, with no re-programmability and year-long development cycles. In this comparison the FPGA comes out a winner: reasonably efficient in compute power per watt, highly re-programmable and still quite flexible.

Figure 3: CPU vs. GPU vs. FPGA vs. ASIC

The FPGA has shown that it can provide significant acceleration and a high level of reconfigurability, all at an efficient compute power per watt.

So where does all this lead and who is the winner?

Over the last few years, hyperscale data centers and service providers have been wrestling with this issue and struggling to find solutions. Back in 2010, Microsoft's Azure team started to look at FPGA technology. In 2012, they set up a prototype with 60 FPGA boards to accelerate tasks such as Bing's search index ranking, and it was so promising that in 2013 they put 1,600 FPGA boards into the production network. With this they realized that the FPGA was genuinely useful for accelerating their tasks at the right point of cost vs. performance. In 2014, they introduced a new FPGA board on which, based on experience with the first boards and some network issues they had encountered, they placed the FPGA inline (between the network and the CPU), enabling them to also accelerate the networking part of their servers. From late 2015 they started putting an FPGA in every server, even before the software was ready, which only happened in 2016.

This case is just one of many showing that FPGAs are gaining real momentum as the preferred acceleration engine among cloud and service providers. Today, Microsoft Azure, AWS and others offer FPGA-as-a-Service, giving customers access to the same type of acceleration that the providers use internally.

Another important milestone in the FPGA story came in 2015, when Intel acquired Altera, one of the two major FPGA vendors in the industry. To me, this shows that Intel realized the FPGA is and will remain a key component in server infrastructure; since then, Intel has been working on enabling FPGA technology in servers, presenting Skylake processors with an FPGA embedded in the CPU package.

So, for me, the time of the FPGA is here. The FPGA is the winning "Reconfigurable Compute Platform", providing flexible acceleration at the right efficiency.

But be prepared for the next step in the evolution; given the speed of technological advancement, you never know what tomorrow brings. One thing is for sure: FPGA technology will be around in the years to come.
