
Defeat Data Growth with Hardware Acceleration (Part 1)

Aug. 18, 2020
Could FPGAs be the answer to the demise of Moore’s Law? Part 1 of this two-part series offers a snapshot of various approaches being used to address today’s and tomorrow’s processing demands.

What you'll learn

  • Why Moore’s Law is running out of steam.
  • How AI accelerators and GPGPUs are filling the performance gap.
  • How network technologies like SmartNICs incorporate FPGAs.

Moore’s Law, as popularly applied to processors, predicted that server CPU processing power would double every 18 months. Since the 1980s, this law has been the driving assumption behind the creation of modern network technologies. It’s been a given, until recently. But growth in server CPU performance isn’t infinite, and it’s now clear that the curve can’t continue. What happens now that Moore’s Law is defunct?

The End of an Era

Actually, Moore’s Law has been in decline for some time, with various techniques prolonging the performance curve. And it isn’t just Moore’s Law: Dennard scaling has broken down as well, and the limits predicted by Amdahl’s Law are now being felt. Processor performance over the last 40 years, and the decline of these laws, are illustrated in the figure.

Gordon Moore actually made his famous prediction back in 1965, about transistor density rather than performance. But with the advent of RISC computing in the 1980s, processor performance did faithfully double roughly every 18 months. As the limits of per-chip clock frequency began to appear with the end of Dennard scaling, which had let frequencies rise without increasing power density, multicore CPUs helped prolong the performance curve. It’s important to note, though, that even at the start of the century we were no longer on the Moore’s Law curve; doubling performance took 3.5 years during this period.

Another prediction concerns the limit on performance improvement achievable with parallel processing: Amdahl’s Law. Parallelizing the execution of a process can provide an initial performance boost, but there’s always a natural ceiling, because some portion of any task can’t be parallelized. We’re now seeing those limits take effect: the benefit of adding CPU cores is shrinking, which stretches the time between performance improvements even further.
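To make that ceiling concrete, Amdahl’s Law says that if a fraction p of a job can be parallelized across n cores, the overall speedup is 1 / ((1 - p) + p/n). Here’s a minimal Python sketch; the 95% parallel fraction is an assumed value, chosen only for illustration:

```python
# Amdahl's Law: speedup from running the parallelizable fraction p
# of a workload on n cores. The serial fraction (1 - p) caps the gain.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, the speedup plateaus:
for n in (2, 8, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.95, n):5.2f}x")
# The limit as n grows is 1 / (1 - p) = 20x, no matter how many cores.
```

With 1,024 cores, the speedup is still only about 19.6x, already pressed up against the 20x ceiling set by the 5% serial fraction.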

The figure above shows that CPU processing power is now predicted to take 20 years to double. Hence, Moore’s Law is dead.
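That 20-year figure follows directly from the annual growth rate. The sketch below assumes the approximate rates from Hennessy and Patterson’s analysis (around 52% per year in the RISC era, 23% in the multicore era, and about 3.5% today) and computes the doubling time each implies:

```python
import math

# Doubling time at a compound annual growth rate r:
#   (1 + r)^t = 2  =>  t = ln(2) / ln(1 + r)
def doubling_time(annual_growth: float) -> float:
    return math.log(2) / math.log(1 + annual_growth)

print(f"52%/yr  (RISC era):  {doubling_time(0.52):5.1f} years")   # ~1.7
print(f"23%/yr  (multicore): {doubling_time(0.23):5.1f} years")   # ~3.3
print(f"3.5%/yr (today):     {doubling_time(0.035):5.1f} years")  # ~20
```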

Putting Assumptions on Tilt

Holding true since the 1980s, spanning the entire working life of many computer engineers, Moore’s Law has been so dependable for so long that many of us have taken it for granted. Most of us can’t envision a world where it no longer holds.

Nostalgia aside, there’s a real concern: the premise of Moore’s Law led to the creation of entire industries that now depend on the expectation of constant improvement in processing performance.

This includes the software industry, which operates on the expectation that processing power will keep pace with data growth and will be able to service the processing needs of future software. Efficiency in software architecture and design has therefore become less of a priority. Indeed, there’s ever-increasing use of software abstraction layers to make programming and scripting more user-friendly, but at the cost of processing power.
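As a crude illustration of what such convenience can cost, compare a hand-written interpreted loop in Python with the same computation delegated to the interpreter’s C-implemented built-in. The exact ratio varies by machine, but the abstraction layer typically costs around an order of magnitude:

```python
import timeit

# Summing 1 million integers: a hand-written interpreted loop versus
# the same work delegated to the C-implemented built-in sum().
data = list(range(1_000_000))

def python_loop():
    total = 0
    for x in data:
        total += x
    return total

loop_time = timeit.timeit(python_loop, number=10)
builtin_time = timeit.timeit(lambda: sum(data), number=10)
print(f"interpreted loop: {loop_time:.3f}s, built-in sum: {builtin_time:.3f}s")
print(f"abstraction overhead factor: ~{loop_time / builtin_time:.0f}x")
```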

Let’s look, for instance, at virtualization. It’s a widely used software abstraction of underlying physical resources, and it carries an additional processing cost. On one hand, virtualization makes more efficient use of hardware resources. On the other, relying on server CPUs as generic processors for both virtualized software execution and input/output data processing places a considerable burden on those CPUs.

To fully grasp the fallout from this reliance, consider the cloud industry and, more recently, the telecom industry. The cloud industry has been founded on the premise that standard commercial-off-the-shelf (COTS) servers are powerful enough to process any type of computing workload. Using virtualization, containerization, and other abstractions, it’s possible to share server resources amongst multiple tenant clients with “as-a-service” models.

Telecoms have watched this framework, and the success of the cloud companies, with admiration, and are replicating the approach in their own networks with initiatives such as software-defined networking (SDN), network function virtualization (NFV), and cloud-native computing. However, the underlying business-model assumption is that as the number of clients and the volume of work grow, simply adding more servers will suffice. But, as can be clearly seen in the figure, server processing performance will grow only about 3% per year over the next 20 years. That’s far below the expectation that the amount of data to be processed will triple over the next five years.
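The size of that mismatch is easy to quantify with the two growth rates just mentioned. Tripling data in five years works out to roughly 25% compound annual growth, against about 3% per year for server performance; this sketch simply compounds both:

```python
# Compound growth of data volume versus per-server CPU performance,
# using the two rates cited above.
data_growth = 3 ** (1 / 5) - 1   # data triples in 5 years -> ~24.6%/yr
cpu_growth = 0.03                # per-server performance, ~3%/yr

for years in (5, 10, 20):
    data = (1 + data_growth) ** years
    cpu = (1 + cpu_growth) ** years
    print(f"after {years:2d} years: data x{data:6.1f}, "
          f"per-server performance x{cpu:4.2f}, gap x{data / cpu:6.1f}")
```

After 20 years at those rates, data has grown roughly 80-fold while a server has gained about 80%, a gap of more than 40x that no amount of rack space closes economically.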

Modern Acceleration

At this point, it makes sense to ask why these slowdown issues haven’t been more obvious. Cloud companies seem to be succeeding without any signs of performance issues. The answer is hardware acceleration.

Perhaps the reason cloud companies are plugging along so successfully is that they were the first to recognize the death of Moore’s Law and to experience the performance issues that come with it. The pragmatism that made cloud companies successful also shaped their reaction: if server CPU performance would no longer increase as expected, they would have to add processing power some other way. In other words, they needed to accelerate the server hardware.

Turing Award winners John Hennessy and David Patterson have addressed the end of Moore’s Law. They point to domain-specific architectures (DSAs): purpose-built processors that accelerate a few application-specific tasks.

In this model, different kinds of processors are tailored to the needs of specific tasks, rather than using general-purpose processors like CPUs for a multitude of tasks. The example they use is the tensor processing unit (TPU) built by Google for deep-neural-network inference. Because inference is central to Google’s business, it makes perfect sense to offload it to a chip built specifically for that task.
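The dispatch pattern behind a DSA is simple: route a task to a purpose-built processor when one is registered for that task class, and fall back to the general-purpose CPU when it isn’t. Here’s a minimal sketch of that idea; the registry and function names are entirely hypothetical, not any real driver API:

```python
# Hypothetical dispatch layer: route each task to a domain-specific
# accelerator if one is registered for it, else run it on the CPU.
from typing import Callable, Dict

# Maps a task class ("inference", "crypto", ...) to an accelerated
# implementation; populated at startup for whatever devices are present.
ACCELERATORS: Dict[str, Callable] = {}

def register_accelerator(task_class: str, impl: Callable) -> None:
    ACCELERATORS[task_class] = impl

def run(task_class: str, cpu_impl: Callable, payload):
    # Offload if a purpose-built processor handles this task class.
    impl = ACCELERATORS.get(task_class, cpu_impl)
    return impl(payload)

# Example: a stand-in inference workload that a TPU-like device would own.
register_accelerator("inference", lambda m: f"DSA result for {m}")
print(run("inference", lambda m: f"CPU result for {m}", "batch-0"))
print(run("checksum", lambda m: f"CPU result for {m}", "batch-0"))
```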

Acceleration technologies have become standard among cloud companies. Graphics processing units (GPUs) have been adapted, as general-purpose GPUs (GPGPUs), to support a wide variety of applications beyond graphics, while network processing units (NPUs) have long been used for networking. Both offer a huge number of small processor cores; workloads are broken down into pieces that run in parallel across those cores.
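The common thread is splitting a workload into many independent pieces and fanning them out. The same shape can be sketched on a CPU with a process pool, with the caveat that a GPU or NPU applies the pattern with thousands of lightweight cores rather than a handful of operating-system processes:

```python
# Data-parallel fan-out: split the workload into chunks, process each
# chunk on its own worker, then gather the partial results.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk: list) -> int:
    # Stand-in per-element work; on a GPU/NPU this would be one kernel
    # launch over many elements at once.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    workload = list(range(1_000_000))
    n_workers = 8
    size = len(workload) // n_workers
    chunks = [workload[i * size:(i + 1) * size] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(process_chunk, chunks)
    print(sum(partials))
```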

In Part 2 of this series, we will look at a new option for accelerating workloads based on a technology that has been around for a while—namely, FPGAs. Could they be the answer to the demise of Moore’s Law?

Daniel Proch is Vice President of Product Management at Napatech.
