
Surviving the Death of Moore’s Law

Recently, I had the honor of hosting a webinar together with IHS Markit analyst Vlad Galabov on the topic of reconfigurable computing.

The webinar raised a number of thought-provoking issues, but one in particular was the demise of Moore’s law. When you stop to think about it, the entire IT industry is driven by the premise that Moore’s law will keep doubling the number of transistors per square inch roughly every 18 months and thereby help us keep up with the relentless growth in data to be processed. What happens when this is no longer true?
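
To put that premise in rough numbers, here is a back-of-the-envelope sketch in Python (the figures are illustrative, not from the webinar). A decade of scaling at the classic cadence buys roughly two orders of magnitude in transistor density; if the cadence stalls, almost nothing:

    # Relative transistor density after `years` of scaling, assuming
    # density doubles every `doubling_period` years (classic cadence: ~1.5).
    def density_growth(years, doubling_period=1.5):
        return 2 ** (years / doubling_period)

    print(round(density_growth(10)))                         # ~102x in a decade
    print(round(density_growth(10, doubling_period=20), 1))  # ~1.4x if scaling stalls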

Just think about telecom carriers. They are currently experiencing the “scissor effect” where costs continue to rise in line with the growth in data, but revenues stay the same or even decline. What happens when they have to invest even more in data processing power just to achieve the same outcome as today?

Just think about cloud service providers, who until now have seemed invincible. They have found a way to create a successful business model even in the face of an exponential data growth curve. Nevertheless, even cloud service providers will face challenges as “simply adding more servers” will no longer be enough to stay one step ahead of the data growth curve. What then?

John Hennessy of Stanford and David Patterson of UC Berkeley have spent much of this year addressing this very issue. In their assessment, the death of Moore’s law (and of Dennard scaling before it) is actually kick-starting a new “golden age” of innovation in computer architecture and software.

“Revolutionary new hardware architectures and new software languages, tailored to dealing with specific kinds of computing problems, are just waiting to be developed,” Patterson said. “There are Turing Awards waiting to be picked up if people would just work on these things.” (David Patterson, interviewed in the IEEE Spectrum article “David Patterson Says It’s Time for New Computer Architectures and Software Languages”, September 2018.)

Both Hennessy and Patterson point to Domain-Specific Architectures (DSAs): purpose-built processors that accelerate a small set of application-specific tasks. The idea is that instead of having general-purpose processors like CPUs handle a multitude of tasks, different kinds of processors are tailored to the needs of specific workloads. The example they use is the Tensor Processing Unit (TPU), a chip Google built specifically for deep neural network inference.
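
As a rough illustration of what “tailored to the task” means, here is a Python sketch of the arithmetic idea behind a TPU-style inference accelerator (this is not Google’s code, just the general technique): trading general-purpose floating point for low-precision integer multiply-accumulate, which a DSA can hardwire into a dense array of units.

    import numpy as np

    def int8_matmul(a_fp32, b_fp32):
        # Quantize each operand to 8-bit integers...
        a_scale = np.abs(a_fp32).max() / 127.0
        b_scale = np.abs(b_fp32).max() / 127.0
        a_q = np.round(a_fp32 / a_scale).astype(np.int8)
        b_q = np.round(b_fp32 / b_scale).astype(np.int8)
        # ...multiply with a wide (int32) accumulator, as a MAC array would...
        acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
        # ...and rescale the result back to floating point.
        return acc * (a_scale * b_scale)

    a = np.random.randn(4, 8).astype(np.float32)
    b = np.random.randn(8, 3).astype(np.float32)
    print(np.max(np.abs(int8_matmul(a, b) - a @ b)))  # small quantization error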

FPGAs offer a reconfigurable route to the same kind of specialization. One of their advantages is that the hardware implementation can be tailored precisely to the needs of the software application, right down to the data path and register lengths. This creates opportunities for performance improvements by matching the processing pipeline and parallelism exactly to the needs of the given application.
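
As a toy model of that point (plain Python standing in for a hardware description language, with made-up stage widths), picture a three-stage pipeline over 12-bit samples in which each intermediate register is exactly as wide as its stage requires:

    # Mask each stage's result to its exact register width, mimicking
    # hardware where no register is wider than the data it carries.
    def to_width(value, bits):
        return value & ((1 << bits) - 1)

    def pipeline(sample):                  # 12-bit input sample
        s1 = to_width(sample * 3, 14)      # stage 1: x3 fits in 14 bits
        s2 = to_width(s1 + 63, 15)         # stage 2: offset, one guard bit
        return to_width(s2 >> 3, 12)       # stage 3: rescale to 12 bits

    print(pipeline(0xABC))  # 2748 -> 1038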

This is one of the big advantages of reconfigurable computing based on FPGA technology. The power and reconfigurability of FPGAs, and the ability to tailor designs to specific needs, will play a key part in sustaining performance improvements across software applications as we move beyond Moore’s law.

If you would like to dive deeper into Hennessy and Patterson’s argument, I can recommend this article and video:

www.eejournal.com/article/fifty-or-sixty-years-of-processor-developmentfor-this

www.youtube.com/watch?v=bfPV4x-HrUI
