
As Moore’s Law ends, hardware acceleration takes center stage: Part 2 (Reality Check)

 

The first half of this article detailed the decline and demise of Moore’s Law and the adjustments to assumptions and technology that had to be made as a result. Cloud providers were the first to notice this decline and began experimenting with alternative methods to boost performance, including GPUs, NPUs and proprietary ASIC chips.

The second half introduces an older technology, the FPGA, as a solution to the need for hardware acceleration. It is highly configurable, it is already widely used, and its proven capability across a variety of acceleration use cases could be the answer to the problems arising from the demise of Moore’s Law.

The Rise of FPGAs

One of the solutions to the end of Moore’s Law is a technology that’s older than the prediction itself. Field Programmable Gate Arrays (FPGAs) have traditionally been used as an intermediary step in the design of Application Specific Integrated Circuit (ASIC) semiconductor chips. The advantage of FPGAs is that they are designed with the same tools and languages used to design semiconductor chips, yet an FPGA can be rewritten or reconfigured with a new design on the fly. The disadvantage is that FPGAs are bigger and more power-hungry than ASICs.

Understandably, as ASICs became more expensive to make, it became increasingly hard to justify producing them. At the same time, FPGAs became more efficient and cost-competitive. It therefore made sense to remain at the FPGA stage and release the product based on an FPGA design. Today, FPGAs are widely used in a broad range of industries, especially in networking and cybersecurity equipment, where they perform specific hardware-accelerated tasks.

Based on FPGA success in other arenas, Microsoft Azure decided to try using FPGA-based SmartNICs in standard servers to offload compute- and data-intensive tasks from the CPU to the FPGA. Today, these FPGA-based SmartNICs are used broadly throughout Microsoft Azure’s data centers, supporting services like Bing and Microsoft 365.
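
To give a sense of what CPU-to-FPGA offload looks like from the software side, below is a minimal host-side sketch using the OpenCL C API, which several FPGA vendors support for attaching accelerators to host applications. It is a conceptual illustration only, not Microsoft’s or any vendor’s actual SmartNIC interface: the precompiled kernel binary ("checksum_kernel.bin") and the kernel name ("checksum") are hypothetical placeholders.

/* Minimal host-side offload sketch (conceptual): the file name
 * "checksum_kernel.bin" and kernel name "checksum" are hypothetical. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    cl_int err;
    cl_platform_id platform;
    cl_device_id device;

    /* FPGA boards typically enumerate as accelerator-class OpenCL devices. */
    clGetPlatformIDs(1, &platform, NULL);
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);
    if (err != CL_SUCCESS) { fprintf(stderr, "no FPGA-class device found\n"); return 1; }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* FPGA toolchains usually ship kernels as precompiled bitstreams that the
     * host loads with clCreateProgramWithBinary. */
    FILE *f = fopen("checksum_kernel.bin", "rb");
    if (!f) { fprintf(stderr, "kernel binary not found\n"); return 1; }
    fseek(f, 0, SEEK_END);
    size_t bin_size = (size_t)ftell(f);
    rewind(f);
    unsigned char *bin = malloc(bin_size);
    fread(bin, 1, bin_size, f);
    fclose(f);

    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &bin_size,
                                                (const unsigned char **)&bin,
                                                NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "checksum", &err);

    /* Copy a batch of packet data to the card, run the kernel, read the result. */
    enum { N = 4096 };
    unsigned char packets[N];
    unsigned int result = 0;
    for (int i = 0; i < N; i++) packets[i] = (unsigned char)i;

    cl_mem in_buf  = clCreateBuffer(ctx, CL_MEM_READ_ONLY, N, NULL, &err);
    cl_mem out_buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(result), NULL, &err);
    clEnqueueWriteBuffer(queue, in_buf, CL_TRUE, 0, N, packets, 0, NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &in_buf);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &out_buf);

    size_t global = 1;  /* single work-item style, common for FPGA kernels */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, out_buf, CL_TRUE, 0, sizeof(result), &result, 0, NULL, NULL);
    clFinish(queue);

    printf("checksum computed on the FPGA: %u\n", result);
    free(bin);
    return 0;
}

The point of the pattern is that the host CPU only stages data and collects results; the per-packet work runs in dedicated FPGA logic on the card.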

With the proof of this use case, FPGAs became a bona fide hardware acceleration option. This led to Intel purchasing Altera, the second largest producer of FPGA chips and development software, for $16.7 billion in 2015. Since then, several cloud companies have added FPGA technology to their service offerings, including AWS, Alibaba, Tencent and Baidu, to name a few.

A Reconfigurable Revolution

It is possible to implement parallel processing on an FPGA, but it is also possible to implement other processing architectures. Indeed, one of the attractions of FPGAs is that they provide a good compromise between versatility, power, efficiency and cost. FPGAs can be used for virtually any processing task.

Another benefit of FPGAs is that details like data path widths and register lengths can be tailored specifically to the needs of the application. Indeed, when designing a solution on an FPGA, it is best to have a specific use case and application in mind in order to truly exploit the power of the FPGA.
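
As a loose illustration of the bit-width point, here is a plain C sketch of a 12-bit accumulator, a purely hypothetical example. On a CPU the accumulator still occupies a full 32- or 64-bit register, but when the same logic is expressed for an FPGA (in an HDL, or in a high-level synthesis flow that offers arbitrary-width integer types), it can be synthesized with exactly 12-bit registers and a 12-bit adder, saving area and power.

#include <stdint.h>
#include <stdio.h>

#define WIDTH_MASK 0x0FFFu  /* emulate a 12-bit datapath by masking */

/* Sum a block of 8-bit samples in an accumulator that wraps at 12 bits. */
static uint16_t accumulate12(const uint8_t *samples, int n) {
    uint16_t acc = 0;
    for (int i = 0; i < n; i++)
        acc = (uint16_t)((acc + samples[i]) & WIDTH_MASK);
    return acc;
}

int main(void) {
    uint8_t samples[8] = {200, 180, 250, 90, 10, 255, 33, 47};
    printf("12-bit accumulator result: %u\n", accumulate12(samples, 8));
    return 0;
}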

Xilinx and Intel alone, the two largest vendors, offer a huge range of FPGA options at different power and performance points – not to mention all the other players in the market. Compare, for example, the smallest FPGAs, which can be used on drones for image processing, with extremely large FPGAs that can be used for machine learning and artificial intelligence. FPGAs generally provide very good performance per watt. For example, FPGA-based SmartNICs can process up to 200 Gbps of data without exceeding the power budget of a server PCIe slot.

Organizations like the reconfigurability of FPGAs because they can tailor them specifically to the application. This makes it possible to create highly efficient solutions that do just what is required, when required. One of the drawbacks of generic multi-processor solutions is the cost overhead that comes with their universal nature. A generic processor can do many things well at the same time but will always struggle to compete with a processor designed to accelerate one specific task.

Because there are so many FPGAs to choose from now, it should not be difficult to find the right model at the right price point to fit your application needs. As with any chip technology, the cost of an FPGA falls dramatically with volume. FPGAs are widely used today as an alternative to ASIC chips, providing a volume base and competitive pricing that is only set to improve over the coming years.

A New Path Forward

We can no longer rely on the doubling of processing power every 18 months. Thus, we need to reconsider what constitutes high performance in computing architectures, programming languages and solution design. This could even lead to, as some experts suggest, the start of a new “golden age” in computer and software architecture innovation. For now, clever minds have found ways to use “old” technologies in new ways to accelerate hardware in the service of better server performance. FPGAs are a reconfigurable, cost-effective alternative worthy of notice and experimentation.

 

Daniel Joseph Barry is VP Strategy and Market Development at Napatech and has over 25 years’ experience in the IT/Telecom industry. He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.
