Getting to Exascale with Reconfigurable Dataflow Computing


Robin Bruce on 7-12-2012


Microprocessors have been hitting the limits of attainable clock frequencies for the past few years, leading to the current multi-core processor solutions from the major microprocessor vendors. Multiple cores on a chip must share the same pins to reach the memory system and the communication channels to other machines. This creates a memory wall, since the number of pins per chip does not scale with the number of cores, and a power wall, since chips must still be cooled within the same physical space.

Many important applications in fields as diverse as earth science and finance exhibit significantly worse-than-linear scaling on multiple cores, a problem that will only worsen as the major microprocessor vendors move beyond quad- and six-core chips to many-core architectures. At the same time, programmers must now grapple with an even more complex programming model, managing parallelism at the core, chip, node, and cluster levels.

Dataflow computing offers a way to avoid the memory and power walls. Although the concept has been around for many decades, until recently dataflow solutions have lagged behind the performance of state-of-the-art supercomputers. At Maxeler we are bridging the gap between research and production-quality systems, delivering complete solutions for high-performance computing applications that utilize reconfigurable dataflow engines. This talk will discuss details of our reconfigurable computing solutions, the dataflow programming model, and some example applications.
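To give a flavor of the dataflow programming model described above, the following is a minimal illustrative sketch (it is not Maxeler's actual API): instead of a processor fetching instructions and pulling data from memory, the computation is expressed as a graph of stages that data streams through, so each operand is consumed as it arrives. Here Python generators stand in for pipeline stages on a dataflow engine; the stage names (`source`, `multiply`, `accumulate`) are invented for this example.

```python
# Illustrative dataflow-style pipeline: each generator is a "stage" that
# consumes values from its upstream neighbor and streams results downstream,
# mirroring how a hardware pipeline processes one element per cycle.

def source(data):
    # Entry point of the graph: streams raw input values.
    for x in data:
        yield x

def multiply(stream, factor):
    # A pure streaming operation: no shared memory, only the incoming value.
    for x in stream:
        yield x * factor

def accumulate(stream):
    # A stage with local state (a running sum), kept next to the computation
    # rather than fetched across the chip's pins.
    total = 0
    for x in stream:
        total += x
        yield total

# Compose the graph: values flow source -> multiply -> accumulate.
pipeline = accumulate(multiply(source([1, 2, 3, 4]), 2))
print(list(pipeline))  # running sums of the doubled inputs: [2, 6, 12, 20]
```

The point of the sketch is that the program describes *structure* (a graph of stages), not a sequence of instructions; on a reconfigurable dataflow engine all stages operate concurrently, with data moving point-to-point between them instead of through a shared memory hierarchy.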
