June 3, 2012

Evolution of High Performance Computing

I was at CondorWeek 2012 as part of our CCL contingent, where I had the pleasure of listening to an array of fascinating talks on Distributed and High Performance Computing.

At the end of one such talk, my mind started plotting the history and evolution of High Performance Computing. Here is what I found:

(Before I proceed, for clarity, here is the definition I use for high performance computing (HPC): HPC refers to the technologies developed for the class of programs that are too large, too resource-intensive, and too long to run on commodity computing systems, such as desktops.)

Age of the Processors:

For the first twenty years of computing, it was all about the processing power available on a single motherboard. Moore's Law both predicted and drove the continued advances in the processing speed of single-core CPUs.

This trend continued until the rate of growth slowed and power dissipation became a bottleneck. That led to the design of architectures involving multiple processors and multiple cores on each processor.

Multi-processor systems introduced and drove the development of parallel computing. This marked the beginning of high performance computing, where large, processing-heavy programs were decomposed into pieces that could run in parallel across the available processors.
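
To make this concrete, here is a toy sketch (in Python, purely illustrative and not tied to any particular system of that era) of decomposing a processing-heavy computation into pieces that run in parallel on multiple cores:

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Stand-in for a processing-heavy computation on one piece of the input.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1000000))
        # Decompose the input into four pieces, one per worker process.
        chunks = [data[i::4] for i in range(4)]
        with Pool(processes=4) as pool:
            partial_results = pool.map(process_chunk, chunks)
        # Combine the partial results from the parallel pieces.
        print(sum(partial_results))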

 

Age of the Clusters:

The need for computational power soon began to surpass what available processing speeds, even at their projected rate of improvement, could deliver.

The introduction and rapid adoption of computer networking provided a much-needed breakthrough for HPC. Higher computational capacity and power were then achieved by connecting multiple dedicated computing systems. These connected systems were first called batch processing systems and served as the predecessors to cluster computing.

As networking technologies advanced, cluster computing, built from hundreds of sophisticated, dedicated computing systems, became the prevalent way to run large, long-running programs.

Age of the Grids & Clouds:

Over the last two decades, networking speeds and bandwidth have continued to outpace advances in processing and disk storage speeds. This led to wide area networks connecting thousands of computing systems spread across several geographic regions.

These wide area networks consisted of either (a) dedicated computing systems housed in multiple data centers or (b) multi-purpose shared systems whose idle cycles were harvested and made available for consumption (an idea championed by the Condor project at the University of Wisconsin-Madison).

Soon, efforts began tapping into the vast aggregated processing and storage capacity available in these networks, which came to be treated as platforms for running HPC applications. This trend led to the emergence of clouds and grids whose resources were made available to stakeholders and customers for consumption.

Age of the Software Frameworks:


With the rate of advances in hardware slowing, software frameworks are becoming the agents of the next wave of growth in HPC.

This is because software frameworks are best positioned to bring together and manage heterogeneous resources from a variety of environments, such as clouds, grids, and clusters, and to satisfy the increasing computational and storage needs of users.

These frameworks are also better equipped to provide fault tolerance and load balancing, and to handle the complexities of managing several thousand heterogeneous resources.

I am currently involved in the development of one such software framework, Work Queue, which is available as a C, Python, and Perl library.
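
To give a flavor of how such a framework is used, here is a rough sketch of a Work Queue master program written against the Python bindings. It is only a sketch: the ./simulate program and its input/output file names are invented for illustration, and the exact calls may differ across cctools versions.

    from work_queue import WorkQueue, Task, WORK_QUEUE_DEFAULT_PORT

    # The master listens for workers and hands out tasks as workers connect.
    q = WorkQueue(port=WORK_QUEUE_DEFAULT_PORT)
    print("Listening on port %d" % q.port)

    for i in range(100):
        # Each task is an ordinary command; its input and output files are
        # declared so the framework can ship them to whichever worker runs it.
        t = Task("./simulate input.%d > output.%d" % (i, i))
        t.specify_input_file("simulate")
        t.specify_input_file("input.%d" % i)
        t.specify_output_file("output.%d" % i)
        q.submit(t)

    # Wait for results; workers can come from clusters, grids, or clouds.
    while not q.empty():
        t = q.wait(5)
        if t:
            print("task %d returned %d" % (t.id, t.return_status))

Workers are started separately (for instance, by submitting work_queue_worker jobs to a Condor pool or to cloud instances) and connect back to the master, which is what lets the same program harness resources from very different environments.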

Some more examples of software frameworks used for HPC are Hadoop, Pegasus, and Taverna.

This age represents the current trends and research directions in HPC.

Thoughts on the Future:


While software frameworks seem to be the path forward in the evolution of HPC, hardware advances cannot be ignored. For instance, GPUs are slowly gaining traction as relatively inexpensive but effective platforms for HPC, as described in this paper.

All said, the future for HPC looks bright as it continues to evolve toward being more economical, more powerful, and easier to deploy and run.
