People sometimes ask me what high-performance computing is and how it differs from other kinds of computing such as desktop, handheld, or cloud. In this series, called the HPC Rulebook, I will attempt to summarize a few unique and possibly defining characteristics of high-performance computing.
The first one is called Scaled Speedup.
High-performance computing relies on a scalable architecture to speed up computation. The problem is broken down into many digestible chunks, dispatched to dozens or thousands of workhorses (compute nodes) to compute, and the outcomes are aggregated into a final result. Speedup measures this scalability by comparing the run time on many nodes against the run time on a single node. In a well-architected system (balanced CPU, network I/O, and memory), a well-written parallel application can scale to a large number of nodes.
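To make the pattern concrete, here is a minimal sketch in Python, using multiprocessing on a single machine as a stand-in for a cluster. The workload (summing squares) and the chunk layout are invented for illustration, not taken from any particular HPC application.

```python
# A minimal sketch of the split / dispatch / aggregate pattern, using Python's
# multiprocessing on one machine as a stand-in for a cluster. The work item
# (summing squares) and the chunk sizes are made up for illustration only.
import time
from multiprocessing import Pool

def sum_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def run(n_workers, total=20_000_000):
    # Break the problem into one digestible chunk per worker.
    step = total // n_workers
    chunks = [(i * step, (i + 1) * step) for i in range(n_workers)]
    chunks[-1] = (chunks[-1][0], total)          # last chunk picks up the remainder
    start = time.perf_counter()
    if n_workers == 1:
        partials = [sum_squares(chunks[0])]
    else:
        with Pool(n_workers) as pool:            # dispatch chunks to the workers
            partials = pool.map(sum_squares, chunks)
    result = sum(partials)                       # aggregate the partial results
    return result, time.perf_counter() - start

if __name__ == "__main__":
    _, t1 = run(1)
    _, t4 = run(4)
    print(f"speedup S(4) = T(1)/T(4) = {t1 / t4:.2f}")
```

The number printed at the end is just the speedup in the sense above: S(N) = T(1)/T(N), the single-worker run time divided by the N-worker run time.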
In real-life scenarios the computing problem is often fixed in size, so the speedup eventually hits a limit: there is only so much data to compute, and the overhead of dividing up the task and coordinating the pieces eventually outweighs the benefit of the extra machines being added to the system.
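This fixed-size limit is the one usually attributed to Amdahl's Law (the post does not name it, so take this as my gloss): if a fraction s of the job is inherently serial (dividing up the task, communication, aggregating results), then no number of nodes N can push the speedup past 1/s:

```latex
S(N) = \frac{1}{s + \frac{1 - s}{N}} \leq \frac{1}{s}
```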
At the frontier of science, however, the research problem need not be fixed, so a new way of thinking is required to fully explore and extend the power of HPC. Hence the notion of scaled speedup, in which the problem size scales along with the computing power, so that a much larger problem can be completed in a fixed amount of time. Scaled speedup is also known as Gustafson's Law.
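In the standard statement of Gustafson's Law (the symbols are the usual ones, not notation from this post: N is the node count and s is the fraction of the parallel run time spent in serial work), the scaled speedup keeps the run time fixed and grows the problem with the machine, so it rises roughly linearly with N instead of flattening out:

```latex
S(N) = s + (1 - s)\,N = N - s\,(N - 1)
```

Compare this with the fixed-size bound above: the serial fraction still costs something, but it no longer caps the speedup.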