The world’s tallest mountain (Everest), the biggest desert (Sahara), the richest person (Bill Gates), the fastest airplane (SR-71) and even the world’s champion hotdog eater (59 hotdogs) – we are fascinated by records and rankings. The good folks who operate the Guinness World Records do a brisk business chronicling our interest in the sometimes unusual aspects of human endeavor and their rankings.
At this juncture, you might be questioning the relationship between hotdog eating contests and the nominal topic of these essays: high-performance computing (HPC). Or, you may simply be, as I am, dumbfounded and awed that any human could eat 59 hotdogs. This is a prodigious feat of perhaps questionable value, but I digress.
The latest, semi-annual Top 500 ranking of the world’s fastest supercomputers will be revealed at the International Supercomputing Conference (ISC) in June. Last year, two systems broke the petascale barrier: a hybrid cluster accelerated by the Cell game-console processor (LANL’s Roadrunner) and a cluster of commodity microprocessors (ORNL’s Jaguar). As always, we can expect the latest announcement to garner interest among the technological community, receive coverage in the popular press, and secure bragging rights for the organizations, vendors and countries involved. I eagerly await the results myself.
However, there are many figures of merit for high-performance computing systems, including suitability for target workloads, total cost of ownership (TCO), energy consumption and efficiency, reliability, productivity and ease of use, the richness of available software tools, extensibility and replication across markets, funding models and market viability. Many of these are difficult to quantify, and any multivariate ranking based on these may well differ from that derived from performance on a single technical computing benchmark.
Though rankings (total orders) are intuitive and easily explained – a valuable attribute in today’s attention-constrained society – they rarely capture the true complexity of multidimensional comparison. Mathematically, this is simply an argument for order theory and partially ordered sets (posets), recognizing that in a multivariate (multidimensional) comparison, some elements may well be unordered or equivalent.
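The poset idea can be made concrete with a small sketch. Assume each system is described by a vector of metrics oriented so that higher is better; one system dominates another only if it is at least as good on every metric and strictly better on at least one. The system names and numbers below are purely illustrative, not real measurements.

```python
def dominates(a, b):
    """True if a is at least as good as b on every metric and strictly
    better on at least one (all metrics oriented higher-is-better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def compare(a, b):
    """Partial-order comparison: some pairs are simply incomparable."""
    if dominates(a, b):
        return "a > b"
    if dominates(b, a):
        return "a < b"
    if a == b:
        return "a = b"
    return "incomparable"

# Hypothetical metric vectors: (performance, energy efficiency, affordability)
fast_but_costly   = (100, 2, 1)
frugal_efficient  = (10, 8, 6)
strictly_worse    = (5, 1, 1)

print(compare(fast_but_costly, frugal_efficient))  # incomparable
print(compare(fast_but_costly, strictly_worse))    # a > b
```

A total ranking would force an ordering on the first pair; the partial order honestly reports that neither system is better across the board.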
Is an inexpensive, energy efficient computer system superior or inferior to an expensive, high-performance system? The answer, of course, depends on the intended use. A smart phone and a supercomputer both have value, but one is a poor substitute for the other.
As valuable as ranking the world’s fastest machines is, I believe we would benefit even more from publishing a vector of metrics regarding each HPC system, and focusing less on the extremal points of the poset. Many years ago, the Perfect Club (PERFormance Evaluation by Cost-effective Transformations) benchmarks were created to facilitate one variant of such a multivariate analysis. The SPEC benchmarks are another example from the commercial space.
These multivariate analyses are not easy. They require much more work than univariate rankings, some of the data are not easily obtained, and some information is viewed as competitively sensitive. That does not mean we should not try again to define more diverse evaluation criteria.
Despite fascination with ultrafast computing systems, the mind cannot help but return to hotdog eating. Incredibly, not one, but two individuals ate 59 hotdogs during the allotted time at this year’s contest. This necessitated a five-hotdog “eat-off” to determine a winner. As a computer scientist, I recognize sixty-four (59 plus 5) as an interesting power of two. Hotdogs and supercomputers – both are driven by human competitiveness.