Virginia Tech researchers recently found that the speed at which code is executed on multicore processors can vary by as much as 10 percent depending on how the code is distributed on identical cores. "The solution to this is to dynamically map processes to the right cores," says Virginia Tech graduate student Thomas Scogland, who summarized his work at the recent SC08 supercomputing conference.
Scogland, along with fellow researchers and colleagues at the Department of Energy's Argonne National Laboratory, developed software that balances performance more evenly across every core. Scogland says several factors contribute to uneven chip performance. One is how the CPU hardware manages interrupts: if all interrupts are directed to a single core, they slow other applications running on that core, but if interrupts are distributed across multiple cores, there is no guarantee that the core handling an interrupt is the same one running the program the interrupt is intended for, requiring additional communication time between cores.

Memory is another factor. On some processors, each core gets its own cache. That approach speeds data fetching when the data is in the core's own cache, but it increases retrieval time when the data sits in another core's cache. Multiple cores can also create situations in which data is blocked from one core while being used on another. Programming also affects performance.

The researchers developed the Systems Mapping Manager (SyMMer), a prototype performance-management library that uses heuristic tools to identify distribution problems. SyMMer improved the run times of scientific applications by 10 percent to 15 percent.
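The remapping idea behind SyMMer — moving a process onto a core where its data and interrupts live — rests on the operating system's CPU-affinity interface. As a minimal sketch of that underlying mechanism (this is a standard Linux facility, not SyMMer's actual API), a process can pin itself to a chosen core and confirm the new affinity mask:

```python
import os

def pin_to_core(core_id):
    """Pin the calling process to a single core (Linux-only).

    os.sched_setaffinity restricts which cores the scheduler may
    run this process on; pid 0 means the calling process.
    """
    os.sched_setaffinity(0, {core_id})
    # Return the affinity mask the kernel now reports for us.
    return os.sched_getaffinity(0)

# Pin this process to core 0; subsequent work stays on that core,
# keeping its cached data local instead of in another core's cache.
mask = pin_to_core(0)
```

A dynamic mapper like the one the article describes would call such an interface repeatedly at run time, choosing the target core from performance measurements rather than hard-coding it.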
From Government Computer News