
Petascale Era Will Force Software Rethink
by Jim Sexton, Lead for IBM Blue Gene Applications

As we enter the petascale era, there will be a number of challenges to
overcome before applications can truly take advantage of the enormous
computational power that is becoming available. One of the most pressing
of these challenges will be to design software programs that map well
to petascale architectures to allow the community to solve previously
unattainable scientific and business problems.

For the last 20 years, performance improvements have been delivered by
increasing processor frequencies. In the petascale era, processor
frequencies will no longer increase, due to fundamental atomic limits
on our ability to shrink features in silicon. Moore's Law will
continue, but performance increases will now come through parallelism,
and petascale systems will deliver performance by deploying hundreds
of thousands of individual processor cores. Multiple cores will be
assembled into individual chips, and tens of thousands of chips will
then be assembled to deliver the petascale performance that Moore's
Law predicts will arrive in the next few years.

Programming approaches for multicore chips and parallel multicore
systems are well understood. The programming challenge that arises,
however, is very complex. When developing code for a single processor,
a programmer is able to focus on the algorithms and can, to a first
approximation, ignore system architecture issues during program
design. Compilers for single-processor programming are well developed
and mature, and do a very good job of mapping a program to the
system architecture on which it is designed to run.

When programming for a parallel multicore processor architecture, a
programmer is forced to manage algorithmic and system architectures
together. The parallel system architecture requires that a programmer
decide how to distribute data and work among the parallel processing
elements at the same time as the algorithm is being designed. The
parallel programmer needs to make many critical decisions that have a
huge impact on program performance and capability all through the
design process. These decisions include how many chips and cores will
be required, how data will be distributed and moved across these
elements, and how work will be distributed. On parallel systems,
programming has changed from being a routine technical effort to being
a creative art form.
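
As a concrete illustration of the kinds of decisions described above (the example is mine, not the author's), the short C sketch below uses MPI to sum a series across many processes. The global problem size, the block decomposition, and the final reduction are all assumptions chosen for illustration; the point is that the programmer, not the compiler, decides how the data and the work are split across the processing elements.

    /* Hypothetical sketch: block-distributed partial sums combined with an
     * MPI reduction. Not taken from the article; the problem size and the
     * decomposition are illustrative choices. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const long N = 1000000;      /* global problem size (illustrative) */
        long chunk = N / nprocs;     /* data distribution: the programmer's choice */
        long lo = rank * chunk;
        long hi = (rank == nprocs - 1) ? N : lo + chunk;  /* last rank takes the remainder */

        /* Work distribution: each rank computes only over its own block. */
        double local_sum = 0.0;
        for (long i = lo; i < hi; i++)
            local_sum += 1.0 / (double)(i + 1);

        /* Data movement across processing elements is explicit as well. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("partial harmonic sum over %ld terms = %f\n", N, global_sum);

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper such as mpicc and launched across many nodes, the same pattern scales from a few cores to very large machines; what grows with the machine is the care needed to match the decomposition to the system's layout.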

The opportunity provided by leveraging these big parallel machines is
enormous. It will be possible to answer some really hard questions
about complex systems in all spheres of human activity. Examples
include a better understanding of the processes that drive global
warming, insight into how the worldwide economy functions, and a full
understanding of the chemical and biological processes that occur
within the human body. Right now, we have the computing power to
address these questions. We just don't have the programs, because they
are so complex and so difficult to develop, test, and validate.

On average, it takes two to four years to develop a program to
simulate just one human protein. The challenge the scientific
community now faces is finding the people who understand how to write
complex programs for petascale architectures. There is an obvious
Catch-22 involved: until more of these programs start running on
parallel machines and show results, it will be hard to justify the
investment needed to build a whole infrastructure from scratch. This
may include PhD programs at universities, recruitment of specialists,
and the build-up of resources.

Although a major shift to parallelism is beginning, there is a high
cost of entry. Right now, parallelism is in the early-adopter phase.
Before it shifts to the mainstream/commercial phase, the community
will need to see a clear cost/benefit case before it brings everyone
along. To advance this effort in the U.S., the Scientific Discovery
through Advanced Computing (SciDAC) program is establishing nine
Centers for Enabling Technologies to focus on specific challenges in
petascale computing. These multidisciplinary teams are led by national
laboratories and universities and focus on meeting the specific needs
of SciDAC application researchers as they move toward petascale
computing.
These centers will specialize in applied mathematics, computer
science, distributed computing and visualization, and will be closely
tied to specific science application teams.

In addition to scientific questions, industry applications could help
drive the development of the code and lead to mainstream adoption. One
example is the energy and oil/petroleum industry, where petascale
computing may improve petroleum reserve management, nuclear reactor
design, and nuclear fuel reprocessing. Another is weather: as the need
grows for more precise, short-term weather prediction, microclimate
modeling comes into play.

In the past, the computer science community tended to focus on the
hardware and system software, but left the development of applications
to others. The trend now is that programmers need to develop
applications so that they are tightly coupled to the systems they will
run on. One needs to design the program for the system. That has been
anathema for many years.

-----

About the Author

Jim Sexton is the lead for Blue Gene Applications at IBM's T. J.
Watson Research Center in Yorktown Heights, NY. He received his Ph.D.
in theoretical physics from Columbia University. He was a Research
Fellow at Fermi National Accelerator Laboratory, then at the Institute
for Advanced Study in Princeton. Before joining the staff at the T. J.
Watson Research Center, he was a professor at Trinity College Dublin.
His areas of interest include high performance computing, systems
architectures, HPC systems software, theoretical physics, and high
energy theoretical physics.
