Monday, August 1, 2011

OpenMP – Overview and Update – what about MPI?

The OpenMP standard specification started in the spring of 1997, led by the OpenMP Architecture Review Board (ARB), whose members included Compaq/Digital, Hewlett-Packard, Intel, IBM, Kuck & Associates, SGI, Sun, and the U.S. Department of Energy ASCI program. OpenMP is an API (Application Programming Interface) that provides a programming model for developers of shared-memory parallel applications. The API supports C/C++ and Fortran on multiple architectures and includes various constructs and directives for specifying parallel regions, work sharing, synchronization, and the data environment.
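To make those constructs concrete, here is a minimal sketch (not from the original post) of an OpenMP work-sharing loop in C: the parallel-for directive splits the iterations among threads, and the clauses spell out the data environment. The array sizes and variable names are illustrative; compile with an OpenMP flag such as -fopenmp (GCC).

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];   /* shared arrays */
    double sum = 0.0;

    /* Work sharing: loop iterations are divided among the threads.
       default(none) forces every variable's sharing to be declared. */
    #pragma omp parallel for default(none) shared(a, b, c) reduction(+:sum)
    for (int i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
        sum += c[i];
    }

    printf("max threads: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}
```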

OpenMP is essentially an implementation of multithreading, built on the concept of shared memory within a single machine (a server, for example). In a large cluster environment, one option is to use OpenMP within the node for intra-node communications (communication between processes on the same server) and MPI for inter-node communications (communication between processes on remote servers); a sketch of this hybrid approach follows below. You can also choose the approach of MPI everywhere, using MPI for both intra-node and inter-node communications. I have yet to see a clear verdict on which is better for intra-node communications, OpenMP or MPI. Sometimes OpenMP is better, sometimes MPI is ahead, and there are various publications that have explored the performance differences for specific cases. I typically use MPI everywhere, but some of my colleagues prefer to use OpenMP within the node.
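Here is a hedged sketch of that hybrid layout, assuming one MPI rank per node with OpenMP threads filling the cores inside it. The key point is requesting MPI_THREAD_FUNNELED so that only the master thread makes MPI calls; the printed output is illustrative only. Compile with something like "mpicc -fopenmp hybrid.c".

```c
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int provided, rank, nranks;

    /* Ask MPI for a threading level compatible with OpenMP regions. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Intra-node parallelism: OpenMP threads within each MPI rank. */
    #pragma omp parallel
    {
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    /* Inter-node communication stays in MPI, done by the master
       thread outside the parallel region. */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
```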

Last month the OpenMP Architecture Review Board announced the release of version 3.1 of the OpenMP specification. The new release includes several new features: predefined min and max reduction operators for C and C++, extensions to the atomic construct, extensions for binding threads to processors, and optimizations to the OpenMP tasking model. The complete 3.1 specification can be downloaded from OpenMP.org.
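As a small illustration of one of those additions, the sketch below uses the new predefined min and max reduction operators for C (the array contents and names are made up for the example). It requires a compiler that supports OpenMP 3.1.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000

int main(void)
{
    double x[N];
    for (int i = 0; i < N; i++)
        x[i] = (double)((i * 37) % 101);   /* arbitrary test values */

    double lo = x[0], hi = x[0];

    /* New in OpenMP 3.1: min and max as predefined reduction
       operators for C and C++. */
    #pragma omp parallel for reduction(min:lo) reduction(max:hi)
    for (int i = 0; i < N; i++) {
        if (x[i] < lo) lo = x[i];
        if (x[i] > hi) hi = x[i];
    }

    printf("min = %f, max = %f\n", lo, hi);
    return 0;
}
```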
