The OpenMP standard specification effort started in the spring of 1997, led by the OpenMP Architecture Review Board (ARB), which included Compaq/Digital, Hewlett-Packard, Intel, IBM, Kuck & Associates, SGI, Sun, and the U.S. Department of Energy ASCI program. OpenMP is an API (Application Program Interface) that provides a programming model for developers of shared memory parallel applications. The API supports C/C++ and Fortran on multiple architectures and includes various constructs and directives for specifying parallel regions, work sharing, synchronization and the data environment.
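To make the directive style concrete, here is a minimal C sketch (file and variable names are my own, not from any particular tutorial) showing a parallel region and a work-sharing loop:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    const int N = 8;
    double a[8];

    /* Work-sharing construct: loop iterations are divided
       among the threads in the team. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = 2.0 * i;
    }

    /* Plain parallel region: every thread executes the block. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    printf("a[%d] = %.1f\n", N - 1, a[N - 1]);
    return 0;
}

With gcc this would typically be built as `gcc -fopenmp hello_omp.c -o hello_omp`, and the thread count controlled with the OMP_NUM_THREADS environment variable.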
OpenMP is basically an implementation of multithreading, using the concept of shared memory on a single machine (a server, for example). In a large cluster environment, one option is to use OpenMP within the node for intra-node communications (communications between processes on the same server) and MPI for inter-node communications (communications between processes on remote servers); a sketch of this hybrid layout follows below. You can also choose the approach of MPI everywhere – using MPI for both intra- and inter-node communications. I have yet to see a clear direction on which is better for intra-node communications – OpenMP or MPI. Sometimes OpenMP is better, sometimes MPI comes out ahead. There are various publications out there that have explored the performance differences for specific cases. I typically use MPI everywhere, but some of my colleagues prefer to use OpenMP in the box.
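As a rough illustration of the hybrid approach (my own minimal sketch, not a benchmarked recipe), each node runs one MPI rank and OpenMP threads handle the parallelism inside that rank:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int provided, rank, nranks;

    /* Request a thread level that is safe to mix with OpenMP;
       MPI_THREAD_FUNNELED means only the master thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Intra-node parallelism: OpenMP threads within each MPI rank. */
    #pragma omp parallel
    {
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks,
               omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

A typical build and launch would look like `mpicc -fopenmp hybrid.c -o hybrid` followed by something like `OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid`, with one rank per node and the thread count matched to the cores in the node.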