The Open MPI Project is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI members include: ZIH, Cisco, University of Houston, HLRS, IBM, University of Tennessee, INRIA, Los Alamos National Laboratory, Mellanox Technologies, Oak Ridge National Laboratory, Indiana University, Oracle, and Sandia National Laboratories.
Open MPI features implemented or in short-term development include: full MPI-2 standards conformance, thread safety and concurrency, dynamic process spawning, network and process fault tolerance, support for network heterogeneity, a single library that supports all networks, and run-time instrumentation. Open MPI is distributed under an open source license based on the BSD license. The latest release of Open MPI is version 1.5.3. The 1.5.x series is the "feature development" series for Open MPI, while the 1.4.x series is the more mature one. You can download Open MPI from http://www.open-mpi.org/.
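As a concrete taste of one of these features, here is a minimal sketch of my own (not taken from the Open MPI documentation) that requests full thread support and prints the level the library actually provides; it builds unchanged with the mpicc wrapper of either implementation.

```c
/* Minimal sketch: request MPI_THREAD_MULTIPLE and report what the library provides.
 * Build with the MPI wrapper compiler, e.g. "mpicc thread_level.c -o thread_level",
 * and launch with e.g. "mpirun -np 2 ./thread_level". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Ask for full multi-threading; the library reports the level it actually supports. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        printf("Requested MPI_THREAD_MULTIPLE, provided level = %d\n", provided);
    }

    MPI_Finalize();
    return 0;
}
```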
The Ohio State University (OSU) MPI project is led by the Network-Based Computing Laboratory (NBCL) at the Ohio State University. The MVAPICH/MVAPICH2 (OSU MPI) software delivers an MPI solution for InfiniBand, 10GigE/iWARP, and RoCE networking technologies. According to OSU, the software is used by more than 1,630 organizations in 63 countries worldwide.
The OSU MPI project is currently supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, ODOD, Cisco Systems, Intel, Mellanox Technologies, QLogic, and Oracle. OSU MPI is distributed as open source under the BSD license.
The latest release of OSU MPI-2 is MVAPICH2 1.7RC1 (which includes MPICH2 1.4). MVAPICH2 1.7RC1 provides many features, including a Nemesis-based interface, a shared memory interface, flexible rail binding with processes for multi-rail configurations, message coalescing, support for large data transfers (greater than 2 GB), dynamic process migration, fast process-level fault tolerance with checkpoint-restart, a fast job-pause-migration-resume framework for proactive fault tolerance, network-level fault tolerance with Automatic Path Migration (APM), and many more. You can find more details at http://mvapich.cse.ohio-state.edu/.
When it comes to deciding which MPI is better, there is no clear answer. I have tested multiple applications and came to the conclusion that there is no clear winner: in some cases Open MPI was on top, and in other cases OSU MPI provided higher performance. While the two MPIs provide similar low-level performance (latency, bandwidth), there are differences in collective communication performance and in the feature set. As different applications use collectives differently, OSU MPI is the better option for some cases and Open MPI for others. My recommendation is to have both installed and to use the appropriate one for each application. Both are free...
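Since the collectives are where the two differ most, the quickest way to decide for your own workload is a tiny micro-benchmark built once with each implementation's mpicc. The sketch below is just an illustration of mine; the message size and iteration count are arbitrary and should be adjusted to match your application.

```c
/* Rough micro-benchmark sketch: time MPI_Allreduce for a fixed message size
 * and report the average per-call time. Build the same source with each MPI's
 * mpicc and compare the numbers for your typical message sizes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT 4096   /* doubles per reduction (arbitrary choice) */
#define ITERS 1000   /* timing iterations (arbitrary choice) */

int main(int argc, char **argv)
{
    int rank, i;
    double *sendbuf, *recvbuf, start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    sendbuf = malloc(COUNT * sizeof(double));
    recvbuf = malloc(COUNT * sizeof(double));
    for (i = 0; i < COUNT; i++) sendbuf[i] = (double)i;

    /* Warm up once, then synchronize before timing. */
    MPI_Allreduce(sendbuf, recvbuf, COUNT, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);

    start = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        MPI_Allreduce(sendbuf, recvbuf, COUNT, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    }
    elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        printf("MPI_Allreduce, %d doubles: %.2f us per call\n",
               COUNT, 1e6 * elapsed / ITERS);
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```

Launch it with the same node count and process placement under both stacks (mpirun with Open MPI, mpirun_rsh or mpiexec with MVAPICH2) and compare the reported per-call times.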
I completely agree (even though I'm one of the Open MPI core developers!).
Having multiple MPI implementations is both painful and good. It's painful for users because they need to compile / link / try multiple MPI implementations. This can be annoying for users who just want to get their work done. I've seen HPC clusters with 20+ MPI implementations installed. Yikes!
But it's good because MPI implementations are large, complex beasts. One implementation may have chosen to optimize something that your application needs (that another implementation did not optimize). Or there may be a bug in one implementation that prevents your application from running. With multiple implementations available, you can more-or-less easily switch to another implementation and hopefully be up and running.
Multiple competing projects also mean that we MPI implementors compete with each other. We compete on many levels: for funding, for users, for features, for performance, and so on. It keeps pushing the MPI implementation quality bar higher and higher, which is definitely good for users.
Just my $0.02. :-)