The Open MPI Project is an open source MPI-2 implementation developed and maintained by a consortium of academic, research, and industry partners. Open MPI members include ZIH, Cisco, the University of Houston, HLRS, IBM, the University of Tennessee, INRIA, Los Alamos National Laboratory, Mellanox Technologies, Oak Ridge National Laboratory, Indiana University, Oracle, and Sandia National Laboratories.
Open MPI features that are implemented or in short-term development include full MPI-2 standards conformance, thread safety and concurrency, dynamic process spawning, network and process fault tolerance, support for network heterogeneity, a single library that supports all networks, and run-time instrumentation. Open MPI is distributed under an open source license based on the BSD license. The latest release of Open MPI is version 1.5.3; the 1.5.x series is the "feature development" series, while the 1.4.x series is the more mature one. You can download Open MPI from http://www.open-mpi.org/.
The Ohio State University (OSU) MPI project is led by the Network-Based Computing Laboratory (NBCL) at the Ohio State University. The MVAPICH/MVAPICH2 (OSU MPI) software delivers an MPI solution for InfiniBand, 10GigE/iWARP, and RoCE networking technologies. According to OSU, the software is currently used by more than 1,630 organizations in 63 countries.
The OSU MPI project is currently supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, ODOD, Cisco Systems, Intel, Mellanox Technologies, QLogic, and Oracle. OSU MPI is distributed as open source under the BSD license.
The latest release of OSU MPI-2 is MVAPICH2 1.7RC1 (which includes MPICH2 1.4). MVAPICH2 1.7RC1 provides many features, including a Nemesis-based interface, a shared memory interface, flexible binding of rails to processes in multi-rail configurations, message coalescing, support for large data transfers (greater than 2 GB), dynamic process migration, fast process-level fault tolerance with checkpoint-restart, a fast job-pause-migration-resume framework for proactive fault tolerance, network-level fault tolerance with Automatic Path Migration (APM), and more. You can find more details at http://mvapich.cse.ohio-state.edu/.
When it comes to deciding which MPI is better, there is no clear answer. I have tested multiple applications and concluded that there is no clear winner: in some cases Open MPI came out on top, and in others OSU MPI provided higher performance. While the two implementations deliver similar low-level performance (latency, bandwidth), they differ in collective communication performance and in feature set. Since different applications use collectives differently, OSU MPI is the better option for some workloads and Open MPI for others. My recommendation is to install both and use the appropriate one for each application. Both are free...
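Keeping both stacks side by side is straightforward in practice: install each MPI into its own prefix, then point PATH and LD_LIBRARY_PATH at the one you want before compiling and launching a given application. A minimal sketch follows; the install prefixes and the application name are my own hypothetical examples, not part of either project:

```shell
# Hypothetical install prefixes for the two MPI stacks
OMPI=/opt/openmpi-1.5.3
MV2=/opt/mvapich2-1.7

# Build and launch an application against Open MPI
export PATH=$OMPI/bin:$PATH
export LD_LIBRARY_PATH=$OMPI/lib:$LD_LIBRARY_PATH
mpicc -O2 -o myapp.ompi myapp.c      # myapp.c is a placeholder name
mpirun -np 16 ./myapp.ompi

# ...or against MVAPICH2, by switching the environment instead
export PATH=$MV2/bin:$PATH
export LD_LIBRARY_PATH=$MV2/lib:$LD_LIBRARY_PATH
mpicc -O2 -o myapp.mv2 myapp.c
mpirun -np 16 ./myapp.mv2
```

Building a separate binary per MPI matters because the two libraries are not ABI-compatible: an executable linked against one cannot be launched with the other's mpirun. Sites with environment modules can wrap the same switch in `module load openmpi` / `module load mvapich2`.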