Sunday, February 19, 2012

My Pick for the Best High-Performance Computing Conferences March-May 2012

There are many HPC-related conferences and workshops one can choose to attend, so I prefer the ones that cover multiple topics and combine real hands-on sessions with technical talks. Here are my picks for the best HPC conferences over the coming months:

March: The HPC Advisory Council Switzerland Conference, March 13-15 (http://www.hpcadvisorycouncil.com/events/2012/Switzerland-Workshop/index.php). The location is the beautiful city of Lugano, but more important, of course, is the agenda… the conference will cover all the major HPC developments and will include hands-on sessions. Definitely worth the trip!

April: Two options to pick from - the 2012 High Performance Computing Linux for Wall Street in New York or the IDC HPC User Forum in Richmond, VA. Both lean more toward sessions and opinions than toward hands-on technical content, but both are decent options if you have the time…

May: Two options again – the NVIDIA GPU Technology Conference in San Jose, CA, or the IEEE International Parallel & Distributed Processing Symposium (IPDPS) in Shanghai, China. IPDPS is of course more technical and covers more subjects – so it is the better option in my view.

June: And one month beyond the range above, the International Supercomputing Conference (ISC'12) in Hamburg, Germany, no doubt…

Wednesday, February 15, 2012

New MPIs released – Open MPI 1.4.5 and MVAPICH2 1.8a2

In very close timing, both the Open MPI group and the MVAPICH team released new versions of their open-source MPI implementations. The Open MPI Team announced the release of Open MPI version 1.4.5, which is mainly a bug-fix release over v1.4.4. Version 1.4.5 can be downloaded from the main Open MPI web site; it contains improved management of the registration cache, a fix for SLURM cpus-per-task allocation, and several other bug fixes.
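For anyone upgrading, a quick sanity check of the new installation is a minimal MPI program such as the sketch below (generic C, nothing specific to the 1.4.5 changes; the file name is just for illustration), which prints each rank and the MPI standard version the library implements:

/* hello_mpi.c - minimal sanity check for a fresh MPI installation.
 * Build: mpicc hello_mpi.c -o hello_mpi
 * Run:   mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, major, minor;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process' rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
    MPI_Get_version(&major, &minor);       /* MPI standard version, e.g. 2.1 */

    printf("rank %d of %d, MPI standard %d.%d\n", rank, size, major, minor);

    MPI_Finalize();
    return 0;
}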

The MVAPICH team announced the release of MVAPICH2 1.8a2 and OSU Micro-Benchmarks (OMB) 3.5.1. The new features include: support for collective communication from GPU buffers; non-contiguous datatype support in point-to-point and collective communication from GPU buffers; efficient GPU-GPU transfers within a node using CUDA IPC; runtime adjustment of the shared-memory communication block size; XRC enabled by default at configure time; a new shared memory design for enhanced intra-node small message performance; and SLURM integration with mpiexec.mpirun_rsh to use SLURM-allocated hosts without specifying a hostfile. To download MVAPICH2 1.8a2, OMB 3.5.1, and the associated user guide and quick start guide, or to access the SVN repository, check http://mvapich.cse.ohio-state.edu.
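The GPU-buffer support deserves a quick illustration. With a CUDA-enabled build of MVAPICH2, device pointers can be passed directly to MPI calls and the library handles the data movement (including the direct intra-node path via CUDA IPC). Below is a minimal sketch, assuming two ranks that each have access to a GPU; the file name and buffer size are illustrative only:

/* gpu_sendrecv.c - sketch of MPI communication directly from GPU buffers.
 * Assumes an MPI library built with CUDA support (e.g. MVAPICH2 1.8a2
 * configured with CUDA enabled).
 * Build: mpicc gpu_sendrecv.c -o gpu_sendrecv -lcudart
 *        (add -I/-L flags for your CUDA installation as needed)
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int N = 1 << 20;   /* 1M doubles, ~8 MB */
    int rank;
    double *d_buf;           /* buffer in GPU device memory */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaMalloc((void **)&d_buf, N * sizeof(double));

    if (rank == 0) {
        /* The device pointer goes straight into MPI_Send - no manual
         * staging through host memory; the CUDA-aware library does the
         * transfer (via CUDA IPC for ranks on the same node). */
        MPI_Send(d_buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d doubles directly into GPU memory\n", N);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

Keep in mind this only works with a CUDA-aware MPI library; in the MVAPICH2 1.8 series the GPU path also has to be enabled at runtime through the MV2_USE_CUDA environment variable.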

Congrats to both teams on the new releases.