Wednesday, May 2, 2012

Amazon’s HPC cloud: not for HPC!


I came across an interesting article on Amazon’s HPC cloud. As was reported recently, Cycle Computing built a 50,000-core Amazon cluster for Schrödinger, which makes simulation software for pharmaceutical and biotechnology research. Amazon and Cycle Computing made a lot of noise around EC2’s HPC capabilities, how great it is for HPC applications, and what a great partner Schrödinger is.

When Schrödinger actually tried to run its simulations on the cloud, the results were not that great. The Amazon EC2 architecture slows down HPC applications that require a decent amount of communication between servers, even at small scale. Schrödinger President Ramy Farid mentioned that they have successfully run parallel jobs on Amazon’s eight-core boxes, but when they tried anything more than that, they got terrible performance. Farid was using Amazon’s eight-core server instances, so running a job on 16 cores simultaneously required two eight-core machines. “Slow interconnect speeds between separate machines does become a serious issue,” he noted.
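
To get a feel for that penalty on your own setup, a minimal MPI ping-pong test is enough. The sketch below is my own illustration, not Schrödinger’s code: run it with both ranks on one eight-core instance, then with the two ranks on two separate instances (for example, something like mpirun -np 2 -host nodeA,nodeB ./pingpong with Open MPI, where nodeA and nodeB are placeholder hostnames), and compare the reported round-trip times.

```c
/* Minimal MPI ping-pong sketch: rank 0 and rank 1 bounce a small message back
 * and forth and the average round-trip time is printed. Comparing a run where
 * both ranks share one machine with a run where they sit on two machines shows
 * how much the interconnect between instances costs. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    const int msg = 4096;            /* 4 KB message */
    char buf[4096] = {0};
    int i;

    double t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("average round trip: %.1f usec\n",
               (MPI_Wtime() - t0) * 1e6 / iters);

    MPI_Finalize();
    return 0;
}
```
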
It is known that Amazon EC2 is not a good place for HPC applications, and that will not change unless they actually build the right solution. But instead of taking the right steps, Cycle Computing CEO Jason Stowe decided that the application is at fault… and that the application tested represents only 1% of HPC applications, while Amazon cares about the other 99%. Jason, wake up! The applications tested are a good indication of how many other HPC applications will behave. Don’t blame the application or the user; blame yourself for building a lame solution.

Deepak Singh, Amazon’s principal product manager for EC2, also had smart things to say: “We’re interested in figuring out from our customers what they want to run, and then deliver those capabilities to them. There are certain specialized applications that require very specialized hardware. It’s like one person running it in some secret national laboratory.” Deepak, you need to wake up too and stop with these marketing responses. If you want to host HPC applications, don’t call every example a “secret national laboratory”, and stop calling standard solutions that everyone can buy from any server manufacturer, such as InfiniBand, “very specialized hardware”.

Amazon is clearly not connected to its users, or potential users, and until they try to understand what we need, the best thing to do is avoid them. There are much better solutions for HPC clouds out there.

Sunday, February 19, 2012

My Pick for the Best High-Performance Computing Conferences March-May 2012

There are many HPC-related conferences and workshops one can choose to attend. I prefer a conference that covers multiple topics and combines real hands-on work with technical sessions. Here are my picks for the best HPC conferences in the coming months:

March: The HPC Advisory Council Switzerland Conference, March 13-15 (http://www.hpcadvisorycouncil.com/events/2012/Switzerland-Workshop/index.php). The location is the beautiful city of Lugano, but more important, of course, is the agenda… the conference will cover all the major developments and will include hands-on sessions. Definitely worth the travel!

April: Two options to pick from - the 2012 High Performance Computing Linux for Wall Street in New York or the IDC HPC User Forum in Richmond, VA. Both lean more toward sessions and opinions than toward technical content, but they are decent options if you have the time…

May: Two options again – the NVIDIA GPU Technology Conference in San Jose, CA or the IEEE International Parallel & Distributed Processing Symposium (IPDPS) in Shanghai, China. IPDPS is of course more technical and covers more subjects – so it is the better option.

June: The International Supercomputing Conference in Germany, no doubt…

Wednesday, February 15, 2012

New MPIs released – Open MPI 1.4.5 and MVAPICH2 1.8

In close succession, both the Open MPI group and the MVAPICH team released new versions of their open-source MPI implementations. The Open MPI Team announced the release of Open MPI version 1.4.5. This is mainly a bug-fix release over v1.4.4. Version 1.4.5 can be downloaded from the main Open MPI web site and contains improved management of the registration cache, a fix for SLURM cpus-per-task allocation, as well as some other bug fixes.
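
After upgrading, a quick way to confirm that your builds are actually picking up the new release is to compile a trivial program against the new mpi.h. The version macros below (OMPI_MAJOR_VERSION and friends) are Open MPI-specific, so treat this as a small sanity-check sketch rather than portable MPI code.

```c
/* Sanity check after an Open MPI upgrade: print the version macros from mpi.h.
 * These macros are Open MPI-specific. Build with mpicc and run on one rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("Compiled against Open MPI %d.%d.%d\n",
               OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);

    MPI_Finalize();
    return 0;
}
```
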

The MVAPICH team announced the release of MVAPICH2 1.8a2 and OSU Micro-Benchmarks (OMB) 3.5.1. The new features include support for collective communication from GPU buffers, non-contiguous datatype support in point-to-point and collective communication from GPU buffers, efficient GPU-GPU transfers within a node using CUDA IPC, runtime adjustment of the shared-memory communication block size, XRC enabled by default at configure time, a new shared-memory design for enhanced intra-node small-message performance, and SLURM integration with mpirun_rsh to use SLURM-allocated hosts without specifying a hostfile. To download MVAPICH2 1.8a2, OMB 3.5.1, the associated user guide and quick start guide, or to access the SVN, check http://mvapich.cse.ohio-state.edu.
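
The GPU-buffer support is the headline item here: with a CUDA-enabled MVAPICH2 build, MPI calls can be handed device pointers directly, and the library takes care of the staging or IPC transfer itself. The sketch below is my own illustration of that usage pattern, not code from the release; error checking is omitted and it assumes two ranks, each with access to a GPU.

```c
/* Illustrative sketch of MPI communication from GPU buffers with a CUDA-enabled
 * MVAPICH2 build: the device pointer goes straight into MPI_Send/MPI_Recv, with
 * no explicit cudaMemcpy to a host staging buffer. Error checking omitted. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                    /* one million floats */
    float *dbuf;                              /* device buffer */
    cudaMalloc((void **)&dbuf, n * sizeof(float));
    cudaMemset(dbuf, 0, n * sizeof(float));

    if (rank == 0)
        MPI_Send(dbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);   /* send from GPU memory */
    else if (rank == 1)
        MPI_Recv(dbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* receive into GPU memory */

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}
```

With the new SLURM integration, mpirun_rsh can reportedly pick up the SLURM-allocated hosts directly, so launching a job like this inside an allocation should no longer require writing out a hostfile.
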

Congrats to both teams on the new releases.