Saturday, October 15, 2011

New MPI Versions Announced – Open MPI and MVAPICH (OSU)

This week, one day apart, new versions of Open MPI and MVAPICH were announced. On Thursday, the Open MPI team announced the release of Open MPI version 1.4.4, mainly a bug-fix release over the previous v1.4.3. The team strongly recommends that all users upgrade to version 1.4.4 if possible.

A day later, on Friday, Prof. Dhabaleswar Panda of Ohio State University announced on behalf of the MVAPICH team the release of MVAPICH2 1.7 and OSU Micro-Benchmarks (OMB) 3.4. You can check the Open MPI and OSU websites for details on the new releases.

As part of the release announcement, Prof. Panda provided some performance results. According to him, MVAPICH2 1.7 is being made available with OFED 1.5.4 and continues to deliver excellent performance. OpenFabrics/Gen2 on Westmere quad-core (2.53 GHz) with PCIe-Gen2 and Mellanox ConnectX-2 QDR (two-sided operations) provides 1.64 microsec one-way latency (4 bytes), 3394 MB/sec unidirectional bandwidth and 6537 MB/sec bidirectional bandwidth. QLogic InfiniPath support on Westmere quad-core (2.53 GHz) with PCIe-Gen2 and QLogic QDR (two-sided operations) provides 1.70 microsec one-way latency (4 bytes), 3265 MB/sec unidirectional bandwidth and 4228 MB/sec bidirectional bandwidth. These results indicate that if you go with InfiniBand, Mellanox ConnectX-2 provides lower latency and noticeably higher throughput – clearly the performance winner here.
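To get a feel for what these numbers mean for real message sizes, the standard latency–bandwidth (alpha–beta) model estimates transfer time as latency plus size divided by bandwidth. Here is a minimal sketch plugging in the ConnectX-2 figures quoted above; the model is the textbook one, and the helper function name is mine:

```python
def transfer_time_us(msg_bytes, latency_us, bandwidth_mb_s):
    """Alpha-beta model: time = latency + size / bandwidth.
    Bandwidth is in MB/s with 1 MB = 1e6 bytes, as in the OSU benchmarks."""
    return latency_us + msg_bytes / bandwidth_mb_s  # bytes / (MB/s) = microseconds

# Mellanox ConnectX-2 QDR figures quoted above: 1.64 us, 3394 MB/s
small = transfer_time_us(4, 1.64, 3394)          # latency-bound: ~1.64 us
large = transfer_time_us(1_000_000, 1.64, 3394)  # bandwidth-bound: ~296 us

print(f"4 B message:  {small:.2f} us")
print(f"1 MB message: {large:.1f} us")
```

The point of the model: at 4 bytes the wire time is negligible and latency dominates, while at 1 MB the 1.64 us latency is lost in the noise and bandwidth dominates – which is exactly why the OSU benchmarks report both metrics separately.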

Thursday, October 13, 2011

Fibre Channel at a Dead End, Better to Invest in Other Technologies

One of the most widely used solutions for storage connectivity is Fibre Channel. Personally, for high performance computing I prefer to use Lustre, but in my organization you can also find enterprise-class systems with enterprise storage networks, which are mainly Fibre Channel based.

As we evaluate every technology before we acquire new systems, I recently reviewed the options for an enterprise-class storage solution. A quick bit of history: while Fibre Channel was originally created for general-purpose networking, it has become primarily a storage networking solution. Fibre Channel is standardized in the T11 Technical Committee of the InterNational Committee for Information Technology Standards (INCITS).

In the early 2000s, Fibre Channel ran at 2Gb/s, Ethernet was just getting to 1Gb/s, and InfiniBand was not yet available. Having storage links at 2x the network capacity was a good reason for Fibre Channel adoption. In the mid 2000s, Fibre Channel was at 4Gb/s, Ethernet still at 1GigE, and InfiniBand was moving to 20Gb/s. Nowadays, Fibre Channel is at 8Gb/s, Ethernet at 10Gb/s and InfiniBand at 56Gb/s. The next speed bump for Fibre Channel is 16Gb/s, for Ethernet 40Gb/s and for InfiniBand 100Gb/s. Given this faster evolution of Ethernet and InfiniBand compared to Fibre Channel, there is no compelling reason to keep using Fibre Channel, and any investment in it for future deployments is a mistake.
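The headline rates above are signaling rates; the usable data rate also depends on the line encoding, which widens the gap further. 8Gb/s Fibre Channel and QDR InfiniBand use 8b/10b encoding (80% efficient), while 10GbE and FDR InfiniBand use 64b/66b (~97% efficient). A small sketch of the arithmetic – the signaling rates below are the nominal ones I believe apply, and the helper is mine:

```python
ENCODING_EFFICIENCY = {"8b/10b": 8 / 10, "64b/66b": 64 / 66}

def effective_gbps(signaling_gbps, encoding):
    """Usable data rate after line-encoding overhead."""
    return signaling_gbps * ENCODING_EFFICIENCY[encoding]

links = [
    ("8G Fibre Channel", 8.5,     "8b/10b"),   # 8GFC signals at 8.5 GBaud -> 6.8 Gb/s usable
    ("10G Ethernet",     10.3125, "64b/66b"),  # -> 10.0 Gb/s usable
    ("QDR InfiniBand",   40,      "8b/10b"),   # 4 lanes x 10 Gb/s -> 32 Gb/s usable
    ("FDR InfiniBand",   56,      "64b/66b"),  # 4 lanes x ~14 Gb/s -> ~54.3 Gb/s usable
]
for name, rate, enc in links:
    print(f"{name:18s} {effective_gbps(rate, enc):6.2f} Gb/s usable")
```

Note that the 32 Gb/s (4 GB/s) usable rate of QDR lines up with the ~3394 MB/s measured MPI bandwidth quoted in the MVAPICH2 post above – the benchmark is close to the wire limit.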

If we consider latency as another factor, the arrival of SSDs does not help Fibre Channel either. The latency benefit of SSDs is eliminated when Fibre Channel is used, leaving Ethernet or InfiniBand as the only real options for SSD-based storage.

FCoE (Fibre Channel over Ethernet) is not the solution either; it seems to be just another attempt by the Fibre Channel vendors to extend the lifetime of a dying storage technology. There is no reason to continue using Fibre Channel. You are better off investing in Ethernet (iSCSI, for example) or InfiniBand storage for your next system – higher throughput, lower latency and better economics. I also believe that iSCSI deployments are on the increase while Fibre Channel is on the decrease – yet another proof point.

Thursday, October 6, 2011

9 Days Left For The 38th TOP500 Supercomputers List Submission

The TOP500 supercomputers list ranks the 500 most powerful known computer systems in the world. The project was started in 1993 and publishes an updated list twice a year, in June and November (at the ISC and SC conferences, respectively). The TOP500 project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases its rankings on the Linpack benchmark. The TOP500 list is compiled by Hans Meuer of the University of Mannheim, Germany, Jack Dongarra of the University of Tennessee, Knoxville, and Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory.

The next TOP500 list, the 38th, will be published at the coming SC'11 conference, and the deadline for submissions is October 15th – 9 days are left for organizations around the world to submit their new systems. The estimated entry-level performance for the 38th list is around 52 TFlops. The current #1 system is the Fujitsu K computer at 8 Petaflops, and it may well remain #1 on the 38th list as well.
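For a rough sense of what an ~52 TFlops entry level means in hardware, the theoretical peak (Rpeak) of a cluster is nodes x sockets x cores x clock x flops/cycle. A back-of-the-envelope sketch using the Westmere parts mentioned in the MVAPICH2 post above (2.53 GHz quad-core, 4 double-precision flops per cycle per core – my assumptions, not figures from the list itself):

```python
def rpeak_tflops(nodes, sockets, cores, ghz, flops_per_cycle):
    """Theoretical peak in TFlops: nodes * sockets * cores * clock * flops/cycle."""
    return nodes * sockets * cores * ghz * flops_per_cycle / 1000.0

# One dual-socket Westmere node: 2 sockets x 4 cores x 2.53 GHz x 4 flops/cycle
per_node = rpeak_tflops(1, 2, 4, 2.53, 4)  # ~0.081 TFlops (~81 GFlops) per node
nodes_needed = 52 / per_node               # ~642 nodes to reach the ~52 TFlops entry level

print(f"{per_node * 1000:.1f} GFlops per node, ~{nodes_needed:.0f} nodes for 52 TFlops Rpeak")
```

Keep in mind that Linpack (Rmax) typically achieves only a fraction of Rpeak, so a real system would need more nodes than this estimate to actually rank at the entry level.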

If you want to try and predict the coming TOP500 list results, and maybe win an iPad 2, check out the prediction contest website. Good luck!