Friday, December 16, 2011

InfiniBand for the Home in Less Than $150 (10Gb Networking on the Cheap)


I came across Dave Hunt's blog - http://davidhunt.ie/wp/?p=232 - discussing the use of a previous generation of InfiniBand (10Gb/s) at home. Think about your home entertainment system, Blu-ray streaming from a storage box to your PC, and other usage models…

When it comes to Ethernet at 10Gb/s, the price is sky-high, but you can get the same speed with InfiniBand for pennies. Dave built a setup that sustains over 700MB/sec of throughput between his PCs at home for under $150! As Dave says, that’s like a full CD’s worth of data every second…

From Dave's blog – “So, I now have an InfiniBand fabric working at home, with over 7 gigabit throughput between PCs. The stuff of high-end datacenters in my back room. The main thing is that you don’t need a switch, so a PC-to-PC 10-gigabit link CAN be achieved for under $150! Here’s the breakdown: 2 x Mellanox MHEA28-XTC InfiniBand HCAs @ $34.99 + shipping = $113 (from eBay), 1 x 3m Molex SFF-8470 InfiniBand cable including shipping = $29. Total: $142”.
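
If you try to replicate this at home, the first thing to check once the two HCAs are cabled back-to-back is that the port actually comes up (on a switchless PC-to-PC link you also need a subnet manager such as opensm running on one of the machines). Below is a minimal sketch, not taken from Dave's post, that uses the libibverbs API shipped with OFED to query the first HCA and print the state of port 1; the file name and the choice of port are just illustrative.

```c
/* port_check.c - minimal sketch (not from Dave's post): query the first
 * InfiniBand HCA and print the state of port 1, to confirm that a
 * back-to-back link has come up. Build with: gcc port_check.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "No InfiniBand devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    if (!ctx) {
        fprintf(stderr, "Failed to open %s\n", ibv_get_device_name(list[0]));
        return 1;
    }

    struct ibv_port_attr attr;
    if (ibv_query_port(ctx, 1, &attr) == 0) {
        printf("%s port 1: state=%d (4 == ACTIVE), width=%d, speed=%d\n",
               ibv_get_device_name(list[0]),
               attr.state, attr.active_width, attr.active_speed);
    }

    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```

Once the state reads ACTIVE, the perftest tools (ib_read_bw / ib_write_bw) can measure the actual point-to-point throughput.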

My next step will be to bring InfiniBand into my own home…

PCI-Express 3.0 is Finally Here…


It has been a long time since my last post… You know how it is – a new academic year, lots of preparations, going to Supercomputing 2011 in freezing Seattle…

One of the new technologies I have really been waiting for is PCI Express 3.0. PCI Express 2.0 was released in 2008, and it becomes the bottleneck whenever you use a network faster than 20Gb/s, such as QDR or FDR InfiniBand. It is about time for the new generation to come out.
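
To put some numbers behind the bottleneck claim, here is a back-of-the-envelope sketch (my own arithmetic, using the published per-lane signalling rates and encoding overheads, and ignoring PCIe protocol overhead) comparing what an x8 slot can move per generation against the data rate of a QDR or FDR InfiniBand port.

```c
/* pcie_bw.c - back-of-the-envelope sketch: usable bandwidth of an x8 PCIe
 * slot per generation vs. the data rate of a QDR/FDR InfiniBand port.
 * PCIe protocol (TLP/DLLP) overhead is ignored, so real numbers are lower.
 */
#include <stdio.h>

int main(void)
{
    /* per-lane signalling rate (GT/s) and encoding efficiency */
    const struct { const char *gen; double gts; double enc; } pcie[] = {
        { "PCIe 1.x", 2.5, 8.0 / 10.0 },    /* 8b/10b    */
        { "PCIe 2.0", 5.0, 8.0 / 10.0 },    /* 8b/10b    */
        { "PCIe 3.0", 8.0, 128.0 / 130.0 }  /* 128b/130b */
    };
    const int lanes = 8; /* typical HCA slot */

    for (int i = 0; i < 3; i++) {
        double gbps = pcie[i].gts * pcie[i].enc * lanes;
        printf("%s x%d: ~%.1f Gb/s of data\n", pcie[i].gen, lanes, gbps);
    }

    /* InfiniBand data rates: QDR 40Gb/s link with 8b/10b -> 32 Gb/s,
     * FDR 56Gb/s link with 64b/66b -> ~54.5 Gb/s */
    printf("QDR InfiniBand: 32.0 Gb/s, FDR InfiniBand: ~54.5 Gb/s\n");
    return 0;
}
```

PCIe 2.0 x8 tops out around 32Gb/s before protocol overhead, so it can barely keep a QDR port busy and has no chance of feeding FDR; PCIe 3.0 x8, at roughly 63Gb/s, finally can.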

While sources said that the official launch of PCI Express 3.0 platforms will be in the March 2012 time frame, the first systems based on PCI Express 3.0 (and InfiniBand FDR!!) are already out there. If you were at SC’11, or if you monitor the news from the TOP500 list, you could hear (or read) about the new systems. One of them is the Carter supercomputer at Purdue University, which is said to be the fastest campus supercomputer in the US. The Carter system was ranked 54th on the November TOP500.org list and was built using the latest technologies from Intel, HP and Mellanox, including the not-yet-released Intel Xeon E5 "Sandy Bridge" processors, HP ProLiant servers and the already-released InfiniBand FDR.

The folks from Purdue claim that "Carter is running twice as fast as the supercomputer we were using and is using only half of the nodes. That will allow us to scale our models for better forecasts." So higher performance at a lower operational cost. Great deal…

Saturday, October 15, 2011

New MPI Versions Announced – Open MPI and MVAPICH (OSU)


This week, one day apart, new versions of Open MPI and MVAPICH were announced. On Thursday, the Open MPI team announced the release of Open MPI version 1.4.4. This is mainly a bug-fix release over the previous v1.4.3, and the team strongly recommends that all users upgrade to version 1.4.4 if possible.

A day later, on Friday, Prof. Dhabaleswar Panda of Ohio State University announced, on behalf of the MVAPICH team, the release of MVAPICH2 1.7 and OSU Micro-Benchmarks (OMB) 3.4. You can check the Open MPI and OSU websites for details on the new releases.

As part of the release, Prof. Panda provided some performance results. According to him, MVAPICH2 1.7 is being made available with OFED 1.5.4 and continues to deliver excellent performance. With OpenFabrics/Gen2 on Westmere quad-core (2.53 GHz) nodes with PCIe Gen2 and Mellanox ConnectX-2 QDR (two-sided operations), it delivers 1.64 microseconds one-way latency (4 bytes), 3394 MB/sec unidirectional bandwidth and 6537 MB/sec bidirectional bandwidth. With QLogic InfiniPath support on the same Westmere quad-core (2.53 GHz) platform with PCIe Gen2 and QLogic QDR (two-sided operations), it delivers 1.70 microseconds one-way latency (4 bytes), 3265 MB/sec unidirectional bandwidth and 4228 MB/sec bidirectional bandwidth. These results clearly indicate that if you go with InfiniBand, Mellanox ConnectX-2 provides lower latency and considerably higher throughput – clearly the performance winner.
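
The latency and bandwidth numbers above come from the OSU micro-benchmarks. For readers who have not seen how such figures are produced, here is a minimal ping-pong sketch in the same spirit as osu_latency (my own illustration, not the OSU code): two ranks bounce a 4-byte message back and forth, and the round-trip time is halved to get the one-way latency.

```c
/* pingpong.c - minimal sketch of the ping-pong idea behind osu_latency
 * (not the OSU code itself). Run with: mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000, skip = 100, size = 4; /* 4-byte message */
    char buf[4] = {0};
    int rank;
    double start = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < iters + skip; i++) {
        if (i == skip) {                 /* warm-up done, start timing */
            MPI_Barrier(MPI_COMM_WORLD);
            start = MPI_Wtime();
        }
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        double usec = (MPI_Wtime() - start) * 1e6 / iters / 2.0; /* one-way */
        printf("one-way latency (%d bytes): %.2f usec\n", size, usec);
    }

    MPI_Finalize();
    return 0;
}
```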

Thursday, October 13, 2011

Fibre Channel at a Dead End, Better to Invest in Other Technologies


One of the most widely used solutions for storage connectivity is Fibre Channel. Personally, for high performance computing I prefer to use Lustre, but in my organization you can also find enterprise-class systems with enterprise storage networks, which have mainly been Fibre Channel based.

As we evaluate every technology before acquiring new systems, I recently reviewed the options for an enterprise-class storage solution. A quick bit of history: while Fibre Channel was originally created for general-purpose networking, it has ended up as a storage networking solution. Fibre Channel is standardized in the T11 Technical Committee of the InterNational Committee for Information Technology Standards (INCITS).

In the early 2000s, Fibre Channel ran at 2Gb/s, Ethernet was just getting to 1Gb/s, and InfiniBand was not there yet. Having the storage network run at twice the speed of the communication network was a good reason for Fibre Channel adoption. In the mid-2000s, Fibre Channel was at 4Gb/s, Ethernet still at 1GigE, and InfiniBand was moving to 20Gb/s. Nowadays, Fibre Channel is at 8Gb/s, Ethernet at 10Gb/s and InfiniBand at 56Gb/s. The next speed bumps are 16Gb/s for Fibre Channel, 40Gb/s for Ethernet and 100Gb/s for InfiniBand. With Ethernet and InfiniBand evolving faster than Fibre Channel, there is clearly no reason to keep using Fibre Channel, and any investment in it for future deployments is a mistake.

If we consider latency as another factor, the arrival of SSDs does not help Fibre Channel either. The latency benefit of SSDs is largely lost when they sit behind a Fibre Channel network, leaving Ethernet or InfiniBand as the only real options for SSD-based storage.

FCoE (Fibre Channel over Ethernet) is not the solution either; it seems to be just another attempt by the Fibre Channel vendors to extend the lifetime of a dying storage technology. There is no reason to keep using Fibre Channel. You are better off investing in Ethernet (iSCSI, for example) or InfiniBand storage for your next system – higher throughput, lower latency and better economics. I also believe that iSCSI deployments are on the rise while Fibre Channel is on the decline – yet another proof point.

Thursday, October 6, 2011

9 Days Left For The 38th TOP500 Supercomputers List Submission

The TOP500 supercomputers list ranks the 500 most powerful known computer systems in the world. The project was started in 1993 and publishes an updated list twice a year – in June and November, at the ISC and SC conferences. The TOP500 project aims to provide a reliable basis for tracking and detecting trends in high-performance computing, and bases its rankings on the Linpack benchmark. The list is compiled by Hans Meuer of the University of Mannheim, Germany, Jack Dongarra of the University of Tennessee, Knoxville, and Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory.

The next TOP500 list, the 38th, will be published at the coming SC’11 conference, and the deadline for submissions is October 15th – 9 days are left for organizations around the world to submit their new systems. The estimated entry-level performance for the 38th list is around 52TFlops. The current #1 system on the TOP500 list is the Fujitsu K computer at 8 Petaflops, and it may well remain #1 on the 38th list.
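
As a reminder, the ranking is by Rmax, the sustained Linpack (HPL) result, which is some fraction of the machine's theoretical peak (Rpeak). The sketch below shows the rough arithmetic for clearing the ~52TFlops entry level; every node parameter and the assumed HPL efficiency are illustrative guesses, not figures from any real submission.

```c
/* entry_level.c - rough sketch: theoretical peak (Rpeak) of a cluster and a
 * guess at its Linpack Rmax. All node parameters below are illustrative
 * assumptions, not figures from any real TOP500 submission.
 */
#include <stdio.h>

int main(void)
{
    const double ghz            = 2.53; /* clock, GHz                        */
    const int    cores_per_node = 8;    /* e.g. 2 sockets x quad-core        */
    const int    flops_per_clk  = 4;    /* double-precision FLOPs/cycle/core */
    const double hpl_efficiency = 0.85; /* assumed Rmax/Rpeak ratio          */
    const double entry_tflops   = 52.0; /* estimated 38th-list entry level   */

    double node_gflops = ghz * cores_per_node * flops_per_clk; /* per-node Rpeak */

    for (int nodes = 100; nodes <= 800; nodes += 100) {
        double rpeak = nodes * node_gflops / 1000.0;  /* TFlops */
        double rmax  = rpeak * hpl_efficiency;        /* TFlops */
        printf("%4d nodes: Rpeak %.1f TFlops, est. Rmax %.1f TFlops%s\n",
               nodes, rpeak, rmax,
               rmax >= entry_tflops ? "  <- clears the entry level" : "");
    }
    return 0;
}
```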

If you want to try to predict the coming TOP500 list results, and maybe win an iPad 2, check out http://top500.delphit.com/. Good luck!

Sunday, September 25, 2011

The Tiered Network is Dead, Long Live the Flat Network


An interesting article by Timothy Prickett Morgan was published a few days ago on The Register, discussing traditional three-tier networks versus flat networks. While the topic is not new, it seems that more datacenter networks are now adopting high-performance architectures and going flat.

The three-tier network is the traditional hierarchical datacenter network. It was never fast enough, nor cost-effective enough, but it was championed by Cisco and is therefore used in many places. The HPC side of the world went for flat networks, which deliver lower latency, higher utilization and, of course, much better economics. Many of today’s “web 2.0” applications adopt the HPC concepts of parallelism and suffer from poor performance on a three-tier network. Going flat is the ideal situation for them.

It was funny to read that Blade Network Technologies (now IBM) agrees that datacenter applications are going the HPC way, but that their company vision is to continue with the three-tier concept… well… in my eyes that is the right decision only if you want to stay behind… A flat network, such as a Clos network, is the better way to build any large-scale system, both for performance and for cost.
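
For readers who have not met the Clos/fat-tree idea, here is a small sketch of the sizing arithmetic for a non-blocking two-level fabric built from k-port switches (the port counts are just examples): every leaf splits its ports evenly between hosts and uplinks, so the fabric keeps full bisection bandwidth instead of the oversubscribed uplinks typical of a three-tier design.

```c
/* clos_sizing.c - sketch of the sizing arithmetic for a non-blocking
 * two-level Clos (leaf/spine fat-tree) built from k-port switches.
 * Each leaf uses k/2 ports for hosts and k/2 uplinks, so bisection
 * bandwidth equals host bandwidth (no oversubscription).
 */
#include <stdio.h>

int main(void)
{
    const int radix[] = { 24, 36, 48 }; /* example switch port counts */

    for (int i = 0; i < 3; i++) {
        int k        = radix[i];
        int leaves   = k;         /* each spine has k ports, one per leaf      */
        int spines   = k / 2;     /* each leaf has k/2 uplinks, one per spine  */
        int hosts    = k * k / 2; /* leaves x (k/2 host ports each)            */
        int switches = leaves + spines;

        printf("k=%2d-port switches: %4d hosts, %3d switches, 1:1 bisection\n",
               k, hosts, switches);
    }
    return 0;
}
```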