Tuesday, April 19, 2016

HPC interconnects: InfiniBand versus Intel Omni-Path

This is a hot topic nowadays, certainly in the HPC world: the established InfiniBand solution versus Intel's new proprietary product, Omni-Path. There is plenty of information available on what Omni-Path is (it was announced at the 2015 supercomputing conference), but interestingly enough, Intel has not released any application performance results so far. You can find some low-level benchmarks such as network latency and bandwidth, but those numbers are similar between the two solutions and do not reflect the main architectural difference between InfiniBand and Omni-Path: offloading the network processing to the adapter versus onloading it to the host CPU.

Just a few days ago, Mellanox published an interesting article covering the main differences between InfiniBand and Omni-Path in detail, and for the first time released some application performance comparisons. Not surprisingly, I guess (Omni-Path is, after all, based on the QLogic TrueScale architecture), InfiniBand shows much higher application performance even at small cluster sizes, reaching up to a 60% performance advantage. It is an interesting read, and it seems that InfiniBand will remain the top solution for HPC systems.
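To make the offload-versus-onload distinction concrete, here is a minimal sketch (my own illustration, not taken from the article) of a common MPI pattern that benefits from offloading: non-blocking communication overlapped with computation. The buffer size, the rank pairing, and the compute() placeholder are all assumptions for illustration. With an offloading interconnect like InfiniBand, the adapter can progress the transfer in hardware while compute() runs; an onloading design has to spend CPU cycles driving the network during that same window.

    /*
     * Sketch only: shows the overlap pattern, not a benchmark.
     * Buffer size, rank pairing, and compute() are illustrative assumptions.
     */
    #include <mpi.h>
    #include <stdlib.h>

    static void compute(double *work, int n) {
        /* placeholder for application work that should overlap the transfer */
        for (int i = 0; i < n; i++)
            work[i] = work[i] * 0.5 + 1.0;
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int N = 1 << 20;                      /* 1M doubles per message */
        double *sendbuf = calloc(N, sizeof *sendbuf);
        double *recvbuf = calloc(N, sizeof *recvbuf);
        double *work    = calloc(N, sizeof *work);

        int peer = rank ^ 1;                        /* pair up neighboring ranks */
        if (peer < size) {
            MPI_Request reqs[2];

            /* start the exchange, then immediately return to computing;
             * an offloaded adapter moves the data without further CPU help */
            MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

            compute(work, N);                       /* overlaps the in-flight transfer */

            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        }

        free(sendbuf); free(recvbuf); free(work);
        MPI_Finalize();
        return 0;
    }

The fewer CPU cycles the interconnect steals during that overlap window, the more are left for the application itself, which is presumably the effect behind the application-level gaps the article reports.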
You can find the article at http://www.hpcwire.com/2016/04/12/interconnect-offloading-versus-onloading/.