A couple of days ago I read an interesting blog post by Marc Hamilton, vice president of HPC at HP. The post addresses the subject of InfiniBand versus Ethernet, and I decided to share it here as well. I try to avoid vendor publicity, but in this case I thought the discussion was of broader interest. So here is a copy of Marc's post, with my little comments.
“InfiniBand networking, typically used in high-end supercomputers, including nearly half of the Top500 fastest supercomputers, continues to address new markets outside of the traditional supercomputing space. Just today (Ben – July 29th), NYSE Technologies announced availability of a significant performance upgrade of its middleware platform, Data Fabric, and demonstrated a message rate of over a million 200-byte messages per second over QDR (40Gb/sec) InfiniBand. And as Data Center Knowledge reported earlier in the week (Ben – July 28th), Microsoft’s Bing Maps site is now running on an InfiniBand network.
It is easy to argue that 10Gb and new 40Gb Ethernet technologies have broader market reach than InfiniBand; in fact, the Mellanox CX2 cards used in the NYSE Technologies benchmark support both InfiniBand and 10Gb Ethernet. But InfiniBand still has a clear performance advantage today when low latency is a key requirement. Meanwhile, Mellanox and fellow InfiniBand vendor QLogic aren’t standing still. Mellanox is already selling CX3 kit supporting 40Gb Ethernet and FDR (56Gb/s) InfiniBand, although to take full advantage of FDR you will need to wait for next-generation PCIe Gen3-compatible CPUs and servers to become available.
HP was an early adopter of InfiniBand technology and in fact designed InfiniBand onto the motherboard of both our ProLiant SL390s G7 server and our ProLiant BL2x220c G7 server blade. In these designs, we used the Mellanox CX2 chipset and thus both products support 40Gb QDR InfiniBand or 10Gb Ethernet without expensive add-on cards.”
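My little comment on the numbers: here is a rough back-of-the-envelope sketch in Python (my own rounded figures, not Marc's; it assumes an x8 PCIe slot and the standard link encodings) showing why the NYSE benchmark is a message-rate and latency story rather than a bandwidth one, and why FDR really does need PCIe Gen3:

    # Rough, rounded figures; assumes an x8 slot and standard link encodings.
    MSGS_PER_SEC = 1_000_000            # reported message rate
    MSG_BYTES = 200                     # reported message size

    payload_gbps = MSGS_PER_SEC * MSG_BYTES * 8 / 1e9
    qdr_data_gbps = 40 * 8 / 10         # QDR: 40 Gb/s signaling, 8b/10b encoding
    print(f"NYSE payload: {payload_gbps:.1f} Gb/s, about "
          f"{100 * payload_gbps / qdr_data_gbps:.0f}% of a {qdr_data_gbps:.0f} Gb/s QDR link")

    # FDR vs. the host bus: a 4x FDR port carries roughly 54 Gb/s of data
    # (4 lanes x 14.0625 Gb/s, 64b/66b encoding).
    fdr_data_gbps = 4 * 14.0625 * 64 / 66
    pcie_gen2_x8_gbps = 8 * 5 * 8 / 10      # Gen2: 5 GT/s per lane, 8b/10b
    pcie_gen3_x8_gbps = 8 * 8 * 128 / 130   # Gen3: 8 GT/s per lane, 128b/130b
    print(f"FDR ~{fdr_data_gbps:.0f} Gb/s vs PCIe Gen2 x8 ~{pcie_gen2_x8_gbps:.0f} Gb/s "
          f"(not enough) and Gen3 x8 ~{pcie_gen3_x8_gbps:.0f} Gb/s (enough)")

In other words, a million 200-byte messages fill only about 5% of a QDR link, so the interesting part of that benchmark is the per-message overhead and latency, while an FDR port simply cannot be fed at line rate through a PCIe Gen2 x8 slot.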