As Intel has just released its new CPU platform, the Intel Xeon E5-2600 v3 (code name "Haswell"), we wanted to share some of our performance testing. For a start, we ran the simple InfiniBand bandwidth and latency benchmarks one can find as part of the InfiniBand software distribution. We measured around 6.4 gigabytes per second (GB/s) of bandwidth and latency of close to 0.6 microseconds. You can see the full graphs below. More to come!
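As a back-of-the-envelope sanity check, the measured 6.4 GB/s lines up well with the theoretical peak of an FDR InfiniBand link (this is an assumption about the adapter used: 4 lanes at 14.0625 Gb/s with 64b/66b encoding):

```python
# Hypothetical sanity check, assuming an FDR InfiniBand link:
# 4 lanes x 14.0625 Gb/s signaling, 64b/66b line encoding.
lanes = 4
signal_rate_gbps = 14.0625      # per-lane signaling rate, Gb/s
encoding = 64 / 66              # 64b/66b encoding efficiency

raw_gbps = lanes * signal_rate_gbps     # 56.25 Gb/s on the wire
data_gbps = raw_gbps * encoding         # ~54.5 Gb/s of data-carrying bits
peak_gbs = data_gbps / 8                # ~6.82 GB/s theoretical peak

measured_gbs = 6.4
print(f"theoretical peak: {peak_gbs:.2f} GB/s")
print(f"measured {measured_gbs} GB/s is {measured_gbs / peak_gbs:.0%} of peak")
```

So the benchmark is running at roughly 94% of what the wire can theoretically carry, which is about as good as it gets for large-message bandwidth tests.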
Monday, September 15, 2014
Remote Direct Memory Access (RDMA) is the technology that allows server-to-server data communication to go directly to user-space (aka application) memory without any CPU involvement. RDMA technology delivers faster performance for large data transfers while reducing CPU utilization and overhead. It is a technology used in many application segments – database, storage, cloud and of course HPC. All of the major MPI libraries include support for RDMA in the rendezvous protocol.
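To illustrate the rendezvous point: MPI libraries typically send small messages "eagerly" (copied into a pre-posted buffer) and switch to a rendezvous handshake followed by a zero-copy RDMA transfer for large ones. Here is a minimal sketch of that decision, not real MPI code; the threshold value is a hypothetical example (real MPIs make it tunable):

```python
# Illustrative sketch only - not an actual MPI implementation.
EAGER_THRESHOLD = 64 * 1024  # hypothetical cutoff; tunable in real MPI libraries

def choose_protocol(message_size: int) -> str:
    # Eager: the sender copies the message into a pre-posted receive
    # buffer on the target; cheap for small messages.
    # Rendezvous: sender and receiver first exchange a handshake, then
    # the payload moves via a zero-copy RDMA transfer straight into the
    # destination application buffer - this is where RDMA pays off.
    return "eager" if message_size <= EAGER_THRESHOLD else "rendezvous"

print(choose_protocol(1024))     # small control message
print(choose_protocol(8 << 20))  # 8 MB bulk transfer
```

The zero-copy rendezvous path is why RDMA reduces CPU overhead for large transfers: the CPU negotiates the handshake but never touches the payload.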
There are three communication standards for RDMA – InfiniBand (the de-facto solution for HPC), RoCE and iWARP. The latter two run over Ethernet. RoCE has been standardized by the IBTA organization, and iWARP by the IETF.
iWARP solutions are being sold by Intel (due to the acquisition of NetEffect) and Chelsio. RoCE solutions are being sold by Mellanox, Emulex and others. The major issues with iWARP are performance and scalability. With iWARP, the data needs to pass through multiple protocol layers before it can hit the wire, and therefore the performance iWARP delivers is not on par with RoCE (not to mention InfiniBand). The major RoCE limitation was the lack of layer-3 support, but this has been solved with the new RoCE v2 specification that is about to be released.
Last week Intel announced its new Ethernet NICs ("Fortville"). No iWARP support is listed for them, which leaves Intel without RDMA capability on its Ethernet NICs. It seems that the iWARP camp is shrinking… well… there is a RoCE reason for it…
Thursday, September 4, 2014
A recent release from the Texas Advanced Computing Center (TACC) sheds light on one of the research programs that is being supported, or better said enabled, by TACC's powerful supercomputers, among the fastest machines in the world. Using supercomputer simulations on TACC's "Lonestar" system, researchers are able to model radiation in a magnetic field, which will facilitate the safe use of the MRI-linac and enable more effective cancer treatment.
The research is being done by the MD Anderson Cancer Center in Houston. According to the team working on it, the new solution they are developing unites radiation therapy and magnetic resonance imaging (MRI), allowing physicians to view the cancer tumor in real time and in high detail during treatment. It also permits physicians to adapt the radiation treatment during the procedure, sparing healthy tissue and reducing side effects.
To develop the system, the MD Anderson team utilizes the TACC supercomputer to run complex simulations. A great use for the supercomputing power. The TACC system was built using the most flexible architecture, a cluster: a combination of CPUs and co-processors, with InfiniBand for the connectivity. It is a great example of a standards-based system, and of why there is no reason to use proprietary products for supercomputers. You can read more on TACC systems at https://www.tacc.utexas.edu/resources/hpc. I enjoy using them too.