GPUDirect RDMA is the newest technology for GPU-to-GPU communication over the InfiniBand interconnect. It enables direct data transfers from GPU memory over the InfiniBand network via PCI Express peer-to-peer (P2P). The capability was introduced with the NVIDIA Kepler-class GPUs, CUDA 5.0 and the Mellanox InfiniBand solutions.
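To make the idea concrete, here is a minimal sketch of what this looks like from the application side (my own illustration, not code from the OSU presentation), assuming a CUDA-aware MPI build such as MVAPICH2 with GPUDirect RDMA support: the device pointer is handed straight to MPI, and the library and the InfiniBand HCA move the data without a detour through host memory.

/* Minimal sketch: sending a GPU-resident buffer directly through a
 * CUDA-aware MPI. With GPUDirect RDMA the InfiniBand HCA reads/writes
 * the GPU memory over PCIe peer-to-peer, so no staging copy to host
 * memory is needed. Error checking is omitted for brevity. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank;
    int n = 1 << 20;                 /* 1M floats, arbitrary message size */
    float *d_buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, n * sizeof(float));   /* buffer lives in GPU memory */

    if (rank == 0) {
        /* Pass the device pointer straight to MPI; a CUDA-aware MPI detects it
         * and (with GPUDirect RDMA) lets the HCA pull the data from the GPU. */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}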
The importance of this capability is that it bypasses the CPU for GPU communications (who needs the CPU anyway…), which translates into a dramatic increase in performance. Finally, after a long wait, the two companies mentioned above demonstrated the new capability at the recent ISC'13 conference. Prof. Dhabaleswar K. (DK) Panda, Hari Subramoni and Sreeram Potluri from Ohio State University presented their first results with GPUDirect RDMA at the HPC Advisory Council workshop: a 70% reduction in latency! You can see the entire presentation at http://www.hpcadvisorycouncil.com/events/2013/European-Workshop/presentations/9_OSU.pdf.
It seems that GE Intelligent Platforms is already using the new technology - http://www.militaryaerospace.com/whitepapers/2013/03/gpudirect_-rdma.html - a great example of how the new capability can make our lives better (or faster…). You can also read more at http://docs.nvidia.com/cuda/gpudirect-rdma/index.html.
In the graph: latency improvement presented by DK Panda
The video of the presentation is available as well: http://insidehpc.com/2013/07/02/video-mvapich2-and-gpudirect-rdma/