Wednesday, August 20, 2014

GPUDirect RDMA in Action

Last year I wrote about the release of the GPUDirect RDMA technology. Simply put, this is the technology that enables direct communication between GPUs over an RDMA-capable network, which translates into much higher performance for applications that use GPUs – high performance computing, data analytics, gaming etc., basically any application that runs on more than a single GPU. In the past, every data movement to or from the GPU had to go through CPU memory; with GPUDirect RDMA that is no longer the case. The data goes directly from GPU memory to the RDMA-capable network (for example InfiniBand) – latency is reduced by more than 70%, throughput is increased by 5-6X, and the CPU bottleneck is eliminated.
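
To make that concrete, here is a minimal sketch (mine, not from the original announcement) of how applications typically use GPUDirect RDMA: through a CUDA-aware MPI library built with GPUDirect RDMA support, a device pointer is handed straight to MPI_Send/MPI_Recv and the data never takes a detour through host memory. The buffer size and the choice of a GDR-enabled MPI build are assumptions for illustration only.

    /* Hedged sketch: sending a GPU buffer directly over the network with a
     * CUDA-aware MPI library that has GPUDirect RDMA enabled (assumed setup).
     * The key point: the device pointer d_buf is passed straight to MPI;
     * no cudaMemcpy staging through host memory is needed. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;                      /* 1M floats, illustrative size */
        float *d_buf;
        cudaMalloc((void **)&d_buf, (size_t)n * sizeof(float));
        cudaMemset(d_buf, 0, (size_t)n * sizeof(float));

        if (rank == 0)
            /* Device pointer passed directly; with GPUDirect RDMA the adapter
             * reads the data from GPU memory over PCI-Express. */
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }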


The University of Cambridge and the HPC Advisory Council have released performance results for GPUDirect RDMA with one of their applications on a nearly 100-server system. The application is HOOMD-blue, a general-purpose molecular dynamics simulation package created at the University of Michigan that can run on GPUs. Each server includes two GPUs and two InfiniBand RDMA adapters, so each GPU can connect directly to the network instead of going through the CPU and the QPI interface (each GPU and network adapter pair sits on the same PCI-Express root complex). Bottom line, the GPUDirect RDMA technology enabled Cambridge to double HOOMD-blue performance on the given system. Same system, same hardware, just turning on GPUDirect RDMA, and twice the performance…  Impressive.
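
For the curious, here is a short sketch of how that GPU/adapter pairing might be honored in code: each MPI rank determines its rank within the node and binds to the matching GPU, so it naturally uses the GPU and network adapter sitting on its own PCI-Express root complex. The local-rank trick via MPI_Comm_split_type is my assumption about how such a job could be set up, not Cambridge's actual configuration.

    /* Hedged sketch: bind each MPI rank on a node to its own GPU
     * (two GPUs per node, as in the Cambridge system). */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Rank within this node (MPI-3 shared-memory split). */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        int local_rank;
        MPI_Comm_rank(node_comm, &local_rank);

        /* Local rank 0 -> GPU 0, local rank 1 -> GPU 1, and so on. */
        int num_gpus;
        cudaGetDeviceCount(&num_gpus);
        cudaSetDevice(local_rank % num_gpus);

        printf("local rank %d using GPU %d\n", local_rank, local_rank % num_gpus);

        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }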


