Exploiting Hardware Multicast and GPUDirect RDMA for Efficient Broadcast

IEEE Transactions on Parallel and Distributed Systems, 2019

Ching-Hsiang Chu, Xiaoyi Lu, Ammar Awan, Hari Subramoni, Bracy Elton, Dhabaleswar K. Panda

Abstract

Broadcast is a widely used operation in many streaming and deep learning applications to disseminate large amounts of data on emerging heterogeneous High-Performance Computing (HPC) systems. However, traditional broadcast schemes do not fully utilize hardware features for Graphics Processing Unit (GPU)-based applications. In this paper, a model-oriented analysis is presented to identify performance bottlenecks of existing broadcast schemes on GPU clusters. Next, streaming-based broadcast schemes are proposed that exploit InfiniBand hardware multicast (IB-MCAST) and NVIDIA GPUDirect technology for efficient message transmission. The proposed designs are evaluated using Message Passing Interface (MPI)-based benchmarks and applications. The experimental results indicate improved scalability and up to an 82 percent reduction in latency compared to state-of-the-art solutions in the benchmark-level evaluation. Furthermore, compared to the state of the art, the proposed design yields consistently higher throughput for a synthetic streaming workload and 1.3x faster training for a deep learning framework.

Journal Article

Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 30
Number: 3
Pages: 575-588
DOI: 10.1109/TPDS.2018.2867222
ISSN: 1558-2183
Series: TPDS '19
Month: March
