Gossip
Authors: Daniel Jünger, Robin Kobus, Bertil Schmidt, Christian Hundt

Subjects: CUDA, Computer science, Gossip, Distributed computing, Transfer (computing), Server, Hash function, Overhead (computing), Throughput (business)

Description:
Nowadays, a growing number of servers and workstations feature an increasing number of GPUs. However, slow communication among GPUs can lead to poor application performance. Thus, there is a latent demand for efficient multi-GPU communication primitives on such systems. This paper focuses on the gather, scatter and all-to-all collectives, which are important operations for various algorithms including parallel sorting and distributed hashing. We present two distinct communication strategies (ring-based and flow-oriented) to generate transfer plans for their topology-aware implementation on NVLink-connected multi-GPU systems. We achieve a throughput of up to 526 GB/s for all-to-all and 148 GB/s for scatter/gather on a DGX-1 server with only a small memory overhead. Furthermore, we propose a cost-neutral alternative to the DGX-1 Volta topology that provides an expected higher throughput for the all-to-all collective while preserving the throughput in case of scatter/gather. Our Gossip library is freely available at https://github.com/Funatiq/gossip.
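To make the ring-based strategy mentioned above concrete, here is a minimal illustrative sketch of how a ring transfer plan for the all-to-all collective can be generated. This is not the gossip library's actual code or API (function and variable names are hypothetical); it only models the scheduling idea: each GPU forwards pending blocks to its ring neighbor each round, so a block travelling from GPU i to GPU j needs (j - i) mod k hops.

```python
# Hypothetical sketch of a ring-based all-to-all transfer plan.
# Names and structure are illustrative, not gossip's real API.

def ring_all_to_all_plan(num_gpus):
    """Return (plan, location): per-round transfers and final block placement.

    Each GPU i initially holds one data block destined for every GPU j.
    In every round, each GPU forwards all not-yet-delivered blocks to its
    ring neighbor (i + 1) % num_gpus, so the block (origin=i, dest=j)
    needs (j - i) % num_gpus hops and all blocks arrive after
    num_gpus - 1 rounds.
    """
    plan = []  # entries: (round, src_gpu, dst_gpu, (origin, dest))
    # location[(origin, dest)] = GPU currently holding that block
    location = {(i, j): i for i in range(num_gpus) for j in range(num_gpus)}
    for rnd in range(1, num_gpus):
        for (origin, dest), cur in list(location.items()):
            if cur != dest:  # block not yet home: forward along the ring
                nxt = (cur + 1) % num_gpus
                plan.append((rnd, cur, nxt, (origin, dest)))
                location[(origin, dest)] = nxt
    return plan, location

plan, final = ring_all_to_all_plan(4)
# After num_gpus - 1 rounds, every block rests on its destination GPU.
assert all(cur == dest for (_, dest), cur in final.items())
```

The real gossip planner additionally weighs the NVLink topology (link counts and placement on the DGX-1) when choosing routes, which a plain ring like this does not capture.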
year | journal | country | edition | language
---|---|---|---|---
2019-08-05 | Proceedings of the 48th International Conference on Parallel Processing | | |