From 31fca1658d195bc108e531ffb5669ff17d04d086 Mon Sep 17 00:00:00 2001
From: Sean Naren
Date: Fri, 9 Jul 2021 12:27:18 +0100
Subject: [PATCH] [docs] Add NCCL environment variable docs (#8345)

* Add nccl env variable docs

* Wording

* Update docs/source/guides/speed.rst

Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>

Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
---
 docs/source/guides/speed.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/docs/source/guides/speed.rst b/docs/source/guides/speed.rst
index ece806558c..9a444f1f96 100644
--- a/docs/source/guides/speed.rst
+++ b/docs/source/guides/speed.rst
@@ -90,6 +90,26 @@ This by default comes with a performance hit, and can be disabled in most cases.
         plugins=DDPPlugin(find_unused_parameters=False),
     )
 
+When using DDP on a multi-node cluster, set NCCL parameters
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+`NCCL <https://developer.nvidia.com/nccl>`__ is the NVIDIA Collective Communications Library, used under the hood by PyTorch to handle communication across nodes and GPUs. Adjusting NCCL parameters has been reported to yield speedups, as seen in this `issue `__: a 30% speed improvement when training the Transformer XLM-RoBERTa and a 15% improvement when training with Detectron2.
+
+NCCL parameters can be adjusted via environment variables.
+
+.. note::
+
+    AWS and GCP already set default values for these on their clusters. This is typically useful for custom cluster setups.
+
+* `NCCL_NSOCKS_PERTHREAD <https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-nsocks-perthread>`__
+* `NCCL_SOCKET_NTHREADS <https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-socket-nthreads>`__
+* `NCCL_MIN_NCHANNELS <https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-min-nchannels>`__
+
+.. code-block:: bash
+
+    export NCCL_NSOCKS_PERTHREAD=4
+    export NCCL_SOCKET_NTHREADS=2
+
 Dataloaders
 ^^^^^^^^^^^
 When building your DataLoader set ``num_workers > 0`` and ``pin_memory=True`` (only for GPUs).
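
For reference, here is a minimal sketch of how the same NCCL variables could be set from Python before the ``Trainer`` is constructed, as an alternative to exporting them in a job script. The ``Trainer`` arguments (``gpus``, ``num_nodes``, ``accelerator``) are illustrative assumptions for a multi-node DDP run and are not part of the patch above; only ``DDPPlugin(find_unused_parameters=False)`` comes from the surrounding docs. NCCL reads these variables when it initializes, so they must be set before distributed training starts.

.. code-block:: python

    import os

    from pytorch_lightning import Trainer
    from pytorch_lightning.plugins import DDPPlugin

    # Set NCCL tuning parameters before any distributed/NCCL initialization.
    # Values mirror the bash example above and are illustrative, not prescriptive.
    os.environ.setdefault("NCCL_NSOCKS_PERTHREAD", "4")
    os.environ.setdefault("NCCL_SOCKET_NTHREADS", "2")

    # Hypothetical multi-node DDP setup; gpus, num_nodes and accelerator are assumptions.
    trainer = Trainer(
        gpus=8,
        num_nodes=4,
        accelerator="ddp",
        plugins=DDPPlugin(find_unused_parameters=False),
    )

In practice, exporting the variables in the cluster's launch script (as in the bash example in the patch) is usually preferable, since it guarantees every spawned process on every node sees the same values.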