From edfd6b2c931bed57b6aaa2251a57522115d89f72 Mon Sep 17 00:00:00 2001
From: Sean Naren
Date: Wed, 6 Jan 2021 15:02:13 +0000
Subject: [PATCH] Update sharded install to latest fairscale release, add
 reasoning why fork required for sequential parallelism (#5380)

(cherry picked from commit ee8373110aa89f1049d7ac53c5d491e7eba68cf1)
---
 docs/source/multi_gpu.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/multi_gpu.rst b/docs/source/multi_gpu.rst
index b822d25d6b..4a67460057 100644
--- a/docs/source/multi_gpu.rst
+++ b/docs/source/multi_gpu.rst
@@ -654,7 +654,7 @@ To use Sharded Training, you need to first install FairScale using the command b
 
 .. code-block:: bash
 
-    pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip
+    pip install fairscale
 
 .. code-block:: python
 
@@ -681,7 +681,7 @@ Reference: https://arxiv.org/abs/1811.06965
 
 .. note:: DDPSequentialPlugin is currently supported only for Pytorch 1.6.
 
-To get started, install FairScale using the command below.
+To get started, install FairScale using the command below. We install a specific branch which contains PyTorch related fixes for Sequential Parallelism.
 
 .. code-block:: bash
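
For context, the doc section touched by this patch follows the ``pip install fairscale`` step with a Python example enabling Sharded Training in the Trainer. Below is a minimal sketch of that usage against the PyTorch Lightning API of that era (~1.1); the ``ToyModel`` module, the toy dataset, and the specific Trainer arguments are illustrative assumptions and are not part of the patch.

.. code-block:: python

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl


    class ToyModel(pl.LightningModule):
        # Hypothetical minimal module used only to illustrate the Trainer call.
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return torch.nn.functional.mse_loss(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)


    def train_dataloader():
        # Random toy data, just to make the sketch self-contained.
        x = torch.randn(64, 32)
        y = torch.randn(64, 2)
        return DataLoader(TensorDataset(x, y), batch_size=8)


    # Sharded Training is enabled via the plugin alias; it requires FairScale
    # to be installed (the ``pip install fairscale`` command from the patch).
    trainer = pl.Trainer(gpus=2, accelerator="ddp", plugins="ddp_sharded", max_epochs=1)
    trainer.fit(ToyModel(), train_dataloader())

The Sequential Parallelism path (``DDPSequentialPlugin``) mentioned in the second hunk is separate from this sketch and, per the patch, still relies on a specific FairScale branch with PyTorch-related fixes.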