Update sharded install to latest fairscale release, add reasoning why fork required for sequential parallelism (#5380)

(cherry picked from commit ee8373110a)
Sean Naren 2021-01-06 15:02:13 +00:00 committed by Jirka Borovec
parent 5f3372871a
commit edfd6b2c93
1 changed file with 2 additions and 2 deletions


@@ -654,7 +654,7 @@ To use Sharded Training, you need to first install FairScale using the command below.
 .. code-block:: bash
-    pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip
+    pip install fairscale
 .. code-block:: python
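For context, the surrounding docs enable Sharded Training through the Trainer once FairScale is installed. A minimal sketch of that usage (assuming the PL 1.1-era ``plugins='ddp_sharded'`` flag, which is not part of this diff) might look like:

.. code-block:: python

    # Minimal sketch: after `pip install fairscale`, Sharded Training is enabled
    # purely through the Trainer. The plugins='ddp_sharded' value is an assumption
    # about the PL 1.1-era API and is not shown in this diff.
    from pytorch_lightning import Trainer

    trainer = Trainer(gpus=4, accelerator="ddp", plugins="ddp_sharded")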
@@ -681,7 +681,7 @@ Reference: https://arxiv.org/abs/1811.06965
 .. note:: DDPSequentialPlugin is currently supported only for Pytorch 1.6.
-To get started, install FairScale using the command below.
+To get started, install FairScale using the command below. We install a specific branch which contains PyTorch related fixes for Sequential Parallelism.
 .. code-block:: bash
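The Sequential Parallelism docs that this hunk touches pair the branch install with ``DDPSequentialPlugin``. A rough sketch of that usage, assuming the PL 1.1-era import path, ``balance`` argument, and ``sequential_module`` convention (none of which appear in this diff), might look like:

.. code-block:: python

    # Rough sketch only: the import path, the `balance` argument, and the
    # requirement that the LightningModule exposes a `sequential_module`
    # are assumptions about the PL 1.1-era API, not part of this diff.
    from pytorch_lightning import Trainer
    from pytorch_lightning.plugins.ddp_sequential_plugin import DDPSequentialPlugin

    model = MyLightningModule()  # hypothetical module whose layers live in model.sequential_module
    trainer = Trainer(
        gpus=2,
        accelerator="ddp",
        plugins=[DDPSequentialPlugin(balance=[2, 1])],  # split the sequential layers 2/1 across the GPUs
    )
    trainer.fit(model)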