Update sharded install to latest fairscale release, add reasoning why fork required for sequential parallelism (#5380)
(cherry picked from commit ee8373110a)
This commit is contained in:
parent 5f3372871a
commit edfd6b2c93
@@ -654,7 +654,7 @@ To use Sharded Training, you need to first install FairScale using the command below.
 
 .. code-block:: bash
 
-    pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip
+    pip install fairscale
 
 
 .. code-block:: python
@@ -681,7 +681,7 @@ Reference: https://arxiv.org/abs/1811.06965
 
 .. note:: DDPSequentialPlugin is currently supported only for Pytorch 1.6.
 
-To get started, install FairScale using the command below.
+To get started, install FairScale using the command below. We install a specific branch which contains PyTorch related fixes for Sequential Parallelism.
 
 .. code-block:: bash
 
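For context on what the updated instructions enable: once ``fairscale`` is installed, Sharded Training is switched on purely through Trainer flags, with no changes to the model code. The sketch below is not part of this commit; it assumes the Lightning 1.1-era API (``accelerator="ddp"``, ``plugins="ddp_sharded"``) and a toy ``LitModel`` defined only for illustration, so flag names may differ in other releases.

.. code-block:: python

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl


    class LitModel(pl.LightningModule):
        """Tiny module used only to demonstrate the Trainer flags."""

        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)


    if __name__ == "__main__":
        data = DataLoader(
            TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))), batch_size=8
        )
        # "ddp_sharded" asks Lightning to wrap training with FairScale's ZeRO-style
        # optimizer-state sharding across the DDP processes.
        trainer = pl.Trainer(gpus=2, accelerator="ddp", plugins="ddp_sharded", max_epochs=1)
        trainer.fit(LitModel(), data)

The second hunk concerns Sequential Model Parallelism, which pipelines an ``nn.Sequential`` module across GPUs via the ``DDPSequentialPlugin`` mentioned in the note. The sketch below is likewise an illustration rather than part of the commit: the import path, the ``balance`` argument, and the requirement that the pipelined layers live in ``self.sequential_module`` reflect the 1.1-era plugin and should be treated as assumptions.

.. code-block:: python

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl

    # Import path as documented around the 1.1 release; treated as an assumption here.
    from pytorch_lightning.plugins.ddp_sequential_plugin import DDPSequentialPlugin


    class LitSequential(pl.LightningModule):
        """Toy module whose layers are pipelined across GPUs."""

        def __init__(self):
            super().__init__()
            # The plugin pipelines the layers stored in ``self.sequential_module``.
            self.sequential_module = nn.Sequential(
                nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 2)
            )

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self.sequential_module(x), y)

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)


    if __name__ == "__main__":
        data = DataLoader(
            TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))), batch_size=8
        )
        # ``balance`` lists how many sequential layers each GPU holds:
        # two layers on the first GPU, one on the second.
        trainer = pl.Trainer(
            gpus=2,
            accelerator="ddp",
            plugins=[DDPSequentialPlugin(balance=[2, 1])],
            max_epochs=1,
        )
        trainer.fit(LitSequential(), data)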