diff --git a/docs/source/advanced/multi_gpu.rst b/docs/source/advanced/multi_gpu.rst
index b235e6a458..19e552f351 100644
--- a/docs/source/advanced/multi_gpu.rst
+++ b/docs/source/advanced/multi_gpu.rst
@@ -685,7 +685,7 @@ To get started, install FairScale using the command below. We install a specific
 
 .. code-block:: bash
 
-    pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip
+    pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.2.0.zip
 
 To use Sequential Model Parallelism, you must define a :class:`nn.Sequential <torch.nn.Sequential>` module that defines the layers you wish to parallelize across GPUs. This should be kept within the ``sequential_module`` variable within your ``LightningModule`` like below.
 
diff --git a/requirements/extra.txt b/requirements/extra.txt
index 1654f05039..0e7dffbcb3 100644
--- a/requirements/extra.txt
+++ b/requirements/extra.txt
@@ -7,4 +7,4 @@ torchtext>=0.5, <0.7 # TODO: temporary fix fix for compatibility
 onnx>=1.7.0
 onnxruntime>=1.3.0
 hydra-core>=1.0
-https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip
+https://github.com/PyTorchLightning/fairscale/archive/pl_1.2.0.zip
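
For context on the docs line touched by this patch, below is a minimal sketch of the pattern it describes: the layers to be parallelized are wrapped in an ``nn.Sequential`` and assigned to the ``sequential_module`` attribute of the ``LightningModule``. The model name, layer sizes, and training logic here are illustrative assumptions, not taken from the docs page; the Trainer/plugin wiring is handled by the surrounding documentation and omitted here.

.. code-block:: python

    # Illustrative sketch only; class name and layer sizes are hypothetical.
    import torch
    from torch import nn
    import pytorch_lightning as pl


    class SequentialMNISTClassifier(pl.LightningModule):
        def __init__(self):
            super().__init__()
            # The layers to split across GPUs are kept in an nn.Sequential
            # assigned to the ``sequential_module`` attribute, as the docs require.
            self.sequential_module = nn.Sequential(
                nn.Linear(28 * 28, 512),
                nn.ReLU(),
                nn.Linear(512, 512),
                nn.ReLU(),
                nn.Linear(512, 10),
            )

        def forward(self, x):
            return self.sequential_module(x.view(x.size(0), -1))

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)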