From 960a60743f5c66b42badc4b343b3dbd28312e9eb Mon Sep 17 00:00:00 2001
From: Jirka Borovec
Date: Tue, 16 Feb 2021 20:43:02 +0100
Subject: [PATCH] fix fairscale compatible with PT 1.8 (#5996)

* try to extend fairscale available

* 1.2
---
 docs/source/advanced/multi_gpu.rst | 2 +-
 requirements/extra.txt             | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/advanced/multi_gpu.rst b/docs/source/advanced/multi_gpu.rst
index b235e6a458..19e552f351 100644
--- a/docs/source/advanced/multi_gpu.rst
+++ b/docs/source/advanced/multi_gpu.rst
@@ -685,7 +685,7 @@ To get started, install FairScale using the command below. We install a specific
 
 .. code-block:: bash
 
-    pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip
+    pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.2.0.zip
 
 To use Sequential Model Parallelism, you must define a :class:`nn.Sequential <torch.nn.Sequential>` module that defines the layers you wish to parallelize across GPUs. This should be kept within the ``sequential_module`` variable within your ``LightningModule`` like below.
 
diff --git a/requirements/extra.txt b/requirements/extra.txt
index 1654f05039..0e7dffbcb3 100644
--- a/requirements/extra.txt
+++ b/requirements/extra.txt
@@ -7,4 +7,4 @@ torchtext>=0.5, <0.7  # TODO: temporary fix fix for compatibility
 onnx>=1.7.0
 onnxruntime>=1.3.0
 hydra-core>=1.0
-https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip
+https://github.com/PyTorchLightning/fairscale/archive/pl_1.2.0.zip
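
The docs hunk above says the layers to parallelize should live in a ``sequential_module`` attribute of the ``LightningModule`` "like below", but the example itself sits outside this hunk's context. A minimal sketch of that pattern follows; the ``MyModel`` name and the layer sizes are illustrative assumptions, not taken from the patch:

.. code-block:: python

    import torch.nn as nn
    import pytorch_lightning as pl

    class MyModel(pl.LightningModule):  # hypothetical module name
        def __init__(self):
            super().__init__()
            # Sequential Model Parallelism partitions this nn.Sequential
            # across the available GPUs; keep the layers to split here.
            self.sequential_module = nn.Sequential(
                nn.Linear(32, 32),  # illustrative layer sizes
                nn.ReLU(),
                nn.Linear(32, 2),
            )

        def forward(self, x):
            return self.sequential_module(x)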