From 7e8673debd2f8bc8ae639063c4aee71f774fc304 Mon Sep 17 00:00:00 2001
From: Alan Du
Date: Thu, 10 Dec 2020 12:26:02 -0500
Subject: [PATCH] Update DDP docs (#5046)

* Fix flake8 error to fix CI
* Correct weights-loading to use correct callbacks
* Fix dangling links

Co-authored-by: chaton
Co-authored-by: Jirka Borovec
---
 docs/source/multi_gpu.rst       | 4 ++--
 docs/source/weights_loading.rst | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/source/multi_gpu.rst b/docs/source/multi_gpu.rst
index 2ce66c9a71..def4781050 100644
--- a/docs/source/multi_gpu.rst
+++ b/docs/source/multi_gpu.rst
@@ -593,9 +593,9 @@ Below are the possible configurations we support.
 Implement Your Own Distributed (DDP) training
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.core.LightningModule.`.
+If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.init_ddp_connection`.
 
-If you also need to use your own DDP implementation, override: :meth:`pytorch_lightning.core.LightningModule.configure_ddp`.
+If you also need to use your own DDP implementation, override: :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.configure_ddp`.
 
 ----------
 
diff --git a/docs/source/weights_loading.rst b/docs/source/weights_loading.rst
index 5dc80b51e5..f22e355a09 100644
--- a/docs/source/weights_loading.rst
+++ b/docs/source/weights_loading.rst
@@ -46,7 +46,7 @@ You can customize the checkpointing behavior to monitor any quantity of your tra
 1. Calculate any metric or other quantity you wish to monitor, such as validation loss.
 2. Log the quantity using :func:`~pytorch_lightning.core.lightning.LightningModule.log` method, with a key such as `val_loss`.
 3. Initializing the :class:`~pytorch_lightning.callbacks.ModelCheckpoint` callback, and set `monitor` to be the key of your quantity.
-4. Pass the callback to `checkpoint_callback` :class:`~pytorch_lightning.trainer.Trainer` flag.
+4. Pass the callback to the `callbacks` :class:`~pytorch_lightning.trainer.Trainer` flag.
 
 .. code-block:: python