diff --git a/docs/source/multi_gpu.rst b/docs/source/multi_gpu.rst
index 2ce66c9a71..def4781050 100644
--- a/docs/source/multi_gpu.rst
+++ b/docs/source/multi_gpu.rst
@@ -593,9 +593,9 @@ Below are the possible configurations we support.
 Implement Your Own Distributed (DDP) training
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.core.LightningModule.`.
+If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.init_ddp_connection`.
 
-If you also need to use your own DDP implementation, override: :meth:`pytorch_lightning.core.LightningModule.configure_ddp`.
+If you also need to use your own DDP implementation, override: :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.configure_ddp`.
 
 ----------
 
diff --git a/docs/source/weights_loading.rst b/docs/source/weights_loading.rst
index 5dc80b51e5..f22e355a09 100644
--- a/docs/source/weights_loading.rst
+++ b/docs/source/weights_loading.rst
@@ -46,7 +46,7 @@ You can customize the checkpointing behavior to monitor any quantity of your tra
 
 1. Calculate any metric or other quantity you wish to monitor, such as validation loss.
 2. Log the quantity using :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method, with a key such as `val_loss`.
 3. Initializing the :class:`~pytorch_lightning.callbacks.ModelCheckpoint` callback, and set `monitor` to be the key of your quantity.
-4. Pass the callback to `checkpoint_callback` :class:`~pytorch_lightning.trainer.Trainer` flag.
+4. Pass the callback to the `callbacks` :class:`~pytorch_lightning.trainer.Trainer` flag.
 
 .. code-block:: python
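
For reference, a minimal sketch of the usage described by the corrected step 4 above, assuming a quantity has been logged under the illustrative key `val_loss`:

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ModelCheckpoint

    # Monitor the quantity logged via self.log("val_loss", ...) in the LightningModule
    checkpoint_callback = ModelCheckpoint(monitor="val_loss")

    # Pass the callback through the Trainer's `callbacks` flag,
    # not the `checkpoint_callback` flag referenced by the old wording
    trainer = Trainer(callbacks=[checkpoint_callback])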