From 710604838b2c7fc00b9f8a53561a79da0a741dd4 Mon Sep 17 00:00:00 2001
From: William Falcon
Date: Tue, 13 Oct 2020 06:42:33 -0400
Subject: [PATCH] Fixes broken LM links (#4117)

---
 docs/source/callbacks.rst          | 4 ++--
 docs/source/converting.rst         | 4 ++--
 docs/source/introduction_guide.rst | 4 ++--
 docs/source/logging.rst            | 2 +-
 docs/source/lr_finder.rst          | 4 ++--
 docs/source/new-project.rst        | 4 ++--
 docs/source/slurm.rst              | 2 +-
 docs/source/test_set.rst           | 2 +-
 8 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/docs/source/callbacks.rst b/docs/source/callbacks.rst
index f9ad593fb8..eff00fe966 100644
--- a/docs/source/callbacks.rst
+++ b/docs/source/callbacks.rst
@@ -22,7 +22,7 @@ Callback
 A callback is a self-contained program that can be reused across projects.
 
 Lightning has a callback system to execute callbacks when needed. Callbacks should capture NON-ESSENTIAL
-logic that is NOT required for your :class:`~pytorch_lightning.core.LightningModule` to run.
+logic that is NOT required for your :ref:`lightning_module` to run.
 
 Here's the flow of how the callback hooks are executed:
 
@@ -63,7 +63,7 @@ Example:
     trainer is init now
 
 We successfully extended functionality without polluting our super clean
-:class:`~pytorch_lightning.core.LightningModule` research code.
+:ref:`lightning_module` research code.
 
 -----------
 
diff --git a/docs/source/converting.rst b/docs/source/converting.rst
index 7ece12cb78..b292197588 100644
--- a/docs/source/converting.rst
+++ b/docs/source/converting.rst
@@ -16,7 +16,7 @@ To enable your code to work with Lightning, here's how to organize PyTorch into
 1. Move your computational code
 ===============================
 
-Move the model architecture and forward pass to your :class:`~pytorch_lightning.core.LightningModule`.
+Move the model architecture and forward pass to your :ref:`lightning_module`.
 
 .. testcode::
 
@@ -115,4 +115,4 @@ The test loop will not be used until you call.
 6. Remove any .cuda() or to.device() calls
 ==========================================
 
-Your :class:`~pytorch_lightning.core.LightningModule` can automatically run on any hardware!
+Your :ref:`lightning_module` can automatically run on any hardware!
diff --git a/docs/source/introduction_guide.rst b/docs/source/introduction_guide.rst
index f2f82eed1b..bcdb389760 100644
--- a/docs/source/introduction_guide.rst
+++ b/docs/source/introduction_guide.rst
@@ -57,7 +57,7 @@ The research
 The Model
 ---------
 
-The :class:`~pytorch_lightning.core.LightningModule` holds all the core research ingredients:
+The :ref:`lightning_module` holds all the core research ingredients:
 
 - The model
 
@@ -98,7 +98,7 @@ Let's first start with the model. In this case we'll design a 3-layer neural net
         x = F.log_softmax(x, dim=1)
         return x
 
-Notice this is a :class:`~pytorch_lightning.core.LightningModule` instead of a ``torch.nn.Module``. A LightningModule is
+Notice this is a :ref:`lightning_module` instead of a ``torch.nn.Module``. A LightningModule is
 equivalent to a pure PyTorch Module except it has added functionality. However, you can use it **EXACTLY** the same as you would a PyTorch Module.
 
 .. testcode::
diff --git a/docs/source/logging.rst b/docs/source/logging.rst
index 0ff856ef59..a1d16a4ddc 100644
--- a/docs/source/logging.rst
+++ b/docs/source/logging.rst
@@ -59,7 +59,7 @@ Lightning offers automatic log functionalities for logging scalars, or manual lo
 Automatic logging
 =================
 
-Use the :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method to log from anywhere in a :class:`~pytorch_lightning.core.LightningModule`.
+Use the :func:`~pytorch_lightning.core.lightning.LightningModule.log` method to log from anywhere in a :ref:`lightning_module`.
 
 .. code-block:: python
 
diff --git a/docs/source/lr_finder.rst b/docs/source/lr_finder.rst
index cf4dc8718f..2521ec73e3 100755
--- a/docs/source/lr_finder.rst
+++ b/docs/source/lr_finder.rst
@@ -37,10 +37,10 @@ initial lr.
 Using Lightning's built-in LR finder
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-To enable the learning rate finder, your :class:`~pytorch_lightning.core.LightningModule` needs to have a ``learning_rate`` or ``lr`` property.
+To enable the learning rate finder, your :ref:`lightning_module` needs to have a ``learning_rate`` or ``lr`` property.
 Then, set ``Trainer(auto_lr_find=True)`` during trainer construction,
 and then call ``trainer.tune(model)`` to run the LR finder. The suggested ``learning_rate``
-will be written to the console and will be automatically set to your :class:`~pytorch_lightning.core.LightningModule`,
+will be written to the console and will be automatically set to your :ref:`lightning_module`,
 which can be accessed via ``self.learning_rate`` or ``self.lr``.
 
 .. code-block:: python
 
diff --git a/docs/source/new-project.rst b/docs/source/new-project.rst
index 829717b8d4..b2e527eb01 100644
--- a/docs/source/new-project.rst
+++ b/docs/source/new-project.rst
@@ -122,7 +122,7 @@ Step 1: Define LightningModule
 
 **SYSTEM VS MODEL**
 
-A :class:`~pytorch_lightning.core.LightningModule` defines a *system* not a model.
+A :ref:`lightning_module` defines a *system* not a model.
 
 .. figure:: https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/pl_docs/model_system.png
    :width: 400
@@ -198,7 +198,7 @@ First, define the data however you want. Lightning just needs a :class:`~torch.u
     dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
     train_loader = DataLoader(dataset)
 
-Next, init the :class:`~pytorch_lightning.core.LightningModule` and the PyTorch Lightning :class:`~pytorch_lightning.trainer.Trainer`,
+Next, init the :ref:`lightning_module` and the PyTorch Lightning :class:`~pytorch_lightning.trainer.Trainer`,
 then call fit with both the data and model.
 
 .. code-block:: python
 
diff --git a/docs/source/slurm.rst b/docs/source/slurm.rst
index 9d3a4ab3c9..4aa8a4f420 100644
--- a/docs/source/slurm.rst
+++ b/docs/source/slurm.rst
@@ -17,7 +17,7 @@ Multi-node training
 -------------------
 To train a model using multiple nodes, do the following:
 
-1. Design your :class:`~pytorch_lightning.core.LightningModule`.
+1. Design your :ref:`lightning_module`.
 
 2. Enable ddp in the trainer
 
diff --git a/docs/source/test_set.rst b/docs/source/test_set.rst
index 24e1907ac6..8d8edce672 100644
--- a/docs/source/test_set.rst
+++ b/docs/source/test_set.rst
@@ -70,7 +70,7 @@ running the test set (ie: 16-bit, dp, ddp, etc...)
 Test with additional data loaders
 ---------------------------------
 You can still run inference on a test set even if the `test_dataloader` method hasn't been
-defined within your :class:`~pytorch_lightning.core.LightningModule` instance. This would be the case when your test data
+defined within your :ref:`lightning_module` instance. This would be the case when your test data
 is not available at the time your model was declared.
 
 .. code-block:: python
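
A note on the mechanism behind the fix: the ``:class:`` role resolves through Sphinx's Python object inventory, so it breaks silently when the path in the role no longer matches where the class is documented (here, ``LightningModule`` is documented under ``pytorch_lightning.core.lightning``, not directly under ``pytorch_lightning.core``, which is presumably why these links were dead). The ``:ref:`` role resolves against an explicit label instead, so it survives module reorganizations. A minimal sketch of the label/reference pair this patch relies on; the exact file and placement of the label (assumed here to be ``docs/source/lightning_module.rst``) are not shown in this diff and are an assumption::

    .. A label placed directly above a heading lets :ref: reuse the heading text as link text.
    .. _lightning_module:

    LightningModule
    ===============

    A LightningModule organizes your PyTorch code into a Trainer-ready system.

Any docs page can then link to that section with ``:ref:`lightning_module```, and Sphinx substitutes the section title as the link text, so the reference keeps working even if the class moves between modules again.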