Fixes broken LM links (#4117)

William Falcon 2020-10-13 06:42:33 -04:00 committed by GitHub
parent 2d5a7f5e7d
commit 710604838b
8 changed files with 13 additions and 13 deletions

View File

@@ -22,7 +22,7 @@ Callback
A callback is a self-contained program that can be reused across projects.
Lightning has a callback system to execute callbacks when needed. Callbacks should capture NON-ESSENTIAL
-logic that is NOT required for your :class:`~pytorch_lightning.core.LightningModule` to run.
+logic that is NOT required for your :ref:`lightning_module` to run.
Here's the flow of how the callback hooks are executed:
@@ -63,7 +63,7 @@ Example:
trainer is init now
We successfully extended functionality without polluting our super clean
-:class:`~pytorch_lightning.core.LightningModule` research code.
+:ref:`lightning_module` research code.
-----------
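For readers following along, here is a minimal sketch of a callback like the one this page describes, consistent with the ``trainer is init now`` output shown above (hook and class names are illustrative and follow the standard ``Callback`` API; the exact example in the docs may differ):

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import Callback

    class MyPrintingCallback(Callback):

        def on_init_start(self, trainer):
            # called when the Trainer starts initializing
            print('Starting to init trainer!')

        def on_init_end(self, trainer):
            # called once the Trainer finishes initializing
            print('trainer is init now')

        def on_train_end(self, trainer, pl_module):
            # non-essential logic lives here, not in the LightningModule
            print('do something when training ends')

    trainer = Trainer(callbacks=[MyPrintingCallback()])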

View File

@@ -16,7 +16,7 @@ To enable your code to work with Lightning, here's how to organize PyTorch into
1. Move your computational code
===============================
-Move the model architecture and forward pass to your :class:`~pytorch_lightning.core.LightningModule`.
+Move the model architecture and forward pass to your :ref:`lightning_module`.
.. testcode::
@@ -115,4 +115,4 @@ The test loop will not be used until you call.
6. Remove any .cuda() or to.device() calls
==========================================
-Your :class:`~pytorch_lightning.core.LightningModule` can automatically run on any hardware!
+Your :ref:`lightning_module` can automatically run on any hardware!
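To make steps 1 and 6 above concrete, here is a hedged sketch of a LightningModule that holds the architecture and forward pass and contains no ``.cuda()`` or ``.to(device)`` calls (the class name and layer sizes are illustrative, not the exact example from the docs):

.. code-block:: python

    from torch import nn
    from torch.nn import functional as F
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):

        def __init__(self):
            super().__init__()
            # the architecture lives inside the LightningModule
            self.layer_1 = nn.Linear(28 * 28, 128)
            self.layer_2 = nn.Linear(128, 10)

        def forward(self, x):
            # no .cuda() or .to(device) here; Lightning handles device placement
            x = x.view(x.size(0), -1)
            x = F.relu(self.layer_1(x))
            return self.layer_2(x)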

View File

@@ -57,7 +57,7 @@ The research
The Model
---------
-The :class:`~pytorch_lightning.core.LightningModule` holds all the core research ingredients:
+The :ref:`lightning_module` holds all the core research ingredients:
- The model
@@ -98,7 +98,7 @@ Let's first start with the model. In this case we'll design a 3-layer neural net
x = F.log_softmax(x, dim=1)
return x
-Notice this is a :class:`~pytorch_lightning.core.LightningModule` instead of a ``torch.nn.Module``. A LightningModule is
+Notice this is a :ref:`lightning_module` instead of a ``torch.nn.Module``. A LightningModule is
equivalent to a pure PyTorch Module except it has added functionality. However, you can use it **EXACTLY** the same as you would a PyTorch Module.
.. testcode::
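The body of the ``testcode`` block above falls outside this hunk; as a hedged sketch only, a 3-layer net ending in the ``log_softmax`` call shown in the hunk could look like this (the class name and layer widths are illustrative):

.. code-block:: python

    import torch
    from torch import nn
    from torch.nn import functional as F
    import pytorch_lightning as pl

    class LitMNIST(pl.LightningModule):

        def __init__(self):
            super().__init__()
            # three fully-connected layers (widths are illustrative)
            self.layer_1 = nn.Linear(28 * 28, 128)
            self.layer_2 = nn.Linear(128, 256)
            self.layer_3 = nn.Linear(256, 10)

        def forward(self, x):
            x = x.view(x.size(0), -1)
            x = F.relu(self.layer_1(x))
            x = F.relu(self.layer_2(x))
            x = self.layer_3(x)
            x = F.log_softmax(x, dim=1)
            return x

    # usable EXACTLY like a plain torch.nn.Module
    net = LitMNIST()
    out = net(torch.rand(1, 1, 28, 28))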

View File

@@ -59,7 +59,7 @@ Lightning offers automatic log functionalities for logging scalars, or manual lo
Automatic logging
=================
-Use the :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method to log from anywhere in a :class:`~pytorch_lightning.core.LightningModule`.
+Use the :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method to log from anywhere in a :ref:`lightning_module`.
.. code-block:: python
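The body of the code block above is outside this hunk; a hedged sketch of the logging call it describes (the keyword arguments follow the ``self.log`` signature of this Lightning release; the surrounding class is illustrative):

.. code-block:: python

    from torch.nn import functional as F
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = F.cross_entropy(self(x), y)
            # self.log can be called from any method of the LightningModule
            self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
            return loss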

View File

@@ -37,10 +37,10 @@ initial lr.
Using Lightning's built-in LR finder
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-To enable the learning rate finder, your :class:`~pytorch_lightning.core.LightningModule` needs to have a ``learning_rate`` or ``lr`` property.
+To enable the learning rate finder, your :ref:`lightning_module` needs to have a ``learning_rate`` or ``lr`` property.
Then, set ``Trainer(auto_lr_find=True)`` during trainer construction,
and then call ``trainer.tune(model)`` to run the LR finder. The suggested ``learning_rate``
-will be written to the console and will be automatically set to your :class:`~pytorch_lightning.core.LightningModule`,
+will be written to the console and will be automatically set to your :ref:`lightning_module`,
which can be accessed via ``self.learning_rate`` or ``self.lr``.
.. code-block:: python
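The snippet below that code-block marker is outside this hunk; a hedged sketch of the workflow just described (``auto_lr_find`` and ``trainer.tune`` are as stated above; the model body is illustrative and assumes dataloaders are provided elsewhere):

.. code-block:: python

    import torch
    import pytorch_lightning as pl
    from pytorch_lightning import Trainer

    class LitModel(pl.LightningModule):

        def __init__(self, learning_rate=1e-3):
            super().__init__()
            # the LR finder looks for a `learning_rate` (or `lr`) attribute
            self.learning_rate = learning_rate

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=self.learning_rate)

    model = LitModel()
    trainer = Trainer(auto_lr_find=True)

    # runs the LR finder and writes the suggestion back to model.learning_rate
    trainer.tune(model)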

View File

@@ -122,7 +122,7 @@ Step 1: Define LightningModule
**SYSTEM VS MODEL**
-A :class:`~pytorch_lightning.core.LightningModule` defines a *system* not a model.
+A :ref:`lightning_module` defines a *system* not a model.
.. figure:: https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/pl_docs/model_system.png
:width: 400
@@ -198,7 +198,7 @@ First, define the data however you want. Lightning just needs a :class:`~torch.u
dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train_loader = DataLoader(dataset)
-Next, init the :class:`~pytorch_lightning.core.LightningModule` and the PyTorch Lightning :class:`~pytorch_lightning.trainer.Trainer`,
+Next, init the :ref:`lightning_module` and the PyTorch Lightning :class:`~pytorch_lightning.trainer.Trainer`,
then call fit with both the data and model.
.. code-block:: python
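Putting the two preceding lines together, a minimal sketch of the fit call (``LitModel`` is a placeholder for whichever LightningModule you defined; the dataset and loader come from the snippet quoted above):

.. code-block:: python

    import os

    from torch.utils.data import DataLoader
    from torchvision import transforms
    from torchvision.datasets import MNIST

    from pytorch_lightning import Trainer

    dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
    train_loader = DataLoader(dataset)

    model = LitModel()  # placeholder: your LightningModule
    trainer = Trainer()
    trainer.fit(model, train_loader)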

View File

@@ -17,7 +17,7 @@ Multi-node training
-------------------
To train a model using multiple nodes, do the following:
-1. Design your :class:`~pytorch_lightning.core.LightningModule`.
+1. Design your :ref:`lightning_module`.
2. Enable ddp in the trainer
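A hedged sketch of step 2 (the exact Trainer arguments, e.g. ``accelerator='ddp'`` versus the older ``distributed_backend='ddp'``, depend on the Lightning version; node and GPU counts below are illustrative):

.. code-block:: python

    from pytorch_lightning import Trainer

    # 8 GPUs per node x 4 nodes = 32 DDP processes
    trainer = Trainer(gpus=8, num_nodes=4, accelerator='ddp')
    trainer.fit(model, train_loader)  # model / train_loader defined as usual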

View File

@@ -70,7 +70,7 @@ running the test set (ie: 16-bit, dp, ddp, etc...)
Test with additional data loaders
---------------------------------
You can still run inference on a test set even if the `test_dataloader` method hasn't been
-defined within your :class:`~pytorch_lightning.core.LightningModule` instance. This would be the case when your test data
+defined within your :ref:`lightning_module` instance. This would be the case when your test data
is not available at the time your model was declared.
.. code-block:: python
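The body of that final code block is outside this hunk; as a hedged sketch, passing a loader built after the model was declared straight to ``.test()`` might look like this (the ``test_dataloaders`` keyword is assumed from the Trainer API of this Lightning release; ``test_dataset`` is a placeholder):

.. code-block:: python

    from torch.utils.data import DataLoader

    # the test loader is built after the model was declared
    test_loader = DataLoader(test_dataset, batch_size=32)

    # pass it directly to .test(); no test_dataloader() method is required
    trainer.test(model, test_dataloaders=test_loader)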