docs: use ref for anchor links, fix a few typo (#3486)
parent 6bcfa8b068
commit 951048a81e
@@ -66,16 +66,17 @@ A lot of good work has already been done in project mechanics (requirements/base
 1. If you find a bug please submit a github issue.

    - Make sure the title explains the issue.
    - Describe your setup, what you are trying to do, expected vs. actual behaviour. Please add configs and code samples.
    - Add details on how to reproduce the issue - a minimal test case is always best, colab is also great.
      Note, that the sample code shall be minimal and if needed with publicly available data.

-2. Try to fix it or recommend a solution. We highly recommend to use test-driven approach:
+2. Try to fix it or recommend a solution...
+   We highly recommend to use test-driven approach:

    - Convert your minimal code example to a unit/integration test with assert on expected results.
    - Start by debugging the issue... You can run just this particular test in your IDE and draft a fix.
    - Verify that your test case fails on the master branch and only passes with the fix applied.

 3. Submit a PR!

 _**Note**, even if you do not find the solution, sending a PR with a test covering the issue is a valid contribution and we can help you or finish it with you :]_
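The test-driven bullets above are easier to act on with a concrete shape in mind. A minimal sketch of such a regression test, assuming pytest and a hypothetical weighted-averaging bug (the scenario, numbers, and test name are illustrative, not from this commit):

```python
import torch


def test_weighted_average_matches_manual_computation():
    # Minimal reproduction distilled from a (hypothetical) bug report,
    # converted into a unit test with an assert on the expected result.
    values = torch.tensor([1.0, 2.0, 3.0])
    weights = torch.tensor([0.2, 0.3, 0.5])

    result = (values * weights).sum() / weights.sum()

    # Should fail on the master branch while the bug exists
    # and pass once the fix is applied.
    assert torch.allclose(result, torch.tensor(2.3))
```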
@@ -84,15 +85,14 @@ _**Note**, even if you do not find the solution, sending a PR with a test coveri
 1. Submit a github issue - describe what is the motivation of such feature (adding the use case or an example is helpful).
 2. Let's discuss to determine the feature scope.
-3. Submit a PR!
-   We recommend test driven approach to adding new features as well:
+3. Submit a PR! We recommend test driven approach to adding new features as well:

    - Write a test for the functionality you want to add.
    - Write the functional code until the test passes.

 4. Add/update the relevant tests!

-- [This PR](https://github.com/PyTorchLightning/pytorch-lightning/pull/2671) is a good example for adding a new metric, and [this one for a new logger](https://github.com/PyTorchLightning/pytorch-lightning/pull/2721)
+- [This PR](https://github.com/PyTorchLightning/pytorch-lightning/pull/2671) is a good example for adding a new metric, and [this one for a new logger](https://github.com/PyTorchLightning/pytorch-lightning/pull/2721).

 ### Test cases:
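The same test-first shape works for the feature workflow above. A sketch assuming pytest, with a hypothetical `rmse` metric standing in for whatever is being added (see the linked PRs for real examples):

```python
import torch


def rmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Step 2: write the functional code until the test passes.
    return torch.sqrt(torch.mean((pred - target) ** 2))


def test_rmse():
    # Step 1: write a test for the functionality you want to add.
    pred = torch.tensor([2.0, 4.0])
    target = torch.tensor([0.0, 4.0])
    # sqrt(mean([2^2, 0^2])) = sqrt(2)
    assert torch.allclose(rmse(pred, target), torch.tensor(2.0).sqrt())
```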
@@ -194,6 +194,7 @@ Note: if your computer does not have multi-GPU nor TPU these tests are skipped.
 This is useful if you do not test against all required dependency versions.

 **Docker:** Another option is utilize the [pytorch lightning cuda base docker image](https://hub.docker.com/repository/docker/pytorchlightning/pytorch_lightning/tags?page=1&name=cuda). You can then run:

 ```bash
 python -m pytest pytorch_lightning tests pl_examples -v --flake8
 ```
@@ -233,13 +234,13 @@ We welcome any useful contribution! For your convenience here's a recommended wo
 3. **How to rebase my PR?**

    We recommend creating a PR in separate branch other than `master`, especially if you plan submitting several changes and do not want to wait until the first one is resolved (we can work on them in parallel).

    First, make sure you have set [upstream](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork) by running:

    ```bash
    git remote add upstream https://github.com/PyTorchLightning/pytorch-lightning.git
    ```

    You'll know its set up right if you run `git remote -v` and see something similar to this:

    ```bash
@@ -248,8 +249,9 @@ We welcome any useful contribution! For your convenience here's a recommended wo
    upstream https://github.com/PyTorchLightning/pytorch-lightning.git (fetch)
    upstream https://github.com/PyTorchLightning/pytorch-lightning.git (push)
    ```

    Now you can update your master with upstream's master by running:

    ```bash
    git fetch --all --prune
    git checkout master
@@ -257,14 +259,16 @@ We welcome any useful contribution! For your convenience here's a recommended wo
    ```

    Finally, checkout your feature branch and rebase it with master before pushing up your feature branch:

    ```bash
    git checkout my-PR-branch
    git rebase master
    # follow git instructions to resolve conflicts
    git push -f
    ```

    Eventually, you can perform the rebasing directly from upstream after setting it up:

    ```bash
    git fetch --all --prune
    git rebase upstream/master
@@ -302,6 +306,7 @@ plrebase (){
 ```

 Now, you can:

 - clone your fork and set up upstream by running `plclone` from your terminal
 - fetch upstream and update your local master branch with it by running `plfetch`
 - rebase your feature branch (after running `plfetch`) by running `plrebase your-branch-name`
@@ -2,7 +2,7 @@

     from pytorch_lightning.trainer.trainer import Trainer

-.. _16-bit:
+.. _amp:

 16-bit training
 =================
@@ -1,4 +1,4 @@
-.. _data-modules:
+.. _datamodules:

 LightningDataModule
 ===================
@@ -3,14 +3,14 @@
     from pytorch_lightning.trainer.trainer import Trainer
     from pytorch_lightning.callbacks.early_stopping import EarlyStopping

-.. _early-stopping:
+.. _early_stopping:

 Early stopping
 ==============

 Stopping an epoch early
 -----------------------
-You can stop an epoch early by overriding :meth:`~pytorch_lightning.core.lightning.LightningModule.on_batch_start` to return `-1` when some condition is met.
+You can stop an epoch early by overriding :meth:`~pytorch_lightning.core.lightning.LightningModule.on_batch_start` to return ``-1`` when some condition is met.

 If you do this repeatedly, for every epoch you had originally requested, then this will stop your entire run.
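For reference, a minimal sketch of the pattern this hunk documents, against the 0.9-era API implied by the surrounding imports (the stopping condition is illustrative):

```python
from pytorch_lightning.core.lightning import LightningModule


class MyModel(LightningModule):
    def on_batch_start(self, batch):
        # Returning -1 stops the current epoch early; if every remaining
        # epoch is stopped this way, the whole run ends.
        if getattr(self, "stop_this_epoch", False):
            return -1
```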
@@ -3,7 +3,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
     from pytorch_lightning.core.lightning import LightningModule

-.. _experiment-logging:
+.. _experiment_logging:

 Experiment Logging
 ==================
@@ -2,7 +2,7 @@

     from pytorch_lightning.trainer.trainer import Trainer

-.. _experiment-reporting:
+.. _experiment_reporting:

 Experiment Reporting
 =====================
@@ -2,7 +2,7 @@

     from pytorch_lightning.trainer.trainer import Trainer

-.. _fast-training:
+.. _fast_training:

 Fast Training
 =============
@@ -1,6 +1,6 @@
 .. _governance:

-Pytorch Lightning Governance | Persons of interest
+PyTorch Lightning Governance | Persons of interest
 ==================================================

 Leads
@@ -22,15 +22,15 @@ Hooks lifecycle
 Training set-up
 ^^^^^^^^^^^^^^^

-- :meth:`~pytorch_lightning.core.lightning.LightningModule.prepare_data`
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.setup`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.prepare_data`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.setup`
 - :meth:`~pytorch_lightning.core.lightning.LightningModule.init_ddp_connection`
 - :meth:`~pytorch_lightning.trainer.optimizers.TrainerOptimizersMixin.init_optimizers`
 - :meth:`~pytorch_lightning.core.lightning.LightningModule.configure_apex`
 - :meth:`~pytorch_lightning.core.lightning.LightningModule.configure_ddp`
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.train_dataloader`
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.test_dataloader`
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.val_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.train_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.test_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.val_dataloader`
 - :meth:`~pytorch_lightning.core.lightning.LightningModule.summarize`
 - :meth:`~pytorch_lightning.trainer.training_io.TrainerIOMixin.restore_weights`
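A minimal sketch of a `LightningDataModule` implementing the set-up hooks this list now points at, assuming the 0.9-era API (the MNIST dataset, path, and split sizes are illustrative):

```python
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
from pytorch_lightning.core.datamodule import LightningDataModule


class MNISTDataModule(LightningDataModule):
    def prepare_data(self):
        # Download once per node; do not assign state here.
        datasets.MNIST(".", train=True, download=True)

    def setup(self, stage=None):
        # Runs on every process: assign the train/val splits.
        full = datasets.MNIST(".", train=True, transform=transforms.ToTensor())
        self.train_set, self.val_set = random_split(full, [55000, 5000])

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=32)
```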
@@ -89,5 +89,8 @@ Test loop
 General hooks
 -------------

-.. automodule:: pytorch_lightning.core.hooks
+.. autoclass:: pytorch_lightning.core.hooks.ModelHooks
     :noindex:
+
+.. autoclass:: pytorch_lightning.core.hooks.DataHooks
+    :noindex:
@@ -11,7 +11,7 @@ PyTorch Lightning Documentation
    :name: start
    :caption: Getting started

-   new-project
+   new_project
    converting

 .. toctree::
@@ -27,7 +27,7 @@ PyTorch Lightning Documentation
    :name: pl_docs
    :caption: Lightning API

-   lightning-module
+   lightning_module
    trainer

 .. toctree::
@@ -84,7 +84,7 @@ PyTorch Lightning Documentation
    :name: Common Use Cases
    :caption: Common Use Cases

-   apex
+   amp
    slurm
    child_modules
    debugging
@@ -4,7 +4,7 @@
     from pytorch_lightning.core.datamodule import LightningDataModule
     from pytorch_lightning.trainer.trainer import Trainer

-.. _introduction-guide:
+.. _introduction_guide:

 #########################
 Step-by-step walk-through
@@ -510,7 +510,7 @@ Which will generate automatic tensorboard logs.

-But you can also use any of the `number of other loggers <loggers.rst>`_ we support.
+But you can also use any of the :ref:`number of other loggers <loggers>` we support.

 Train on CPU
@@ -562,7 +562,7 @@ Or multiple nodes
     trainer = Trainer(gpus=8, num_nodes=4, distributed_backend='ddp')
     trainer.fit(model, train_loader)

-Refer to the `distributed computing guide for more details <https://pytorch-lightning.readthedocs.io/en/stable/multi_gpu.html>`_.
+Refer to the :ref:`distributed computing guide for more details <multi_gpu>`.

 train on TPUs
 ^^^^^^^^^^^^^
@@ -967,7 +967,7 @@ you could do your own:
         return model

 Every single part of training is configurable this way.
-For a full list look at `LightningModule <lightning-module.rst>`_.
+For a full list look at :ref:`LightningModule <lightning_module>`.

 ----------------
@@ -1102,7 +1102,7 @@ would be the particular system and how it's trained (ie: A GAN or VAE or GPT).

     loss = perceptual_loss(x1, x2, x) + CE(out, x)

-In Lightning, this code is organized into a :ref:`lightning-module`.
+In Lightning, this code is organized into a :ref:`lightning_module`.

 Engineering code
 ================
@@ -1167,6 +1167,6 @@ spread all over files.
 This code gets specially complicated once you start doing multi-gpu training or needing info about
 the data to build your models.

-In Lightning this code is organized inside a :ref:`data-modules`.
+In Lightning this code is organized inside a :ref:`datamodules`.

 .. note:: DataModules are optional but encouraged, otherwise you can use standard DataModules
@@ -1,6 +1,7 @@
 .. role:: hidden
     :class: hidden-section
-.. _lightning-module:
+
+.. _lightning_module:

 LightningModule
 ===============
@@ -4,7 +4,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
     from pytorch_lightning.core.lightning import LightningModule

-.. _multi-gpu-training:
+.. _multi_gpu:

 Multi-GPU training
 ==================
@@ -213,7 +213,7 @@ Lightning sets these for you automatically, there's NO NEED to do this yourself.

 However, when using a cluster, Lightning will NOT set these flags (and you should not either).
 SLURM will set these for you.
-For more details see the `SLURM cluster guide <slurm.rst>`_.
+For more details see the :ref:`SLURM cluster guide <slurm>`.

 ----------
@@ -68,8 +68,8 @@ option of passing in multiple dataloaders back from each call.

 See the following for more details:

-- :meth:`~pytorch_lightning.core.LightningModule.val_dataloader`
-- :meth:`~pytorch_lightning.core.LightningModule.test_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.val_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.test_dataloader`

 .. testcode::
@@ -11,7 +11,7 @@
     import pytorch_lightning as pl
     from torch.utils.data import random_split

-.. _3-steps:
+.. _new_project:

 ####################
 Lightning in 2 steps
@@ -130,7 +130,7 @@ You can use your :class:`~pytorch_lightning.core.LightningModule` just like a Py

     model.anything_you_can_do_with_pytorch()

-More details in :ref:`lightning-module` docs.
+More details in :ref:`lightning_module` docs.

 ----------
@@ -187,7 +187,7 @@ Optional features
 TrainResult/EvalResult
 ======================
 Instead of returning the loss you can also use :class:`~pytorch_lightning.core.step_result.TrainResult` and :class:`~pytorch_lightning.core.step_result.EvalResult`, plain Dict objects that give you options for logging on every step and/or at the end of the epoch.
-It also allows logging to the progress bar (by setting prog_bar=True). Read more in :ref:`result`.
+It also allows logging to the progress bar (by setting prog_bar=True). Read more in :ref:`results`.

 .. code-block::
@@ -333,7 +333,7 @@ the :class:`~pytorch_lightning.trainer.Trainer`:
     # test
     trainer.test(datamodule=dm)

-DataModules are specifically useful for building models based on data. Read more on :ref:`data-modules`.
+DataModules are specifically useful for building models based on data. Read more on :ref:`datamodules`.

 ----------
@@ -418,21 +418,21 @@ Advanced Lightning Features
 Once you define and train your first Lightning model, you might want to try other cool features like

 - :ref:`loggers`
-- `Automatic checkpointing <https://pytorch-lightning.readthedocs.io/en/stable/weights_loading.html>`_
-- `Automatic early stopping <https://pytorch-lightning.readthedocs.io/en/stable/early_stopping.html>`_
-- `Add custom callbacks <https://pytorch-lightning.readthedocs.io/en/stable/callbacks.html>`_ (self-contained programs that can be reused across projects)
-- `Dry run mode <https://pytorch-lightning.readthedocs.io/en/stable/debugging.html#fast-dev-run>`_ (Hit every line of your code once to see if you have bugs, instead of waiting hours to crash on validation ;)
-- `Automatically overfit your model for a sanity test <https://pytorch-lightning.readthedocs.io/en/stable/debugging.html?highlight=overfit#make-model-overfit-on-subset-of-data>`_
-- `Automatic truncated-back-propagation-through-time <https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.trainer.training_loop.html?highlight=truncated#truncated-backpropagation-through-time>`_
-- `Automatically scale your batch size <https://pytorch-lightning.readthedocs.io/en/stable/training_tricks.html?highlight=batch%20size#auto-scaling-of-batch-size>`_
-- `Automatically find a good learning rate <https://pytorch-lightning.readthedocs.io/en/stable/lr_finder.html>`_
-- `Load checkpoints directly from S3 <https://pytorch-lightning.readthedocs.io/en/stable/weights_loading.html#checkpoint-loading>`_
-- `Profile your code for speed/memory bottlenecks <https://pytorch-lightning.readthedocs.io/en/stable/profiler.html>`_
-- `Scale to massive compute clusters <https://pytorch-lightning.readthedocs.io/en/stable/slurm.html>`_
-- `Use multiple dataloaders per train/val/test loop <https://pytorch-lightning.readthedocs.io/en/stable/multiple_loaders.html>`_
-- `Use multiple optimizers to do Reinforcement learning or even GANs <https://pytorch-lightning.readthedocs.io/en/stable/optimizers.html?highlight=multiple%20optimizers#use-multiple-optimizers-like-gans>`_
+- :ref:`Automatic checkpointing <weights_loading>`
+- :ref:`Automatic early stopping <early_stopping>`
+- :ref:`Add custom callbacks <callbacks>` (self-contained programs that can be reused across projects)
+- :ref:`Dry run mode <debugging:fast_dev_run>` (Hit every line of your code once to see if you have bugs, instead of waiting hours to crash on validation :)
+- :ref:`Automatically overfit your model for a sanity test <debugging:Make model overfit on subset of data>`
+- :ref:`Automatic truncated-back-propagation-through-time <trainer:truncated_bptt_steps>`
+- :ref:`Automatically scale your batch size <training_tricks:Auto scaling of batch size>`
+- :ref:`Automatically find a good learning rate <lr_finder>`
+- :ref:`Load checkpoints directly from S3 <weights_loading:Checkpoint Loading>`
+- :ref:`Profile your code for speed/memory bottlenecks <profiler>`
+- :ref:`Scale to massive compute clusters <slurm>`
+- :ref:`Use multiple dataloaders per train/val/test loop <multiple_loaders>`
+- :ref:`Use multiple optimizers to do Reinforcement learning or even GANs <optimizers:Use multiple optimizers (like GANs)>`

-Or read our :ref:`introduction-guide` to learn more!
+Or read our :ref:`introduction_guide` to learn more!

 -------------
@@ -1,4 +1,4 @@
-.. _production-inference:
+.. _production_inference:

 Inference in Production
 =======================
@@ -1,4 +1,4 @@
-.. _result:
+.. _results:

 Result
 ======
@@ -2,7 +2,7 @@

     from pytorch_lightning.trainer.trainer import Trainer

-.. _single-gpu:
+.. _single_gpu:

 Single GPU Training
 ===================
@@ -1,4 +1,4 @@
-.. _test-set:
+.. _test_set:

 Test set
 ========
@@ -2,7 +2,7 @@

     from pytorch_lightning.trainer.trainer import Trainer

-.. _training-tricks:
+.. _training_tricks:

 Training Tricks
 ================
@@ -4,7 +4,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
     from pytorch_lightning.core.lightning import LightningModule

-.. _weights-loading:
+.. _weights_loading:

 Saving and loading weights
 ==========================
@@ -138,7 +138,7 @@ class ProgressBarBase(Callback):
         """
         You should provide a way to enable the progress bar.
         The :class:`~pytorch_lightning.trainer.trainer.Trainer` will call this in e.g. pre-training
-        routines like the `learning rate finder <lr_finder.rst>`_ to temporarily enable and
+        routines like the :ref:`learning rate finder <lr_finder>` to temporarily enable and
         disable the main progress bar.
         """
         raise NotImplementedError
@@ -335,7 +335,7 @@ class LightningModule(ABC, DeviceDtypeModuleMixin, GradInformation, ModelIO, Mod
             return result

         See Also:
-            See the :ref:`multi-gpu-training` guide for more details.
+            See the :ref:`multi_gpu` guide for more details.
         """

     def training_epoch_end(
@@ -531,7 +531,7 @@ class LightningModule(ABC, DeviceDtypeModuleMixin, GradInformation, ModelIO, Mod
             return result

         See Also:
-            See the :ref:`multi-gpu-training` guide for more details.
+            See the :ref:`multi_gpu` guide for more details.
         """

     def validation_end(self, outputs):
@@ -732,7 +732,7 @@ class LightningModule(ABC, DeviceDtypeModuleMixin, GradInformation, ModelIO, Mod
             return result

         See Also:
-            See the :ref:`multi-gpu-training` guide for more details.
+            See the :ref:`multi_gpu` guide for more details.
         """

     def test_end(self, outputs):
@@ -261,7 +261,7 @@ Example::
     trainer = Trainer(auto_lr_find='my_lr_arg')

 .. note::
-    See the `learning rate finder guide <lr_finder.rst>`_
+    See the :ref:`learning rate finder guide <lr_finder>`.

 benchmark
 ^^^^^^^^^
@@ -402,8 +402,8 @@ Example::
 .. note:: this option does not apply to TPU. TPUs use ```ddp``` by default (over each core)

 See Also:
-    - `Multi-GPU training guide <multi_gpu.rst>`_
-    - `Multi-node (SLURM) guide <slurm.rst>`_
+    - :ref:`Multi-GPU training guide <multi_gpu>`.
+    - :ref:`Multi-node (SLURM) guide <slurm>`.

 early_stop_callback
 ^^^^^^^^^^^^^^^^^^^
@@ -499,7 +499,7 @@ Example::
     trainer = Trainer(gpus=[1, 4], num_nodes=4)

 See Also:
-    - `Multi-GPU training guide <multi_gpu.rst>`_
+    - :ref:`Multi-GPU training guide <multi_gpu>`.

 gradient_clip_val
 ^^^^^^^^^^^^^^^^^
@@ -584,7 +584,7 @@ Writes logs to disk this often.
 logger
 ^^^^^^

-`Logger <loggers.rst>`_ (or iterable collection of loggers) for experiment tracking.
+:ref:`Logger <loggers>` (or iterable collection of loggers) for experiment tracking.

 .. testcode::
@@ -820,7 +820,7 @@ profiler
 ^^^^^^^^
 To profile individual steps during training and assist in identifying bottlenecks.

-See the `profiler documentation <profiler.rst>`_. for more details.
+See the :ref:`profiler documentation <profiler>`. for more details.

 .. testcode::