Add labels to sphinx docs (#2964)

* Add label

* add ref

* add ref

* add ref

* add label

* add label

* add label

* add label

* Update fast_training.rst

* label

* label

* label

* label

* label

* label

* label

* label

* label

* label

* label

* Update performance.rst

* Update production_inference.rst

* Update profiler.rst

* Update results.rst

* Update sequences.rst

* Update single_gpu.rst

* Update slurm.rst

* Update test_set.rst

* Update tpu.rst

* Update trainer.rst

* Update training_tricks.rst

* Update transfer_learning.rst

* Update weights_loading.rst

* Update governance.rst

* Update hooks.rst

* Update bolts.rst

* Update child_modules.rst

* Update hyperparameters.rst

* Update transfer_learning.rst
edenlightning 2020-08-13 18:56:51 -04:00 committed by GitHub
parent b7fc805dcf
commit 5e7ae348b4
32 changed files with 52 additions and 7 deletions
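The changes below all follow one pattern: an explicit Sphinx label (`.. _name:`) is added immediately above each page's top-level heading so other pages can cross-reference it. As a hedged sketch of how such a label is used (the `_16-bit` target is taken from the first hunk; the referencing sentence is hypothetical), a `:ref:` role anywhere in the same Sphinx project resolves to the labeled section:

```rst
.. _16-bit:

16-bit training
===============

.. elsewhere, in any other .rst file of the same project:

See :ref:`16-bit` for mixed-precision training.
```

An explicit label keeps links stable if the heading text is later renamed; `:ref:` with no custom text renders the target section's title.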

View File

@@ -2,6 +2,7 @@
from pytorch_lightning.trainer.trainer import Trainer
.. _16-bit:
16-bit training
=================

View File

@@ -16,7 +16,6 @@
def val_dataloader():
pass
Child Modules
-------------
Research projects tend to test different approaches to the same dataset.

View File

@@ -1,3 +1,5 @@
.. _data-modules:
LightningDataModule
===================
A datamodule is a shareable, reusable class that encapsulates all the steps needed to process data:

View File

@@ -1,6 +1,8 @@
.. testsetup:: *
from pytorch_lightning.trainer.trainer import Trainer
.. _debugging:
Debugging
=========
@@ -129,4 +131,4 @@ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)
.. testcode::
# DEFAULT
trainer = Trainer(num_sanity_val_steps=2)

View File

@@ -3,6 +3,7 @@
from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
.. _early-stopping:
Early stopping
==============

View File

@@ -3,6 +3,7 @@
from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.core.lightning import LightningModule
.. _experiment-logging:
Experiment Logging
==================

View File

@@ -2,6 +2,7 @@
from pytorch_lightning.trainer.trainer import Trainer
.. _experiment-reporting:
Experiment Reporting
=====================

View File

@@ -2,6 +2,7 @@
from pytorch_lightning.trainer.trainer import Trainer
.. _fast-training:
Fast Training
=============

View File

@@ -1,3 +1,5 @@
.. _governance:
Pytorch Lightning Governance | Persons of interest
==================================================

View File

@@ -1,3 +1,5 @@
.. _hooks:
Model Hooks
===========
@@ -88,4 +90,4 @@ General hooks
-------------
.. automodule:: pytorch_lightning.core.hooks
:noindex:

View File

@@ -7,7 +7,6 @@
import sys
sys.argv = ['foo']
Hyperparameters
---------------
Lightning has utilities to interact seamlessly with the command line ArgumentParser

View File

@@ -3,6 +3,7 @@
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.trainer.trainer import Trainer
.. _introduction-guide:
Step-by-step walk-through
=========================

View File

@@ -1,5 +1,6 @@
.. role:: hidden
:class: hidden-section
.. _lightning-module:
LightningModule
===============

View File

@@ -6,6 +6,8 @@
.. role:: hidden
:class: hidden-section
.. _loggers:
Loggers
===========

View File

@@ -2,6 +2,8 @@
from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.core.lightning import LightningModule
.. _lr_finder:
Learning Rate Finder
--------------------

View File

@@ -5,6 +5,8 @@
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.metrics import TensorMetric, NumpyMetric
.. _metrics:
Metrics
=======
This is a general package for PyTorch Metrics. These can also be used with regular non-lightning PyTorch code.

View File

@@ -2,6 +2,8 @@
from pytorch_lightning.core.lightning import LightningModule
.. _multiple_loaders:
Multiple Datasets
=================
Lightning supports multiple dataloaders in a few ways.

View File

@@ -7,6 +7,7 @@
from torch.nn import functional as F
from torch.utils.data import DataLoader
.. _quick-start:
Quick Start
===========

View File

@@ -1,3 +1,5 @@
.. _optimizers:
Optimization
===============

View File

@@ -1,3 +1,5 @@
.. _performance:
Fast Performance
================
Here are some best practices to increase your performance.

View File

@@ -1,3 +1,5 @@
.. _production-inference:
Inference in Production
=======================
PyTorch Lightning eases the process of deploying models into production.

View File

@@ -1,6 +1,7 @@
.. role:: hidden
:class: hidden-section
.. _profiler:
Performance and Bottleneck Profiler
===================================

View File

@@ -1,3 +1,5 @@
.. _result:
Result
======
Lightning has two results objects `TrainResult` and `EvalResult`.

View File

@@ -2,6 +2,8 @@
from torch.utils.data import IterableDataset
from pytorch_lightning.trainer.trainer import Trainer
.. _sequences:
Sequential Data
================

View File

@@ -2,6 +2,8 @@
from pytorch_lightning.trainer.trainer import Trainer
.. _single-gpu:
Single GPU Training
===================
Make sure you are running on a machine that has at least one GPU. Lightning handles all the NVIDIA flags for you,
@@ -11,4 +13,4 @@ there's no need to set them yourself.
:skipif: torch.cuda.device_count() < 1
# train on 1 GPU (using dp mode)
trainer = Trainer(gpus=1)

View File

@@ -1,6 +1,8 @@
.. testsetup:: *
from pytorch_lightning.trainer.trainer import Trainer
.. _slurm:
Computing cluster (SLURM)
=========================

View File

@@ -1,3 +1,5 @@
.. _test-set:
Test set
========
Lightning forces the user to run the test set separately to make sure it isn't evaluated by mistake.

View File

@@ -1,3 +1,5 @@
.. _tpu:
TPU support
===========

View File

@@ -1,6 +1,8 @@
.. role:: hidden
:class: hidden-section
.. _trainer:
Trainer
=======
.. automodule:: pytorch_lightning.trainer

View File

@@ -2,6 +2,7 @@
from pytorch_lightning.trainer.trainer import Trainer
.. _training-tricks:
Training Tricks
================

View File

@@ -1,7 +1,7 @@
.. testsetup:: *
from pytorch_lightning.core.lightning import LightningModule
Transfer Learning
-----------------
@@ -115,4 +115,4 @@ Here's a model that uses `Huggingface transformers <https://github.com/huggingfa
h_cls = h[:, 0]
logits = self.W(h_cls)
return logits, attn

View File

@@ -4,6 +4,7 @@
from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.core.lightning import LightningModule
.. _weights-loading:
Saving and loading weights
==========================