Update auto-opt docs (#6037)

* fix docs

* update on comments

* Apply suggestions from code review

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>

* Apply suggestions from code review

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>

* Apply suggestions from code review

Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>

* rm comment

* Update docs/source/common/lightning_module.rst

Co-authored-by: chaton <thomas@grid.ai>

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Rohit Gupta 2021-02-19 01:21:56 +05:30 committed by GitHub
parent c46c23a69f
commit b0074a471a
5 changed files with 108 additions and 68 deletions


@@ -72,7 +72,7 @@ Lightning is rigorously tested across multiple GPUs, TPUs, CPUs and against major
<details>
<summary>Current build statuses</summary>
<center>
| System / PyTorch ver. | 1.4 (min. req.)* | 1.5 | 1.6 | 1.7 (latest) | 1.8 (nightly) |
@@ -93,9 +93,9 @@ Lightning is rigorously tested across multiple GPUs, TPUs, CPUs and against major
<details>
<summary>Bleeding edge build status (1.2)</summary>
<center>
![CI base testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20base%20testing/badge.svg?branch=release%2F1.2-dev&event=push)
![CI complete testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=release%2F1.2-dev&event=push)
![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=release%2F1.2-dev&event=push)
@@ -121,13 +121,13 @@ pip install pytorch-lightning
<!-- following section will be skipped from PyPI description -->
#### Install with optional dependencies
```bash
pip install pytorch-lightning['extra']
```
#### Conda
```bash
conda install pytorch-lightning -c conda-forge
```
@@ -229,7 +229,7 @@ Here are some examples:
<details>
<summary>Highlighted feature code snippets</summary>
```python
# 8 GPUs
# no code changes needed
@@ -240,58 +240,58 @@ Here are some examples:
```
<summary>Train on TPUs without code changes</summary>
```python
# no code changes needed
trainer = Trainer(tpu_cores=8)
```
<summary>16-bit precision</summary>
```python
# no code changes needed
trainer = Trainer(precision=16)
```
<summary>Experiment managers</summary>
```python
from pytorch_lightning import loggers
# tensorboard
trainer = Trainer(logger=loggers.TensorBoardLogger('logs/'))
# weights and biases
trainer = Trainer(logger=loggers.WandbLogger())
# comet
trainer = Trainer(logger=loggers.CometLogger())
# mlflow
trainer = Trainer(logger=loggers.MLFlowLogger())
# neptune
trainer = Trainer(logger=loggers.NeptuneLogger())
# ... and dozens more
```
<summary>EarlyStopping</summary>
```python
from pytorch_lightning.callbacks import EarlyStopping

es = EarlyStopping(monitor='val_loss')
trainer = Trainer(callbacks=[es])
```
<summary>Checkpointing</summary>
```python
from pytorch_lightning.callbacks import ModelCheckpoint

checkpointing = ModelCheckpoint(monitor='val_loss')
trainer = Trainer(callbacks=[checkpointing])
```
<summary>Export to torchscript (JIT) (production use)</summary>
```python
# torchscript
autoencoder = LitAutoEncoder()
@@ -299,7 +299,7 @@ Here are some examples:
```
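A minimal end-to-end sketch (an illustration assuming the `LitAutoEncoder` defined earlier; the filename is arbitrary):

```python
import torch

# export the module to TorchScript and save it for production serving
autoencoder = LitAutoEncoder()
script = autoencoder.to_torchscript()
torch.jit.save(script, "model.pt")
```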
<summary>Export to ONNX (production use)</summary>
```python
# onnx
with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
@@ -315,6 +315,10 @@ For complex/professional level work, you have optional full control of the train
```python
class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx, optimizer_idx):
        # use_pl_optimizer=True (the default) returns optimizers wrapped for
        # precision and accelerator handling; pass False for the raw optimizers
        (opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)
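        # a hedged sketch of how a two-optimizer manual step might continue
        # (gen_loss / disc_loss are placeholders, not from the original README)
        gen_loss = ...
        opt_a.zero_grad()
        self.manual_backward(gen_loss, opt_a)
        opt_a.step()

        disc_loss = ...
        opt_b.zero_grad()
        self.manual_backward(disc_loss, opt_b)
        opt_b.step()
```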


@@ -841,7 +841,7 @@ The current step (does not reset each epoch)
hparams
~~~~~~~
-After calling `save_hyperparameters` anything passed to init() is available via hparams.
+After calling ``save_hyperparameters`` anything passed to ``__init__()`` is available via the ``hparams`` attribute.
.. code-block:: python
@@ -932,9 +932,85 @@ True if using TPUs
--------------
automatic_optimization
~~~~~~~~~~~~~~~~~~~~~~
When set to ``False``, Lightning does not automate the optimization process; you are responsible for stepping your own optimizers. However, Lightning still takes care of precision and any accelerators used.
.. code-block:: python
    def __init__(self):
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers(use_pl_optimizer=True)

        loss = ...
        self.manual_backward(loss, opt)
        opt.step()
        opt.zero_grad()
This is recommended only if using 2+ optimizers AND if you know how to perform the optimization procedure properly. Note that automatic optimization can still be used with multiple optimizers by relying on the ``optimizer_idx`` parameter. Manual optimization is most useful for research topics like reinforcement learning, sparse coding, and GAN research.
In the multi-optimizer case, ignore the ``optimizer_idx`` argument and use the optimizers directly:
.. code-block:: python
    def __init__(self):
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx, optimizer_idx):
        # use_pl_optimizer=True (the default) returns optimizers wrapped for
        # precision and accelerator handling; pass False for the raw optimizers
        (opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)

        gen_loss = ...
        opt_a.zero_grad()
        self.manual_backward(gen_loss, opt_a)
        opt_a.step()

        disc_loss = ...
        opt_b.zero_grad()
        self.manual_backward(disc_loss, opt_b)
        opt_b.step()
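The optimizers themselves still come from ``configure_optimizers``. A minimal sketch (the ``generator``/``discriminator`` attributes and the ``Adam`` settings are assumptions for illustration):

.. code-block:: python

    def configure_optimizers(self):
        # return one optimizer per model component; self.optimizers()
        # hands them back in the same order inside training_step
        opt_a = torch.optim.Adam(self.generator.parameters(), lr=1e-4)
        opt_b = torch.optim.Adam(self.discriminator.parameters(), lr=1e-4)
        return opt_a, opt_b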
--------------
example_input_array
~~~~~~~~~~~~~~~~~~~
Set and access ``example_input_array``, which is essentially a single batch of example input.
.. code-block:: python
    def __init__(self):
        self.example_input_array = ...
        self.generator = ...

    def on_train_epoch_end(self):
        # generate some images using the example_input_array
        gen_images = self.generator(self.example_input_array)
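Once set, features that need a sample input can pick it up automatically; for example, ONNX export (a minimal sketch, the filename is arbitrary):

.. code-block:: python

    # with example_input_array set, to_onnx needs no explicit input sample
    model.to_onnx('model.onnx', export_params=True)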
--------------
datamodule
~~~~~~~~~~
Set or access your datamodule.
.. code-block:: python
    def configure_optimizers(self):
        # note: len() of a dataloader gives the number of batches, not samples
        num_training_batches = len(self.datamodule.train_dataloader())
        ...
--------------
model_size
~~~~~~~~~~
Get the model file size (in megabytes) using ``self.model_size`` inside the LightningModule.
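For example (a minimal sketch):

.. code-block:: python

    def on_fit_start(self):
        # model_size is reported in megabytes
        print(f'Model size: {self.model_size:.3f} MB')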
--------------
Hooks
^^^^^
-This is the pseudocode to describe how all the hooks are called during a call to `.fit()`
+This is the pseudocode to describe how all the hooks are called during a call to ``.fit()``.
.. code-block:: python
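
    # an abbreviated sketch of the hook order (an illustration,
    # not the exhaustive list of hooks)
    def fit(...):
        on_fit_start()

        for epoch in epochs:
            on_train_epoch_start()

            for batch in train_dataloader():
                on_train_batch_start()
                training_step()
                backward()
                optimizer_step()
                on_train_batch_end()

            on_train_epoch_end()

        on_fit_end()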


@@ -9,7 +9,7 @@ Exporting to ONNX
-----------------
PyTorch Lightning provides a handy function to quickly export your model to ONNX format, which allows the model to be independent of PyTorch and run with ONNX Runtime.
-To export your model to ONNX format call the `to_onnx` function on your Lightning Module with the filepath and input_sample.
+To export your model to ONNX format, call the ``to_onnx`` function on your LightningModule with the filepath and input_sample.
.. code-block:: python
@@ -18,7 +18,7 @@ To export your model to ONNX format call the `to_onnx` function on your Lightning Module
    input_sample = torch.randn((1, 64))
    model.to_onnx(filepath, input_sample, export_params=True)
-You can also skip passing the input sample if the `example_input_array` property is specified in your LightningModule.
+You can also skip passing the input sample if the ``example_input_array`` property is specified in your LightningModule.
Once you have the exported model, you can run it on your ONNX runtime in the following way:
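A minimal sketch with ``onnxruntime`` (assuming the model above was exported to ``filepath`` with a ``(1, 64)`` input):

.. code-block:: python

    import numpy as np
    import onnxruntime

    # load the exported model and run one forward pass
    ort_session = onnxruntime.InferenceSession(filepath)
    input_name = ort_session.get_inputs()[0].name
    ort_inputs = {input_name: np.random.randn(1, 64).astype(np.float32)}
    ort_outs = ort_session.run(None, ort_inputs)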


@@ -330,43 +330,6 @@ Example::
    # default used by the Trainer
    trainer = Trainer(amp_level='O2')
automatic_optimization
^^^^^^^^^^^^^^^^^^^^^^
When set to False, Lightning does not automate the optimization process. This means you are responsible for your own optimizer behavior.
Example::
    def training_step(self, batch, batch_idx):
        # use_pl_optimizer=True (the default) returns optimizers wrapped for
        # precision and accelerator handling; pass False for the raw optimizers
        opt = self.optimizers(use_pl_optimizer=True)

        loss = ...
        self.manual_backward(loss, opt)
        opt.step()
        opt.zero_grad()
This is not recommended when using a single optimizer; instead, it's recommended when using 2+ optimizers
AND you are an expert user. Most useful for research like RL, sparse coding, and GAN research.
In the multi-optimizer case, ignore the optimizer_idx flag and use the optimizers directly.
Example::
    def training_step(self, batch, batch_idx, optimizer_idx):
        # use_pl_optimizer=True (the default) returns optimizers wrapped for
        # precision and accelerator handling; pass False for the raw optimizers
        (opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)

        gen_loss = ...
        self.manual_backward(gen_loss, opt_a)
        opt_a.step()
        opt_a.zero_grad()

        disc_loss = ...
        self.manual_backward(disc_loss, opt_b)
        opt_b.step()
        opt_b.zero_grad()
auto_scale_batch_size
^^^^^^^^^^^^^^^^^^^^^


@@ -258,16 +258,13 @@ Manual optimization
However, for certain research like GANs, reinforcement learning, or something with multiple optimizers
or an inner loop, you can turn off automatic optimization and fully control the training loop yourself.
-First, turn off automatic optimization:
-
-.. testcode::
-
-    trainer = Trainer(automatic_optimization=False)
-
-Now you own the train loop!
+Turn off automatic optimization and you control the train loop!
.. code-block:: python
    def __init__(self):
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx, optimizer_idx):
        # use_pl_optimizer=True (the default) returns optimizers wrapped for
        # precision and accelerator handling; pass False for the raw optimizers
        (opt_a, opt_b, opt_c) = self.optimizers(use_pl_optimizer=True)
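        # a hedged sketch of how the step might continue for one optimizer
        # (loss_a is a placeholder, not from the original docs)
        loss_a = ...
        opt_a.zero_grad()
        self.manual_backward(loss_a, opt_a)
        opt_a.step()
        # ...repeat the same pattern for opt_b and opt_c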