Update `configure_optimizers` docs (#7390)

* Update `configure_optimizers` docs

* Update pytorch_lightning/core/lightning.py
Carlos Mocholí 2021-05-06 10:39:01 +02:00 committed by GitHub
parent 651f93a69f
commit 6ad05d3338
1 changed file with 18 additions and 39 deletions


@@ -1123,6 +1123,24 @@ class LightningModule(
- **Tuple of dictionaries** as described above, with an optional ``"frequency"`` key.
- **None** - Fit will run without any optimizer.
Note:
The lr_dict is a dictionary which contains the scheduler and its associated configuration.
The default configuration is shown below.
.. code-block:: python
lr_dict = {
    'scheduler': lr_scheduler,  # The LR scheduler instance (required)
    # The unit of the scheduler's step size, could also be 'step'
    'interval': 'epoch',
    'frequency': 1,  # The frequency of the scheduler
    'monitor': 'val_loss',  # Metric for `ReduceLROnPlateau` to monitor
    'strict': True,  # Whether to crash the training if `monitor` is not found
    'name': None,  # Custom name for `LearningRateMonitor` to use
}
Only the ``"scheduler"`` key is required, the rest will be set to the defaults above.
Note:
The ``frequency`` value specified in a dict along with the ``optimizer`` key is an int corresponding
to the number of sequential batches optimized with the specific optimizer.
@@ -1148,33 +1166,6 @@ class LightningModule(
If an LR scheduler is specified for an optimizer using the ``lr_scheduler`` key in the above dict,
the scheduler will only be updated when its optimizer is being used.
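As a hedged sketch of the two notes above, assuming a hypothetical module with ``generator`` and ``discriminator`` submodules (the attribute names, learning rates and frequencies are illustrative only):
.. code-block:: python
import torch
from torch import nn
from pytorch_lightning import LightningModule


class TwoOptimizerModule(LightningModule):  # hypothetical example module
    def __init__(self):
        super().__init__()
        self.generator = nn.Linear(32, 32)
        self.discriminator = nn.Linear(32, 1)

    def configure_optimizers(self):
        gen_opt = torch.optim.Adam(self.generator.parameters(), lr=1e-4)
        dis_opt = torch.optim.Adam(self.discriminator.parameters(), lr=4e-4)
        dis_sch = torch.optim.lr_scheduler.StepLR(dis_opt, step_size=10)
        return (
            # `gen_opt` is used for 1 batch, then `dis_opt` for 5 batches, repeating
            {"optimizer": gen_opt, "frequency": 1},
            # `dis_sch` is only stepped while `dis_opt` is the optimizer in use
            {"optimizer": dis_opt, "frequency": 5, "lr_scheduler": dis_sch},
        )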
Note:
The lr_dict is a dictionary which contains the scheduler and its associated configuration.
The default configuration is shown below.
.. code-block:: python
lr_dict = {
    'scheduler': lr_scheduler,  # The LR scheduler instance (required)
    'interval': 'epoch',  # The unit of the scheduler's step size
    'frequency': 1,  # The frequency of the scheduler
    'reduce_on_plateau': False,  # For ReduceLROnPlateau scheduler
    'monitor': 'val_loss',  # Metric for ReduceLROnPlateau to monitor
    'strict': True,  # Whether to crash the training if `monitor` is not found
    'name': None,  # Custom name for LearningRateMonitor to use
}
Only the ``"scheduler"`` key is required, the rest will be set to the defaults above.
Note:
The ``"frequency"`` value is an ``int`` corresponding to the number of sequential batches optimized with the
specific optimizer. It should be given to none or to all of the optimizers.
There is a difference between passing multiple optimizers in a list and passing multiple optimizers in
dictionaries with a frequency of 1:
In the former case, all optimizers will operate on the given batch in each optimization step.
In the latter, only one optimizer will operate on the given batch at every step.
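A sketch of the contrast described in this note, using a hypothetical module with ``net_a`` and ``net_b`` submodules (names and learning rates are made up for the example):
.. code-block:: python
import torch
from torch import nn
from pytorch_lightning import LightningModule


class AlternatingModule(LightningModule):  # hypothetical example module
    def __init__(self):
        super().__init__()
        self.net_a = nn.Linear(32, 32)
        self.net_b = nn.Linear(32, 32)

    def configure_optimizers(self):
        opt_a = torch.optim.SGD(self.net_a.parameters(), lr=0.01)
        opt_b = torch.optim.SGD(self.net_b.parameters(), lr=0.01)
        # `return [opt_a, opt_b]` would make both optimizers step on every batch.
        # With explicit frequencies of 1, only one of them steps on a given batch:
        return (
            {"optimizer": opt_a, "frequency": 1},
            {"optimizer": opt_b, "frequency": 1},
        )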
Examples::
# most cases
@@ -1226,18 +1217,6 @@ class LightningModule(
at each training step.
- If you need to control how often those optimizers step or override the default ``.step()`` schedule,
override the :meth:`optimizer_step` hook.
- If you only want to call a learning rate scheduler every ``x`` step or epoch, or want to monitor a custom
metric, you can specify these in a lr_dict:
.. code-block:: python
lr_dict = {
    'scheduler': lr_scheduler,
    'interval': 'step',  # or 'epoch'
    'monitor': 'val_f1',
    'frequency': x,
}
"""
rank_zero_warn("`configure_optimizers` must be implemented to be used with the Lightning Trainer")