Fix Configuring Learning Rate Schedulers (#1177)

* Update docs so users know the recommended way to configure learning rate schedulers.

* update list

* as note

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
authman 2020-03-19 08:22:29 -05:00 committed by GitHub
parent 01b8991c5a
commit 711892a0a2
2 changed files with 30 additions and 17 deletions

@@ -19,6 +19,20 @@ Every optimizer you use can be paired with any `LearningRateScheduler <https://p
def configure_optimizers(self):
    return [Adam(...), SGD(...)], [ReduceLROnPlateau(), LambdaLR()]

# Same as above with additional params passed to the first scheduler
def configure_optimizers(self):
    optimizers = [Adam(...), SGD(...)]
    schedulers = [
        {
            'scheduler': ReduceLROnPlateau(mode='max', patience=7),
            'monitor': 'val_recall',  # Default: val_loss
            'interval': 'epoch',
            'frequency': 1
        },
        LambdaLR()
    ]
    return optimizers, schedulers
Use multiple optimizers (like GANs)
-----------------------------------
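For context on the multiple-optimizer pattern this section introduces, a minimal sketch follows. It is not the example from the docs themselves: the ``GAN`` module, its ``generator``/``discriminator`` attributes, and the chosen schedulers are placeholders, and the ``training_step`` signature with ``optimizer_idx`` assumes the 0.7-era API this commit documents.

.. code-block:: python

    import torch
    from torch import nn
    from torch.optim import Adam
    from torch.optim.lr_scheduler import CosineAnnealingLR, ExponentialLR
    import pytorch_lightning as pl


    class GAN(pl.LightningModule):
        def __init__(self):
            super().__init__()
            # placeholder sub-networks; any two nn.Modules illustrate the pattern
            self.generator = nn.Linear(100, 28 * 28)
            self.discriminator = nn.Linear(28 * 28, 1)

        def training_step(self, batch, batch_idx, optimizer_idx):
            # with two optimizers, Lightning passes optimizer_idx so the step
            # can branch between generator and discriminator updates
            if optimizer_idx == 0:
                loss = ...  # generator loss (omitted)
            else:
                loss = ...  # discriminator loss (omitted)
            return {'loss': loss}

        def configure_optimizers(self):
            gen_opt = Adam(self.generator.parameters(), lr=2e-4)
            dis_opt = Adam(self.discriminator.parameters(), lr=2e-4)
            gen_sched = CosineAnnealingLR(gen_opt, T_max=10)  # stepped once per epoch
            dis_sched = ExponentialLR(dis_opt, gamma=0.99)
            # schedulers are paired with optimizers by position in the two lists
            return [gen_opt, dis_opt], [gen_sched, dis_sched]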

@@ -942,28 +942,27 @@ class LightningModule(ABC, GradInformation, ModelIO, ModelHooks):
dis_sched = CosineAnnealing(discriminator_opt, T_max=10)  # called every epoch
return [gen_opt, dis_opt], [gen_sched, dis_sched]
.. note:: Some things to note:

    - Lightning calls ``.backward()`` and ``.step()`` on each optimizer
      and learning rate scheduler as needed.
    - If you use 16-bit precision (``precision=16``), Lightning will automatically
      handle the optimizers for you.
    - If you use multiple optimizers, ``training_step`` will have an additional ``optimizer_idx`` parameter.
    - If you use LBFGS, Lightning handles the closure function automatically for you.
    - If you use multiple optimizers, gradients will be calculated only
      for the parameters of the current optimizer at each training step.
    - If you need to control how often those optimizers step, or override the
      default ``.step()`` schedule, override the ``optimizer_step`` hook
      (see the sketch below).
    - If you only want to call a learning rate scheduler every ``x`` steps or epochs,
      or want to monitor a custom metric, you can specify these in a dictionary:

      .. code-block:: python

          {
              'scheduler': lr_scheduler,
              'interval': 'step',  # or 'epoch'
              'monitor': 'val_f1',
              'frequency': x
          }
""" """
return Adam(self.parameters(), lr=1e-3) return Adam(self.parameters(), lr=1e-3)
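As a supplement to the note above, here is a hedged sketch of overriding the ``optimizer_step`` hook to control how often each optimizer steps. The signature shown assumes the 0.7-era hook (``epoch, batch_idx, optimizer, optimizer_idx, second_order_closure``); later Lightning releases changed it, and the every-other-batch discriminator rule is only an illustration.

.. code-block:: python

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       second_order_closure=None):
        # step the generator optimizer (index 0) on every batch
        if optimizer_idx == 0:
            optimizer.step()
            optimizer.zero_grad()

        # step the discriminator optimizer (index 1) only every second batch
        if optimizer_idx == 1 and batch_idx % 2 == 0:
            optimizer.step()
            optimizer.zero_grad()

Because Lightning calls this hook once per optimizer per batch, skipping ``optimizer.step()`` here changes the stepping cadence without touching the training loop itself.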