updated early stopping docs (#1410)

* remove incorrect comment in training_step

* added comment for on_batch_start in hooks.py

* update early stopping docs

* typo fix

* whitespace fix

* Apply suggestions from code review

* Update docs/source/early_stopping.rst

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Author: david-alexander-white
Date: 2020-04-08 05:38:53 -07:00 (committed by GitHub)
Parent: ddbf7de6dc
Commit: d8cbf8d60c
3 changed files with 15 additions and 10 deletions

docs/source/early_stopping.rst

@@ -1,15 +1,21 @@
 Early stopping
 ==============
-Default behavior
-----------------
+Stopping an epoch early
+-----------------------
+You can stop an epoch early by overriding :meth:`~pytorch_lightning.core.lightning.LightningModule.on_batch_start` to return `-1` when some condition is met.
+If you do this repeatedly, for every epoch you had originally requested, then this will stop your entire run.
+
+Default Epoch End Callback Behavior
+-----------------------------------
 By default early stopping will be enabled if `'val_loss'`
 is found in :meth:`~pytorch_lightning.core.lightning.LightningModule.validation_epoch_end`'s
 return dict. Otherwise training will proceed with early stopping disabled.
-Enable Early Stopping
----------------------
-There are two ways to enable early stopping.
+Enable Early Stopping using Callbacks on epoch end
+--------------------------------------------------
+There are two ways to enable early stopping using callbacks on epoch end.
 .. doctest::
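As a rough illustration of what the default `'val_loss'` monitoring does, here is a minimal, self-contained sketch of min-mode patience logic. The `EarlyStopper` class and its `step` method are hypothetical names invented for this sketch, not the pytorch_lightning API:

```python
# Minimal sketch of min-mode early stopping with patience,
# mirroring what a 'val_loss'-monitoring callback does.
# All names here are illustrative, not pytorch_lightning API.

class EarlyStopper:
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum change that counts as improvement
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        """Record one epoch's monitored value; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.wait = 0
            return False
        self.wait += 1
        return self.wait >= self.patience


# With patience=2, two consecutive non-improving epochs trigger the stop.
stopper = EarlyStopper(patience=2)
losses = [0.9, 0.8, 0.81, 0.82, 0.83]
stopped_at = next(i for i, loss in enumerate(losses) if stopper.step(loss))
# stopped_at == 3: the loss improved twice, then failed to improve for two epochs
```

The real callback adds a `mode` switch (min/max) and integrates with the trainer loop, but the patience bookkeeping above is the core of the behavior.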
@@ -39,8 +45,8 @@ is not defined.
 .. seealso::
     :class:`~pytorch_lightning.trainer.trainer.Trainer`
-Disable Early Stopping
-----------------------
+Disable Early Stopping with callbacks on epoch end
+--------------------------------------------------
 To disable early stopping pass ``False`` to the
 :paramref:`~pytorch_lightning.trainer.trainer.Trainer.early_stop_callback`.
 Note that ``None`` will not disable early stopping but will lead to the
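The ``None`` vs ``False`` distinction this hunk documents can be sketched in plain Python. ``resolve_early_stop`` and ``make_default`` are hypothetical names for this sketch; the actual Trainer logic differs in detail:

```python
# Sketch of the None / False / callback three-way choice for an
# early_stop_callback argument (illustrative, not the real Trainer code).

def resolve_early_stop(early_stop_callback, make_default):
    """Return the callback to use, or None when early stopping is disabled."""
    if early_stop_callback is False:
        return None                  # explicitly disabled
    if early_stop_callback is None:
        return make_default()        # default behavior: enabled if 'val_loss' is logged
    return early_stop_callback       # user-provided callback object

default_cb = object()
assert resolve_early_stop(False, lambda: default_cb) is None
assert resolve_early_stop(None, lambda: default_cb) is default_cb
```

The point the docs make is exactly the first two branches: ``None`` falls through to the default behavior rather than disabling anything.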

pytorch_lightning/core/hooks.py

@@ -51,6 +51,8 @@ class ModelHooks(torch.nn.Module):
     def on_batch_start(self, batch: Any) -> None:
         """Called in the training loop before anything happens for that batch.
+
+        If you return -1 here, you will skip training for the rest of the current epoch.
 
         :param batch:
         """
         # do something when the batch starts
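A loop that honors this return-value contract might look like the following sketch. ``run_epoch`` and its arguments are illustrative, not the actual pytorch_lightning trainer internals:

```python
# Sketch of a training loop that honors on_batch_start returning -1
# to skip the rest of the current epoch (illustrative only).

def run_epoch(batches, on_batch_start, train_fn):
    """Train on batches; stop the epoch early if the hook returns -1."""
    trained = 0
    for batch in batches:
        if on_batch_start(batch) == -1:
            break  # skip training for the rest of this epoch
        train_fn(batch)
        trained += 1
    return trained


def hook(batch):
    # Example condition: bail out of the epoch on a flagged batch.
    return -1 if batch.get("bad") else None

count = run_epoch(
    [{"x": 1}, {"x": 2}, {"bad": True}, {"x": 3}],
    hook,
    train_fn=lambda batch: None,
)
# count == 2: the third batch triggered the early exit
```

Note that a hook returning ``None`` (the usual case, since the real hook's return type is ``None``) compares unequal to ``-1``, so training proceeds normally.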

pytorch_lightning/core/lightning.py

@@ -221,9 +221,6 @@ class LightningModule(ABC, GradInformation, ModelIO, ModelHooks):
                     "hiddens": hiddens # remember to detach() this
                 }
-        You can also return a -1 instead of a dict to stop the current loop. This is useful
-        if you want to break out of the current training epoch early.
-
         Notes:
             The loss value shown in the progress bar is smoothed (averaged) over the last values,
             so it differs from the actual loss returned in train/validation step.