.. testsetup:: *

    from pytorch_lightning.callbacks.early_stopping import EarlyStopping

.. _early_stopping:

##############
Early Stopping
##############

.. raw:: html

    <video width="50%" max-width="400px" controls
    poster="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/pl_docs/trainer_flags/yt_thumbs/thumb_earlystop.png"
    src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/pl_docs/yt/Trainer+flags+19-+early+stopping_1.mp4"></video>

***********************
Stopping an Epoch Early
***********************

You can stop and skip the rest of the current epoch early by overriding :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_train_batch_start` to return ``-1`` when some condition is met.
If you do this repeatedly, for every epoch you had originally requested, this will stop your entire training.
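
For example, a minimal sketch, where ``should_skip_epoch`` is a hypothetical flag that you would set yourself when your condition is met:

.. code-block:: python

    from pytorch_lightning import LightningModule


    class LitModel(LightningModule):
        def __init__(self):
            super().__init__()
            # hypothetical flag; set it to True when your stop condition is met
            self.should_skip_epoch = False

        def on_train_batch_start(self, batch, batch_idx):
            # returning -1 here skips the rest of the current epoch
            if self.should_skip_epoch:
                return -1
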
**********************
EarlyStopping Callback
**********************

The :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping` callback can be used to monitor a metric and stop the training when no improvement is observed.

To enable it:

- Import the :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping` callback.
- Log the metric you want to monitor using the :meth:`~pytorch_lightning.core.module.LightningModule.log` method.
- Init the callback, and set ``monitor`` to the logged metric of your choice.
- Set the ``mode`` based on whether the monitored metric should be minimized or maximized.
- Pass the :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping` callback to the :class:`~pytorch_lightning.trainer.trainer.Trainer` ``callbacks`` flag.

.. code-block:: python

    from pytorch_lightning.callbacks.early_stopping import EarlyStopping


    class LitModel(LightningModule):
        def validation_step(self, batch, batch_idx):
            loss = ...
            self.log("val_loss", loss)


    model = LitModel()
    trainer = Trainer(callbacks=[EarlyStopping(monitor="val_loss", mode="min")])
    trainer.fit(model)

You can customize the callback's behaviour by changing its parameters.

2020-02-11 04:55:22 +00:00
2021-01-26 09:44:54 +00:00
.. testcode ::
2020-02-11 04:55:22 +00:00
2021-07-30 12:10:15 +00:00
early_stop_callback = EarlyStopping(monitor="val_accuracy", min_delta=0.00, patience=3, verbose=False, mode="max")
2020-10-07 17:01:50 +00:00
trainer = Trainer(callbacks=[early_stop_callback])
Additional parameters that stop training at extreme points:

- ``stopping_threshold``: Stops training immediately once the monitored quantity reaches this threshold.
  It is useful when we know that going beyond a certain optimal value does not further benefit us.
- ``divergence_threshold``: Stops training as soon as the monitored quantity becomes worse than this threshold.
  When reaching a value this bad, we believe the model cannot recover anymore and it is better to stop early and run with different initial conditions.
- ``check_finite``: When turned on, it stops training if the monitored metric becomes NaN or infinite.
- ``check_on_train_epoch_end``: When turned on, it checks the metric at the end of a training epoch. Use this only when you are monitoring a metric logged within
  training-specific hooks on epoch level.
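
For instance, a sketch combining these safeguards (the threshold values here are purely illustrative):

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks.early_stopping import EarlyStopping

    # Stop immediately once val_loss drops below 0.05, give up if it ever
    # exceeds 5.0, and abort if the monitored metric becomes NaN or infinite.
    early_stop_callback = EarlyStopping(
        monitor="val_loss",
        mode="min",
        stopping_threshold=0.05,
        divergence_threshold=5.0,
        check_finite=True,
    )
    trainer = Trainer(callbacks=[early_stop_callback])
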
In case you need early stopping in a different part of training, subclass :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping`
and change where it is called:

.. testcode::

    class MyEarlyStopping(EarlyStopping):
        def on_validation_end(self, trainer, pl_module):
            # override this to disable early stopping at the end of val loop
            pass

        def on_train_end(self, trainer, pl_module):
            # instead, do it at the end of training loop
            self._run_early_stopping_check(trainer)

.. note::

    The :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping` callback runs
    at the end of every validation epoch by default. However, the frequency of validation
    can be modified by setting various parameters in the :class:`~pytorch_lightning.trainer.trainer.Trainer`,
    for example :paramref:`~pytorch_lightning.trainer.trainer.Trainer.check_val_every_n_epoch`
    and :paramref:`~pytorch_lightning.trainer.trainer.Trainer.val_check_interval`.
    It must be noted that the ``patience`` parameter counts the number of
    validation checks with no improvement, and not the number of training epochs.
    Therefore, with parameters ``check_val_every_n_epoch=10`` and ``patience=3``, the trainer
    will perform at least 40 training epochs before being stopped.
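
As a sketch of that arithmetic (the values mirror the note above):

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks.early_stopping import EarlyStopping

    # Validation (and therefore the early-stopping check) runs every 10 epochs.
    # With patience=3, stopping requires 3 consecutive checks without
    # improvement after the best one, so the earliest stop is at epoch 40.
    trainer = Trainer(
        check_val_every_n_epoch=10,
        callbacks=[EarlyStopping(monitor="val_loss", mode="min", patience=3)],
    )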