Add EarlyStop documentation (#245)
* Update Training Loop.md
* Update index.md
* Update README.md
* Update Training Loop.md
* Update Training Loop.md
parent eb268c4184
commit 3d16a686b3
@@ -324,6 +324,7 @@ tensorboard --logdir /some/path
- [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
- [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
- [Early stopping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#early-stopping)
- [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
- [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
- [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)
@@ -19,6 +19,24 @@ It can be useful to force training for a minimum number of epochs or limit to a

trainer = Trainer(min_nb_epochs=1, max_nb_epochs=1000)
```
---

#### Early stopping

To enable early stopping, define the callback and pass it to the trainer.

``` {.python}
from pytorch_lightning.callbacks import EarlyStopping

# DEFAULTS
early_stop_callback = EarlyStopping(
    monitor='val_loss',
    min_delta=0.00,
    patience=0,
    verbose=False,
    mode='auto'
)

trainer = Trainer(early_stop_callback=early_stop_callback)
```
---

#### Force disable early stop

Use this to turn off early stopping and run training to the [max_nb_epochs](#force-training-for-min-or-max-epochs) limit.
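The hunk above does not show how disabling is actually spelled. As a minimal sketch only, assuming this version of the Trainer treats a falsy `early_stop_callback` as "no early stopping" (an assumption, not confirmed by this diff), it could look like:

``` {.python}
# Sketch: assumes a falsy early_stop_callback disables the check
# in this Trainer version (not confirmed by this diff).
trainer = Trainer(early_stop_callback=False)
```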
@@ -120,6 +120,7 @@ Notice a few things about this flow:
- [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
- [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
- [Early stopping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#early-stopping)
- [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
- [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
- [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)