The Lightning training loop handles everything except the actual computations of your model. To decide what happens in your training loop, define the [training_step function](https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/#training_step).
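For example, a minimal `training_step` might look like the sketch below (assuming a `LightningModule` with a `forward` method and a classification target; the module name `CoolModel` and the cross-entropy loss are illustrative):
``` {.python}
import torch.nn.functional as F
import pytorch_lightning as pl

class CoolModel(pl.LightningModule):
    def training_step(self, batch, batch_nb):
        # called by the Trainer for every training batch;
        # Lightning handles the backward pass and optimizer step
        x, y = batch
        y_hat = self.forward(x)
        loss = F.cross_entropy(y_hat, y)
        return {'loss': loss}
```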
Below are all the things Lightning automates for you in the training loop.
---
#### Accumulated gradients
Gradient accumulation runs K small batches of size N before doing a backward pass, giving an effective batch size of K x N.
``` {.python}
# DEFAULT (ie: no accumulated grads)
trainer = Trainer(accumulate_grad_batches=1)
```
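To actually accumulate, pass a value greater than 1 (the value 4 below is just an example):
``` {.python}
# accumulate gradients over 4 batches (effective batch size = 4 x N)
trainer = Trainer(accumulate_grad_batches=4)
```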
---
#### Force training for min or max epochs
It can be useful to force training for a minimum number of epochs or limit it to a maximum number of epochs.
``` {.python}
# DEFAULT
trainer = Trainer(min_nb_epochs=1, max_nb_epochs=1000)
```
---
#### Early stopping
To enable early stopping, define the callback and pass it to the Trainer.
``` {.python}
from pytorch_lightning.callbacks import EarlyStopping
# DEFAULTS
early_stop_callback = EarlyStopping(
    monitor='val_loss',
    min_delta=0.00,
    patience=0,
    verbose=False,
    mode='auto'
)
trainer = Trainer(early_stop_callback=early_stop_callback)
```
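The monitored quantity (`val_loss` above) has to be produced by your validation loop. A rough sketch of one way to do that, assuming the `validation_step`/`validation_end` hooks and the same illustrative `CoolModel` as above:
``` {.python}
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class CoolModel(pl.LightningModule):
    def validation_step(self, batch, batch_nb):
        x, y = batch
        return {'val_loss': F.cross_entropy(self.forward(x), y)}

    def validation_end(self, outputs):
        # average the per-batch losses; the 'val_loss' key is what
        # the EarlyStopping callback above monitors
        avg_loss = torch.stack([o['val_loss'] for o in outputs]).mean()
        return {'val_loss': avg_loss}
```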
---
#### Force disable early stop
Use this to turn off early stopping and run training until [max_nb_epochs](#force-training-for-min-or-max-epochs) is reached.
``` {.python}
# DEFAULT
trainer = Trainer(enable_early_stop=True)
```
---
#### Gradient Clipping
Gradient clipping may be enabled to avoid exploding gradients.
Specifically, this will [clip the gradient norm computed over all model parameters *together*](https://pytorch.org/docs/stable/nn.html#torch.nn.utils.clip_grad_norm_).
``` {.python}
# DEFAULT (ie: don't clip)
trainer = Trainer(gradient_clip_val=0)
# clip gradients with norm above 0.5
trainer = Trainer(gradient_clip_val=0.5)
```
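Roughly speaking, `gradient_clip_val=0.5` amounts to calling PyTorch's `clip_grad_norm_` on all parameters after the backward pass, as in this manual sketch (you don't write this yourself when using the flag; `model` is illustrative):
``` {.python}
import torch

# rescale all gradients together so their total 2-norm is at most 0.5
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
```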
---
#### Inspect gradient norms
Looking at grad norms can help you figure out where training might be going wrong.
``` {.python}
# DEFAULT (-1 doesn't track norms)
trainer = Trainer(track_grad_norm=-1)
# track the LP norm (P=2 here)
trainer = Trainer(track_grad_norm=2)
```
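For reference, the p=2 norm being logged corresponds roughly to the per-parameter gradient norms, as in this manual sketch (`model` is an illustrative `nn.Module`):
``` {.python}
# manual sketch: L2 norm of each parameter's gradient after backward()
norms = {name: p.grad.norm(2).item()
         for name, p in model.named_parameters()
         if p.grad is not None}
```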
---
#### Set how much of the training set to check
If you don't want to check 100% of the training set (for debugging or if it's huge), set this flag.
`train_percent_check` will be overridden by `overfit_pct` if `overfit_pct > 0`.
``` {.python}
# DEFAULT
trainer = Trainer(train_percent_check=1.0)
# check 10% only
trainer = Trainer(train_percent_check=0.1)
```