lightning/pytorch_lightning/trainer
Latest commit d1efae2e47 by simran2905, 2021-06-21 15:08:07 +00:00:
Fix checkpointed state for lr_schedulers with step interval (#7877)
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Name | Last commit message | Last commit date
connectors | Fix checkpointed state for lr_schedulers with step interval (#7877) | 2021-06-21 15:08:07 +00:00
__init__.py | |
callback_hook.py | Add `warning_cache.deprecation` and set warning stacklevel [1/2] (#8005) | 2021-06-18 11:50:24 +00:00
configuration_validator.py | [feat] Allow overriding optimizer_zero_grad and/or optimizer_step when using accumulate_grad_batches (#7980) | 2021-06-17 12:50:37 +02:00
data_loading.py | Loop Refactor 1/N - Training Loop (#7871) | 2021-06-15 12:55:06 +00:00
deprecated_api.py | Prune deprecated trainer attributes (#7501) | 2021-05-12 20:10:15 +00:00
evaluation_loop.py | `is_overridden` improvements (#7918) | 2021-06-11 13:47:00 +02:00
logging.py | Move metrics_to_scalars to a dedicated utilities file (#7180) | 2021-04-24 10:25:33 +01:00
model_hooks.py | Deprecate TrainerModelHooksMixin (#7422) | 2021-05-07 13:19:36 -07:00
optimizers.py | |
predict_loop.py | Improve `LightningDataModule` hook test and fix `dataloader_idx` argument (#7941) | 2021-06-14 12:42:13 +00:00
progress.py | [2/N] Define dataclasses for progress tracking (#7574) | 2021-05-22 03:09:08 +02:00
properties.py | Merge pull request #7990 from PyTorchLightning/refactor/loops/loops_everywhere_eval | 2021-06-18 08:54:59 -04:00
states.py | `TrainerState` refactor [5/5] (#7173) | 2021-05-04 12:50:56 +02:00
supporters.py | Bugfix/Multiple dataloaders (#7433) | 2021-05-11 16:33:29 +02:00
trainer.py | Merge pull request #7990 from PyTorchLightning/refactor/loops/loops_everywhere_eval | 2021-06-18 08:54:59 -04:00
training_tricks.py | Use `torch.nn.utils.clip_grad_norm_` and add `clip_grad_by_value` support for TPU (#7025) | 2021-05-07 16:41:39 +00:00