lightning/pytorch_lightning/loops

Latest commit 2b0075a47e by Adrian Wälchli: "Teardown sync-batchnorm after training (#11078)", 2021-12-16 18:58:44 +00:00
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Name            Last commit message                                                       Date
batch/          Simplify several profile calls (#11031)                                   2021-12-14 19:49:19 +00:00
dataloader/     Follow-up changes to #10575 (#10957)                                      2021-12-07 15:27:52 +01:00
epoch/          Simplify several profile calls (#11031)                                   2021-12-14 19:49:19 +00:00
optimization/   Follow-up changes to #10575 (#10957)                                      2021-12-07 15:27:52 +01:00
__init__.py     Loop and test restructuring (#9383)                                       2021-09-10 13:18:24 +00:00
base.py         Teardown sync-batchnorm after training (#11078)                           2021-12-16 18:58:44 +00:00
fit_loop.py     Remove `_call_accelerator_hook` Trainer method (#10999)                   2021-12-09 02:27:13 +01:00
utilities.py    added UserWarnings if max_epochs not set in the Trainer class (#10700)    2021-12-06 09:44:25 +00:00