* ref: decouple apex, second attempt (part 4/n)
* Update lightning.py
* ref: decouple apex, second attempt (part 4/n)
* make current_epoch and global_step match the trainer's values after model restore
* remove assignment here
* test
* minor modification
* merge with parent's master
* [bug-fix]: update trainer properties
* minor comment fix
* reset train loader in the `on_train_epoch_start` hook (see the sketch after this block)
* makes sure the changes work
* minor change
* update changelog
* adding unit test for reload_dataloaders_every_epoch arg
* modified changelog to add PR number
* revert imports
* changes to unit test
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
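As a rough illustration of the `reload_dataloaders_every_epoch` behavior referenced above: the trainer can simply re-request the train dataloader at the start of each epoch. The `MiniTrainer` class and its `fit` loop below are simplified stand-ins for illustration, not Lightning's actual internals:

```python
# A minimal sketch, assuming a simplified trainer; not Lightning's
# actual implementation of `reload_dataloaders_every_epoch`.
class MiniTrainer:
    def __init__(self, reload_dataloaders_every_epoch: bool = False):
        self.reload_dataloaders_every_epoch = reload_dataloaders_every_epoch
        self.train_dataloader = None

    def on_train_epoch_start(self, model) -> None:
        # Rebuild the loader every epoch when the flag is set, so
        # epoch-dependent samplers or augmentations are re-initialized.
        if self.train_dataloader is None or self.reload_dataloaders_every_epoch:
            self.train_dataloader = model.train_dataloader()

    def fit(self, model, max_epochs: int = 3) -> None:
        for _ in range(max_epochs):
            self.on_train_epoch_start(model)
            for batch in self.train_dataloader:
                ...  # training step would run here
```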
* videos in trainer API
* videos in docs
* Update new-project.rst
* docs
* added tests for multiple optimizers and dataloaders
* TPU device check (see the sketch after this block)
* replaced with xmp spawn
* Revert "replaced with xmp spawn"
This reverts commit 6835380f
* replaced all instances of XLA_AVAILABLE
* moved inner_f to global scope
* refactored
* added changelog
* added TPU_AVAILABLE variable
* fix codefactor issues
* removed from trainer and early stopping
* add TORCHXLA_AVAILABLE check
* added tests
* refactoring
* Update pytorch_lightning/utilities/xla_device_utils.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* updated function names
* fixed bug
* updated CHANGELOG.md
* added todo
* added type hints
* isort and black
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
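The TPU check described above probes for an XLA device from a separate process, so a hung runtime cannot block the caller. Below is a hedged sketch in the spirit of `pytorch_lightning/utilities/xla_device_utils.py`; the names `_fetch_xla_device` and `tpu_device_exists` and the 25-second timeout are illustrative, not the library's exact API:

```python
# A minimal sketch of a TPU availability probe; illustrative only.
import importlib.util
import multiprocessing as mp

TORCHXLA_AVAILABLE = importlib.util.find_spec("torch_xla") is not None

def _fetch_xla_device(queue):
    # Defined at module scope (not nested) so spawn-based multiprocessing
    # can pickle it -- cf. the "moved inner_f to global scope" commit above.
    try:
        import torch_xla.core.xla_model as xm
        queue.put(xm.xla_device() is not None)
    except Exception:
        queue.put(False)

def tpu_device_exists(timeout: int = 25) -> bool:
    """Return True only if an XLA device is acquired within `timeout` seconds."""
    if not TORCHXLA_AVAILABLE:
        return False
    queue = mp.Queue()
    proc = mp.Process(target=_fetch_xla_device, args=(queue,))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # probe hung; treat the device as unavailable
    return (not queue.empty()) and queue.get()
```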
* Makes sure logging never happens from non-rank-zero processes
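The fix routes log calls through a rank guard. The decorator below is a simplified stand-in for Lightning's `rank_zero_only` utility; reading the rank from the `RANK`/`LOCAL_RANK` environment variables is an assumption about how the launcher exposes it:

```python
# Simplified stand-in for a rank-zero-only logging guard; not the exact
# Lightning utility.
import logging
import os
from functools import wraps

log = logging.getLogger(__name__)

def rank_zero_only(fn):
    """Run `fn` only on the rank-zero process; no-op everywhere else."""
    @wraps(fn)
    def wrapped(*args, **kwargs):
        # Assumption: the launcher exposes the rank via RANK or LOCAL_RANK.
        rank = int(os.environ.get("RANK", os.environ.get("LOCAL_RANK", "0")))
        if rank == 0:
            return fn(*args, **kwargs)
    return wrapped

@rank_zero_only
def rank_zero_info(message: str) -> None:
    log.info(message)
```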
* added bug report model
* fix local model
* make current_epoch and global_step match the trainer's values after model restore (see the sketch after this block)
* remove assignment here
* test
* minor modification
* Update pytorch_lightning/core/lightning.py
type check, better clarity
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
* comments for current_epoch and global_step properties
* Update tests/models/test_restore.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* update comments according to the changes made
* Update tests/models/test_restore.py
* add current_epoch, global_step to jit ignore list
* Add comments to CHANGELOG
* Update CHANGELOG.md
* Update tests/models/test_restore.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
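The restore fix in this block boils down to the module's `current_epoch` and `global_step` properties delegating to the attached trainer instead of holding their own copies, so the values are correct after a checkpoint restore. A minimal sketch, with the fallback values and the `MiniLightningModule` name assumed for illustration:

```python
# Simplified from the idea in pytorch_lightning/core/lightning.py;
# not the exact source.
class MiniLightningModule:
    def __init__(self):
        self.trainer = None  # attached by the trainer during fit/restore

    @property
    def current_epoch(self) -> int:
        # Mirror the trainer so the value matches after model restore.
        return self.trainer.current_epoch if self.trainer is not None else 0

    @property
    def global_step(self) -> int:
        # Likewise mirrored from the trainer.
        return self.trainer.global_step if self.trainer is not None else 0
```

Because they are properties, these names also have to be excluded from TorchScript scripting, which is what the jit ignore-list commit above refers to.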
* test selecting the correct backend; temp backends while SLURM and TorchElastic (TE) are decoupled (see the sketch below)
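As a rough sketch of what the backend-selection test exercises: cluster environments take precedence over the user-requested backend while the SLURM and TorchElastic paths are decoupled from the core backends. The env-var names and backend labels below are illustrative assumptions, not Lightning's exact detection logic:

```python
import os

def select_backend(requested: str = "ddp") -> str:
    # Cluster launchers override the requested backend; the env-var names
    # here are assumptions for illustration only.
    if "SLURM_NTASKS" in os.environ:
        return "ddp_slurm"
    if "TORCHELASTIC_RUN_ID" in os.environ:
        return "ddp_torchelastic"
    return requested
```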