* Fix #2550: allow loading a model from a checkpoint when `self.save_hyperparameters()` was not called (see the sketch after this group).
* Cleaner way of avoiding the `self.save_hyperparameters()` call in `EvalModelTemplate`.
* Clean up `_load_model_state`.
* Fixed side effect in `test_load_model_from_checkpoint_extra_args`.
* Apply suggestions from code review
* fix
* try
* Fixed missing argument in `EvalModelTemplate`.
* fix
* update
* fix loading
* add test
* prune
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
Co-authored-by: William Falcon <waf2107@columbia.edu>
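A minimal sketch of what the #2550 fix enables: loading a checkpoint for a module that never called `self.save_hyperparameters()`, by passing the constructor arguments to `load_from_checkpoint` explicitly. `LitModel`, its `hidden_dim` argument, and the checkpoint path are hypothetical placeholders, not code from the PR.

```python
from torch import nn
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    # Hypothetical module: hyperparameters are NOT captured via self.save_hyperparameters().
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.layer = nn.Linear(28 * 28, hidden_dim)

    def forward(self, x):
        return self.layer(x)


# Since no hparams were stored in the checkpoint, the constructor
# arguments are supplied again at restore time:
model = LitModel.load_from_checkpoint("example.ckpt", hidden_dim=64)
```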
* Make `current_epoch` and `global_step` match the trainer's values after a model restore (sketch after this group).
* remove assignment here
* test
* minor modification
* Update pytorch_lightning/core/lightning.py
type check, better clarity
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
* comments for current_epoch and global_step properties
* Update tests/models/test_restore.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* update comments according to the changes made
* Update tests/models/test_restore.py
* Add `current_epoch` and `global_step` to the TorchScript (jit) ignore list.
* Add comments to CHANGELOG
* Update CHANGELOG.md
* Update tests/models/test_restore.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
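A rough sketch of the internal change described in this group, assuming the module simply mirrors the trainer's counters and lists the properties in TorchScript's ignore list; the fallback value and exact structure are illustrative, not the exact Lightning implementation.

```python
import pytorch_lightning as pl


class LitModule(pl.LightningModule):
    # Properties named here are skipped when the module is scripted with TorchScript,
    # which is what the "jit ignore list" entry above refers to.
    __jit_unused_properties__ = ["current_epoch", "global_step"]

    @property
    def current_epoch(self) -> int:
        # Delegate to the trainer so the value stays in sync after a checkpoint restore.
        return self.trainer.current_epoch if self.trainer is not None else 0

    @property
    def global_step(self) -> int:
        return self.trainer.global_step if self.trainer is not None else 0
```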
* Test selecting the correct backend; temporary backends while SLURM and TorchElastic (TE) are decoupled.
* Fix `val_progress_bar` total with `num_sanity_val_steps` (usage sketch below).
* Update CHANGELOG.
* Move test.
* Replaced with a sanity flag and applied review suggestions.
* ref: finish #3733
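For context on the progress-bar fix, a hedged usage sketch of the real `num_sanity_val_steps` Trainer flag; `model` and `dm` are placeholders.

```python
from pytorch_lightning import Trainer

# Run only 2 validation batches as a sanity check before training;
# the sanity-check progress bar total should reflect this limit
# rather than the full length of the validation dataloader.
trainer = Trainer(num_sanity_val_steps=2, max_epochs=1)
# trainer.fit(model, datamodule=dm)  # model and dm are placeholders
```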
* Remove deprecated tests.
* Update pytorch_lightning/accelerators/ddp_backend.py
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
* ref: fix metric err
* ref: merge
* ref: decoupled ddp2
* ref: clean up ddp before final fix
* Fix tracking of the best checkpoint when `save_top_k=-1` (sketch after this group).
* update test
* clean up
* add changelog
* Enable loading the best checkpoint in `trainer.test()`.
* make trivial
* return right away
* Make the Windows test path happy.
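A hedged sketch of the behaviour these entries describe, using the 1.0-era APIs: `ModelCheckpoint(save_top_k=-1)` keeps every checkpoint while still tracking the best score, and `trainer.test(ckpt_path='best')` loads that best checkpoint. Flag placement may differ slightly across versions, and `model`/`dm` are placeholders.

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# save_top_k=-1 keeps every checkpoint, but the callback still records
# which one scored best on the monitored metric.
checkpoint_cb = ModelCheckpoint(monitor="val_loss", save_top_k=-1)

trainer = Trainer(max_epochs=3, callbacks=[checkpoint_cb])
# trainer.fit(model, datamodule=dm)   # model and dm are placeholders
# trainer.test(ckpt_path="best")      # loads the tracked best checkpoint
```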
* ref: test val epoch end
* ref: test log dict (see the `log_dict` sketch below)
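For context on the `log_dict` and validation-epoch-end tests, a hedged sketch of the real `self.log_dict` API inside hypothetical validation hooks; `_shared_eval` is a made-up helper.

```python
import torch
import pytorch_lightning as pl


class LitModule(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        loss, acc = self._shared_eval(batch)  # hypothetical helper returning two tensors
        # Log several metrics at once; dict keys become metric names.
        self.log_dict({"val_loss": loss, "val_acc": acc}, prog_bar=True)
        return loss

    def validation_epoch_end(self, outputs):
        # Aggregate the per-step losses returned by validation_step.
        self.log("val_loss_epoch", torch.stack(outputs).mean())
```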
* Split out changes from #3563 to make that PR easier to review. This formats the file according to the Black formatter
* Store a reference to the trainer on the datamodule (see the sketch after this group).
Fixes #3682
* Update data_connector.py
* Update test_datamodules.py
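A small hedged sketch of what storing the trainer reference enables: datamodule hooks can read back trainer attributes once the data connector has attached it. The `trainer` attribute name is taken from the PR description; treat it as an assumption.

```python
import pytorch_lightning as pl


class MyDataModule(pl.LightningDataModule):
    def setup(self, stage=None):
        # After Trainer.fit() attaches this datamodule, a back-reference to the
        # trainer is available (assumed attribute name: `trainer`).
        if getattr(self, "trainer", None) is not None:
            print(f"attached to a trainer running for {self.trainer.max_epochs} epochs")
```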
* Split out changes from #3563 to make that PR easier to review. This formats the file according to the Black formatter
* Support checkpoint hooks for the datamodule.
Refactor `on_save_checkpoint`/`on_load_checkpoint` into a separate hook class that both the LightningModule and the DataModule inherit.
Add spots in the callback connector to call the new datamodule hooks if available (see the sketch after this group).
* hooks formatting
* Update hooks.py
* Update checkpoint_connector.py
* Update lightning.py
* update based on upstream/master
checkout upstream/master
* Update checkpoint_connector.py
* add tests
* undo format revert
* Updated CHANGELOG.md
* add checkpoint hooks
* add Dict type
* import CheckpointHooks
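A hedged sketch of the new datamodule checkpoint hooks described above; the hook names mirror the LightningModule ones, and the checkpoint key is a made-up example.

```python
from typing import Any, Dict

import pytorch_lightning as pl


class MyDataModule(pl.LightningDataModule):
    def __init__(self):
        super().__init__()
        self.vocab: Dict[str, int] = {}

    def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
        # Persist datamodule state alongside the model weights.
        checkpoint["my_datamodule_vocab"] = self.vocab  # hypothetical key

    def on_load_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
        # Restore the state when resuming from this checkpoint.
        self.vocab = checkpoint.get("my_datamodule_vocab", {})
```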
* Add a more direct test that multi-GPU training works (sketch below).
* Update tests/base/develop_pipelines.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
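For the multi-GPU test entry, a hedged sketch of a minimal 2-GPU DDP run using the Trainer flags from that era (`gpus`, `distributed_backend`); newer releases rename these, and `model`/`dm` are placeholders.

```python
from pytorch_lightning import Trainer

# Minimal 2-GPU DDP run; flag names follow the 1.0-era API.
trainer = Trainer(gpus=2, distributed_backend="ddp", max_epochs=1)
# trainer.fit(model, datamodule=dm)  # model and dm are placeholders
```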
* ref: test val epoch end
* ref: make each backend independent for easier, isolated debugging
* ref: test val epoch end