* Add and fix the docs of BackboneLambdaFinetuningCallback
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Roger Shieh <sh.rog@protonmail.ch>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* add test
* resolve bug
* update test
* fix wrong copy / paste
* update test
* resolve a second bug
Co-authored-by: Ubuntu <ubuntu@ip-172-31-62-109.ec2.internal>
* disable version if not required
* pep
* chlog
* improve test
* parametrize test and update del_list
* Update pytorch_lightning/callbacks/model_checkpoint.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* try appending version to already saved ckpt_file
* Revert "try appending version to already saved ckpt_file"
This reverts commit 710e05e01f738d982aabf1f36c09fa59293e5c0c.
* add more assertions
* use BoringModel
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Roger Shieh <sh.rog@protonmail.ch>
* annotate all unused vars
* rank_zero_warn
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* f1 fixed
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Do not warn when the name key is used
* Missing line
* Consistency
* Update pytorch_lightning/callbacks/lr_monitor.py
* Update docs
* Update pytorch_lightning/core/lightning.py
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* Update CHANGELOG
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* Added changes for RPC plugin
* Add missing kwargs
* Fix code format
* Refactor loading by introducing an is_distributed var; fix optimizer step flow
* Add rpc guard
* Added docstrings and typing
* resolve comments
* Add additional rpc hook, refactor name of exit process hook for clarity
* remove annotation
* Modify behaviour to allow optional return, add test for rpc plugin
* resolve tests
* rename is_ddp_based
* update
* update for windows
* update
* resolve test
* code smell
* Revert back to init_ddp_connection for backwards compat
* Swap to explicit name for property
* Add missing speed parity increase for CI variability, fix call counts for child process
Co-authored-by: tchaton <thomas@grid.ai>
* Added changeable extension variable for model checkpoints
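A minimal sketch of the hook these commits add, assuming only the `FILE_EXTENSION` class attribute (the `.pt` value is illustrative):
```python
from pytorch_lightning.callbacks import ModelCheckpoint

class PtModelCheckpoint(ModelCheckpoint):
    # Override the extension used for every saved checkpoint (default ".ckpt").
    FILE_EXTENSION = ".pt"
```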
* Removed whitespace
* Removed the last bit of whitespace
* Wrote tests for FILE_EXTENSION
* Fixed formatting issues
* More formatting issues
* Simplify test by just using defaults
* Formatting to PEP8
* Added dummy class that inherits ModelCheckpoint; run only one batch instead of epoch for integration test
* Fixed too much whitespace formatting
* some changes
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
* true final value of global step
* ch check
* tests
* save each validation interval
* wip
* add test
* wip
* fix tests, revert old edits, fix merge conflicts, update doctests
* test + bugfix
* sort files
* format test
* suggestion by ananth
* added changelog
* naming
* docs
* example
* suggestion
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* fix test
* pep
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Add dirpath and filename parameter in ModelCheckpoint
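A hedged sketch of the new signature, which replaces the old single `filepath` argument; the directory and pattern below are illustrative:
```python
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(
    dirpath="my/checkpoints",               # where checkpoints are written
    filename="{epoch:02d}-{val_loss:.2f}",  # metric names are filled in at save time
)
```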
* remove old function
* chlog
* codefactor
* update tests
* docs
* fix doctest and added tests
* pathlib dirpath
* dep version and docs
* try fix doctest
* pep
* suggestions
Co-authored-by: carmocca <carlossmocholi@gmail.com>
* suggestions
* fix test
* pep
* trigger tests
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* suggestions
* try fix windows test
* add and update some tests
* trigger tests
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
* limit monitor callback with row_log_interval
* try fix gpu test
* log_every_n_steps
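As a usage sketch of the rename (the value is illustrative):
```python
from pytorch_lightning import Trainer

# `row_log_interval` becomes `log_every_n_steps`; monitor callbacks respect it too.
trainer = Trainer(log_every_n_steps=50)
```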
* Apply suggestions from code review
* rebase and staticmethod
* suggestions
Co-authored-by: Jeff Yang <ydcjeff@outlook.com>
* Fix info message when EarlyStopping 'mode' not provided
* fixup! Fix info message when EarlyStopping 'mode' not provided
* Apply suggestions from code review
Co-authored-by: Jeff Yang <ydcjeff@outlook.com>
* tpu device check
* replaced with xmp spawn
* Revert "replaced with xmp spawn"
This reverts commit 6835380f
* replaced all instances of XLA_AVAILABLE
* moved inner_f to global scope
* made refactors
* added changelog
* added TPU_AVAILABLE variable
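A self-contained sketch of the availability-check pattern behind `TPU_AVAILABLE` / `TORCHXLA_AVAILABLE`; the helper name is hypothetical, not Lightning's exact implementation:
```python
import importlib

def _module_available(name: str) -> bool:
    # Return True if the given module imports cleanly, False otherwise.
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

TORCHXLA_AVAILABLE = _module_available("torch_xla")
```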
* fix codefactor issues
* removed form trainer and early stopping
* add TORCHXLA_AVAILABLE check
* added tests
* refactoring
* Update pytorch_lightning/utilities/xla_device_utils.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* updated function names
* fixed bug
* updated CHANGELOG.md
* added todo
* added type hints
* isort and black
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
* Fix val_progress_bar total with num_sanity_val_steps
* chlog
* Fix val_progress_bar total with num_sanity_val_steps
* move test
* replaced with sanity flag and suggestions
* ref: fix metric err
* ref: merge
* ref: decoupled ddp2
* ref: clean up ddp before final fix
* fix topk=-1 tracking best
* update test
* clean up
* add changelog
* enable loading best topk in trainer.test()
* make trivial
* return right away
* make windows test path happy
* topk default
* fix test that doesn't have best available
* remove print
* #3680 changes
* fix backward
* temp revert
* add warning by carmocca
* format docstring for test
* specify monitor in ES test with top k
* improve docstring for save_last
* remove commented lines
* revert passing model to test
* undo regex mistake
* changelog
* fix test covering case monitor=None and save_top_k=-1
* docstring
* fix test for saving all checkpoints
* don't save checkpoints for save_top_k=0
* add test for save_top_k=0
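A short sketch of the `save_top_k` semantics these tests pin down (the monitor value is illustrative):
```python
from pytorch_lightning.callbacks import ModelCheckpoint

ModelCheckpoint(monitor="val_loss", save_top_k=3)    # keep the best 3 checkpoints
ModelCheckpoint(monitor="val_loss", save_top_k=-1)   # keep every checkpoint
ModelCheckpoint(save_top_k=0)                        # save nothing (fixed above)
ModelCheckpoint(monitor="val_loss", save_last=True)  # also save a `last` checkpoint
```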
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* enable any logged or written metric to be accessible in callbacks
* clarify forward
* ref: result 1/n (make monitor default to checkpoint_on to simplify result syntax)
* Update pytorch_lightning/callbacks/model_checkpoint.py
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
* ref: result 1/n (make monitor default to checkpoint_on to simplify result syntax)
* force crash when max_epochs < epochs in a checkpoint
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
* Update gradient_accumulation_scheduler.py
add types for gradient accumulation scheduler callback
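A usage sketch, assuming the documented `scheduling` dict of {epoch: accumulation factor}:
```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import GradientAccumulationScheduler

# Accumulate 4 batches from epoch 4 on, then 8 batches from epoch 8 on.
accumulator = GradientAccumulationScheduler(scheduling={4: 4, 8: 8})
trainer = Trainer(callbacks=[accumulator])
```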
* Apply black formatting to model checkpoint callback
auto-format, no other changes
* Update gradient_accumulation_scheduler.py
drop other changes
* Add type hints to model checkpoint callback
* Update model_checkpoint.py
remove trainer/lightning modules types to avoid circular import
* Fix ModelCheckpoint's name formatting
* Fix failing tests
* Add dot to CHECKPOINT_SUFFIX
* Set variables to their default values at the end of tests
* Fix logic for filepath='' and filename=None. Add test
* Fix Windows tests
* Fix typo. Remove leading line break and zeroes
* Remove CHECKPOINT_SUFFIX
* Fix typos. Use appropriate f-string format
* Apply suggestions from code review
* Fix broken tests after #3320
* Finish changes suggested by Borda
* Use explicit test var names
* Apply suggestions
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* Apply suggestions
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* Update CHANGELOG
* Apply suggestions from code review
* for
* prepend whitespace in warn msg
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Fixes the test for early stopping without val step.
The expression that checked whether early stopping was triggered had an off-by-one error and hence was true even if early stopping was not triggered.
Furthermore, set patience to 0 and max epochs to 10 to ensure the loss has enough time to flatten.
* Fixes early stopping without val step.
The issue was that only the `early_stop_on` key was checked and not an arbitrary monitor key.
* Fixes the branch which checks whether early stopping is done during validation.
Before, only `val_early_stop_on` was checked. Since arbitrary keys can be used, the set of possible validation keys cannot be exhaustive. Hence this disables "early stopping on_train_epoch_end" via an instance attribute if early stopping was executed in on_validation_epoch_end.
Furthermore, adds a test which ensures arbitrary keys work.
* Improve the check whether eval results are used.
Only disable early stopping with train results if eval results are actually used. Before, it was always disabled in ``on_validation_epoch_end``.
Rename and document the instance variable to make it clearer.
* Remove wrong documentation on the behaviour of early stopping with the train result dict.
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* cleaning up stale logger tests
* change t() to transpose() as xla devices do not support .t() on 1-dim tensor
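A minimal illustration of the substitution, not the actual changed code:
```python
import torch

x = torch.randn(5)     # 1-dim tensor
y = x.transpose(0, 0)  # equivalent to x.t() here, but accepted by XLA devices
```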
* detach tensor before copying
* Revert "detach tensor before copying"
This reverts commit 37cc7bbe
* changed dims
* added test_result_obj_on_tpu
* detach before copying
* replace torch.cat with sum
* tests to ensure correct dataloading interval and sequence
* Fix typo
* ref: group prepare data hook (6) (#3212)
* group prepare data hook
* Fix typo
Co-authored-by: William Falcon <waf2107@columbia.edu>
* Fix num_sanity_val_steps according to limit_val_steps
* fix test
* add num_sanity_batches
* pep
* update docstring in test
* add more test
* chlog
* update comments and docstring in test
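A sketch of the fixed interaction (values illustrative):
```python
from pytorch_lightning import Trainer

# Sanity checking now respects the validation limit: these settings run
# min(5, 2) = 2 sanity batches per val dataloader.
trainer = Trainer(num_sanity_val_steps=5, limit_val_batches=2)
```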
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Adrian Wälchli <adrian.waelchli@inf.unibe.ch>
Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
* re-enabled naming metrics in ckpt name
* added warning when changing monitor and using results obj
* fix missing return statement. Do not normalize remote paths
* Update pytorch_lightning/utilities/cloud_io.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Add some documentation that we now support s3 and hdfs paths
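A hedged sketch, assuming the `dirpath` argument introduced elsewhere in this log; the bucket is illustrative:
```python
from pytorch_lightning.callbacks import ModelCheckpoint

# Remote URLs are passed through without normalization.
checkpoint_cb = ModelCheckpoint(dirpath="s3://my-bucket/checkpoints/")
```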
* suggestion from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Update lr_logger.py
when logging the learning rate, we should provide different logging choices, including 'step' and 'epoch'
* Update lr_logger.py
add some type annotations and docstrings
* Update lr_logger.py
fixed a bug where `on_train_batch_start()` was never triggered; use `on_batch_start()` instead. Added an `interval` arg so that learning rates can be recorded with respect to `global_step` or `current_epoch`.
* Update lr_logger.py
restore _extract_lr()
* suggestion
* Update lr_logger.py
modified `_extract_lr()`; it no longer needs the `interval` parameter.
* Update test_lr_logger.py
SkafteNicki's suggestion
* log_interval now supports `None`, `step`, `epoch`
* change `log_interval` to `logging_interval`
* Update test_lr_logger.py
* Update lr_logger.py
* put types check into `on_train_start()`
* cleanup
* docstring typos
* minor changes from suggestions
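A usage sketch under the callback's current `LearningRateMonitor` name (it was still `lr_logger.py` at the time of these commits):
```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import LearningRateMonitor

# None logs at each scheduler's own interval; "step" and "epoch" force one.
lr_monitor = LearningRateMonitor(logging_interval="step")
trainer = Trainer(callbacks=[lr_monitor])
```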
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
commit 29fb0506cd38a15c359e369cc8bc4435916b0c78
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 19:35:30 2020 +0000
fix checking for version for docs to build
commit 467fd640db02275972c7111af031c86bb59333e9
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:56:05 2020 +0000
remove no local test
commit a7cc9f88de00feec1a5406874d05313c42bd004c
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:46:44 2020 +0000
fix
commit 3fdbb729da79ae9348c83410a138666bad467951
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:23:30 2020 +0000
revert requirements
commit 9b8686bd83e2bc243cf329e26f1c667c6949cf67
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:16:42 2020 +0000
make it a fixture
commit eec74953d24c8b25268d3b6dde3cc4affdd5cb8f
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:01:32 2020 +0000
fix up the testing
commit 896d94a0e60083d52c81db2a036b7f1e015cad11
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 17:47:28 2020 +0000
fix some tests
commit 6d22bde19767bf2b71dfd44839b01efdf6888f83
Merge: 6175d4e2 6ebe0d72
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 10:20:47 2020 +0000
Merge remote-tracking branch 'origin/master' into tb_use_gfile
commit 6175d4e26b15a43c412c26d501762cd0b570616a
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Fri Aug 7 10:16:36 2020 +0000
Use tensorboard.compat.gfile to support remote writing
* Fix fast_dev_run to run for all val_dataloaders
* fast_dev_run check
* changelog
* explicit
* limit_batches with fast_dev_run in init
* add test
* whitespace and comment fix
* comment and assertion
* added tests
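The behaviour under test, as a one-line sketch:
```python
from pytorch_lightning import Trainer

# fast_dev_run touches one train batch and one batch of *each* val dataloader.
trainer = Trainer(fast_dev_run=True)
```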
* update rtol
* Revert "update rtol"
This reverts commit 4320329540.
* added tests
Co-authored-by: William Falcon <waf2107@columbia.edu>
* fix weights_save path and drop ckpt_path
* add tests
* unused import
* update docs
* changelog
* pep8
* fix horovod test
* make backward compatible
* perform same test for all loggers
* fix for when logger=False and weights_save_path is set
* update changelog
* update docs
* update tests
* do not set save dir dynamically
* remove duplicate test
* remove duplicated tests
* update tests
* update tests
* remove remaining ckpt_path references
* move defaults to init as suggested by @Borda
* test deprecation
* r
* patched optimizer closure with sr
* added train step structured result
* added autoreduce for train step
* added auto reduce on train
* added hooks
* finished tests for structured results on train epoch
* cache
* finished tests for structured results on train epoch
* Update pytorch_lightning/callbacks/early_stopping.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/early_stopping.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/early_stopping.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/model_checkpoint.py
* Update pytorch_lightning/core/step_result.py
* finished tests for structured results on train epoch
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* simple
* finished tests for structured results on train epoch
* simple
* simple
* revert
* finished tests for structured results on train epoch
* Update tests/base/deterministic_model.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* finished tests for structured results on train epoch
* docstring typos
* finished tests for structured results on train epoch
* Update pytorch_lightning/core/step_result.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update pytorch_lightning/overrides/data_parallel.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* mlflow rework
* logger save_dir
* folder
* mlflow
* simplify
* fix test
* add a test for file dir contents
* new line
* changelog
* docs
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* test for comet logger
* improve mlflow checkpoint test
* prevent comet logger error on pytest exit
* test tensorboard save dir structure
* wandb save dir test
* skip test on windows
* add mlflow to pickle tests
* wandb
* code factor
* remove unused imports
* remove unused setter
* wandb mock
* wip mock
* wandb tests with mocking
* clean up
* comments
* include wandblogger in test
* clean up
* missing argument
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* added early stop tpu test
* Fixes #2455
* added early stop tpu test
* fix and test for ddp block logging rank > 0
* rename
* use the dummy logger
* dummy logger test
* set the logger in model
* decorator for rank zero experiment
* simplify check
* simplify
* fix problem with None in checkpoint path
* revert configure logger
* unused import
* offline
* try rank 0 decorator in checkpoint
* try fix test
* imgs
* add asserts to make sure log zero only saves checkpoints
* fix tpu tests
Co-authored-by: William Falcon <waf2107@columbia.edu>
* Import ipywidgets before importing tqdm.auto to make sure ipywidgets is installed.
* Updated CHANGELOG.md
* Updated ipywidgets import checks per @awaelchli's comments.
Co-authored-by: William Falcon <waf2107@columbia.edu>
* add state_dict for early stopping
* move best attr after monitor_op defined
* improve early stopping and model checkpoint callbacks
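A hedged sketch of the callback whose state now persists, using today's `callbacks` argument; monitor and patience values are illustrative:
```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=3, mode="min")
trainer = Trainer(callbacks=[early_stop])
# With state_dict support, the wait count and best score survive a resume
# (e.g. Trainer(resume_from_checkpoint="some.ckpt")) along with the model.
```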
* fix formatting
* fix attr init order
* clean up setting of default_root_dir attr
* logger needs default root dir set first
* reorg trainer init
* remove direct references to checkpoint callback
* more fixes
* more bugfixes
* run callbacks at epoch end
* update tests to use on epoch end
* PR cleanup
* address failing tests
* refactor for homogeneity
* fix merge conflict
* separate tests
* tests for early stopping bug regressions
* small fixes
* revert model checkpoint change
* typo fix
* fix tests
* update train loop
* cannot pass an int as default_save_path
* refactor log message
* fix test case
* appease the linter
* fix some doctests
* move config to callback
* fixes from rebase
* fixes from rebase
* chlog
* docs
* reformat
* formatting
* fix
* fix
* fixes from rebase
* add new test for patience
* Update pytorch_lightning/callbacks/model_checkpoint.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/model_checkpoint.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update tests/callbacks/test_early_stopping.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* fix formatting
* remove enable_early_stop attribute
* fix test with new epoch indexing
* fix progress bar totals
* fix off by one error (see #2289) epoch starts at 0 now
* added missing imports
* fix hpc_save folderpath
* fix formatting
* fix tests
* small fixes from a rebase
* fix
* tmpdir
* wandb
* fix merge conflict
* add back evaluation after training
* test_resume_early_stopping_from_checkpoint TODO
* undo the horovod check
* update changelog
* remove a duplicate test from merge error
* try fix dp_resume test
* add the logger fix from master
* try remove default_root_dir
* try mocking numpy
* try import numpy in docs test
* fix wandb test
* pep 8 fix
* skip if no amp
* dont mock when doctesting
* install extra
* fix the resume ES test
* undo conf.py changes
* revert remove comet pickle from test
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update weights_loading.rst
* renamed flag
* revert the None check in logger experiment name/version
* add the old comments
* _experiment
* test chckpointing on DDP
* skip the ddp test on windows
* cloudpickle
* renamed flag
* parentheses for clarity
* apply suggestion max epochs
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jeremy Jordan <jtjordan@ncsu.edu>
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
Co-authored-by: Jeremy Jordan <13970565+jeremyjordan@users.noreply.github.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
* Revert "deprecated: epoch indexing from 1 (#2206)"
This reverts commit f94b919b
* chlog
* grad index
* Apply suggestions from code review
* tests
* fix
* test
* drop train_percent_check
* chlog
* deprecated
* tests
* Apply suggestions from code review
* tests
* hydra support
* tests
* hydra support
* tests
* typo
* Update test_dataloaders.py
* docs
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* fixed percent check for val/test
* overfit_pct now uses train loaders for val and test and does not shuffle
* add on fit_start on fit_end hooks
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Patch for issue 1815, which will allow EarlyStopping to work on precision=16
* Added whitespace to the end of the line so CI/CD can rerun. There was no reason for the latest macOS test to have been cancelled.
* Format.
* Add an additional attribute to ModelCheckpoint to keep track of the best model's path
Currently, only the best metric value is directly tracked. This new attribute will help in use cases where the trained model needs to be used or tracked right after training.
* Add small description and usage example to docs
* Fix PEP8 issues
* Fix doctest example
* Fix expected output in doctest
* Apply suggestions from code review
* Show example as code block instead of doctest
* Apply suggestions from code review
* Update CHANGELOG.md
* Rename `ModelCheckpoint.best` to `ModelCheckpoint.best_model_score`
Also rename `ModelCheckpoint.best_model` (added in this PR) to `ModelCheckpoint.best_model_path`, for consistency, and `kth_best_model` to `kth_best_model_path`.
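A short sketch of the renamed attributes (the monitor value is illustrative):
```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(monitor="val_loss")
trainer = Trainer(callbacks=[checkpoint_cb])
# After trainer.fit(model):
#   checkpoint_cb.best_model_score     # formerly `.best`
#   checkpoint_cb.best_model_path      # the new attribute from this PR
#   checkpoint_cb.kth_best_model_path  # formerly `kth_best_model`
```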
* Update pytorch_lightning/trainer/training_io.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Add warning when loading checkpoint from an old version
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* fixed undesired behaviour due to dict.fromkeys
* a test for log length consistency
* runtime-warn if no schedulers are configured
* chlog
* move
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
* Fixes PyTorchLightning/pytorch-lightning#490
`EarlyStopping` should check the metric of interest `on_validation_end` rather than `on_epoch_end`.
In a normal scenario this does not cause a problem, but in combination with `check_val_every_n_epoch>1` in the `Trainer` it results in a warning or a `RuntimeError`, depending on `strict`.
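A minimal sketch of the failing combination, using today's `callbacks` argument (values illustrative):
```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# Checking the monitor on_epoch_end raised a warning (or a RuntimeError with
# strict=True) on epochs without fresh metrics; on_validation_end avoids that.
early_stop = EarlyStopping(monitor="val_loss", strict=True)
trainer = Trainer(callbacks=[early_stop], check_val_every_n_epoch=2)
```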
* Highlighted that ES callback runs on val epochs in docstring
* Updated EarlyStopping in rst doc
* Update early_stopping.py
* Update early_stopping.rst
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update docs/source/early_stopping.rst
* fix doctest indentation warning
* Train loop calls early_stop.on_validation_end
* chlog
Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
* saves model every epoch
* implement test for save_last
* Update CHANGELOG.md
* changes test description
Co-authored-by: Jeremy Jordan <13970565+jeremyjordan@users.noreply.github.com>
* Add flag to `dump_checkpoint` for only including weights
`ModelCheckpoint` then passes `self.save_weights_only` to the save function.
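As a usage sketch:
```python
from pytorch_lightning.callbacks import ModelCheckpoint

# Only model weights are written; optimizer/trainer state is skipped, so full
# training cannot be restored from such a checkpoint (hence the new message).
checkpoint_cb = ModelCheckpoint(save_weights_only=True)
```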
* Fix tests and add changelog entry
* Add check and descriptive message when training state is restored from a weights only checkpoint
Also add a test for making sure `ModelCheckpoint.save_weights_only` works as expected.
* Fix weights-only test to properly match expected exception
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* squash and rebase
sanity check hooks
sanity check callback hook finish
moved core progress bar functionality into callback
wip
remove duplicate merge
clean up
imports
docs
sanity check progress bar main
sanity
move callback calls
init progress bar callback
configuration and docs
changelog
rate decorator
pass process_position
disable on rank > 0
position index
is_enabled
remove decorator
refactor init tqdm bars
callback method ordering
cannot reset when disabled
sequence -> list
default values
fix has no attr _time()
move on_val_end to proper place
fix the pickle issue
update warning
properties
check for None
remove old comment
switch order
pull out non-tqdm functionality into base class
documentation for the base class
docs
fix refresh rate issue in validation
restrict type hint of trainer arg
more docs
update trainer docs
rst docs
fix lines too long
fix test
add missing type hints
fix typo
move docstring to __init__ solves doctest failures
remove doctest :(( can't fix the pickle error
fix example
simplify by saving trainer reference
fix docs errors
move docstring
initial value
multiple val checks per epoch
simpler handling of inf dataset sizes
update inf docs
renamed training_tqdm_dict
rename get_tqdm_dict
rename occurrences of tqdm
update changelog
fix doctest
fix formatting errors
added callback tests
progress bar on off test
more tests for progress bar
weird test fix?
add ignored property
disable default progress bar in LR finder
change enable/disable behavior
trying doctest in CI again
undo doctest pickle error
undo doctest pickle error :((
remove progress_bar_callback Trainer arg and fix tests
restore progress bar after auto lr find
update docs
fix rebase
fix wrong negation
* fix fast dev run total
* more thorough testing
* remove old args
* fix merge
* separate tests
* type hint total batches
* reduce if
Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>
* is_disabled
Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>
* is_enabled
Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>
* rename enabled/disabled
* move deprecated api
* remove duplicated test from merge
* fix rename is_disabled
* newline
* test also testprogress for fast dev run
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* feat: save checkpoint before deleting old ones
* fix: make sure that the new model is not deleted
* changelog
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
Co-authored-by: William Falcon <waf2107@columbia.edu>
* model_checkpoint to save all models
* changelog
* raise if
Co-authored-by: jamesjjcondon <jamesjjcondon@gmail.com>
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
* improved docs for callbacks
* class references
* make doctest pass
* doctests
* fix lines too long
* fix line too long
* fix permission error in doctest
* Apply suggestions from code review
Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>
* fix doctest
* fix default
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* enabled early stopping/checkpoint even without val step
* name formatting
* version
* testing
* add test
* fix test
* Update model_checkpoint.py
* doctests
* pylint
* tests
* debug
* enabled early stopping/checkpoint even without val step
* fix MNIST download (#1044)
* fix MNIST download
* simple
* name formatting
* version
* testing
* add test
* fix test
* doctests
* tests
* debug
* rebased 1041
* tests
* rebased 1041
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* consolidate callbacks and hooks
* ensure callbacks receive proper arg types
* remove model from init callback events
* clean up early stopping event
* update changelog
* remove on_fit_start and on_fit_end
* fix args for on_init_start and on_init_end
* handle case where early stopping is not used
* show all callback methods
* wrap checkpoint callback logic into proper class
* fix check for main process in checkpoint callback
* move callbacks test to separate file
* refactor arg checks
* get model and call hook on same line
* define trainer_options dict in one call
* add more asserts to callback test
* Add callback system + associated test
* Add trainer and pl_module args to callback methods
* typing
* typo in docstring
* Switch to on_.*_start()
* fix on_test_start
* fix the mess after rebasing