* added warning when changing monitor and using results obj
* fix missing return statement. Do not normalize remote paths
* Update pytorch_lightning/utilities/cloud_io.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Add some documentation that we now support s3 and hdfs paths (see the sketch after this block)
* suggestion from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
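A minimal sketch of what the remote-path support above enables, assuming (per the commit notes) that remote URLs are accepted wherever a local checkpoint/log path used to be; the bucket and paths below are hypothetical:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Assumption: since remote paths are no longer normalized, an s3:// or hdfs://
# URL can be used where a local filesystem path was previously expected.
checkpoint_cb = ModelCheckpoint(filepath="s3://my-bucket/ckpts/{epoch}")   # hypothetical bucket
trainer = Trainer(
    checkpoint_callback=checkpoint_cb,
    resume_from_checkpoint="s3://my-bucket/ckpts/epoch=4.ckpt",            # hypothetical path
)
```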
* Update lr_logger.py
When logging the learning rate, we should provide different logging choices, including 'step' and 'epoch' (see the usage sketch after this block).
* Update lr_logger.py
add some type annotations and docstrings
* Update lr_logger.py
fixed a bug where `on_train_batch_start()` was never triggered; use `on_batch_start()` instead. Added an `interval` argument so that learning rates can be recorded with respect to `global_step` or `current_epoch`.
* Update lr_logger.py
restore _extract_lr()
* suggestion
* Update lr_logger.py
modify `_extract_lr()`; it no longer needs the `interval` parameter.
* Update test_lr_logger.py
SkafteNicki's suggestion
* log_interval now supports `None`, `step`, `epoch`
* change `log_interval` to `logging_interval`
* Update test_lr_logger.py
* Update lr_logger.py
* put type checks into `on_train_start()`
* cleanup
* docstring typos
* minor changes from suggestions
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
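A usage sketch for the learning-rate logging changes above, assuming the callback is `LearningRateLogger` from `pytorch_lightning.callbacks` and that `logging_interval` accepts `None`, `'step'`, or `'epoch'` as described:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateLogger

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": torch.nn.functional.mse_loss(self.layer(x), y)}

    def train_dataloader(self):
        return DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=8)

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [scheduler]

# 'epoch' records the learning rate once per epoch (against current_epoch),
# 'step' records it per global_step, and None falls back to the scheduler's own interval.
lr_logger = LearningRateLogger(logging_interval="epoch")
trainer = pl.Trainer(callbacks=[lr_logger], max_epochs=3)
trainer.fit(TinyModel())
```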
commit 29fb0506cd38a15c359e369cc8bc4435916b0c78
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 19:35:30 2020 +0000
fix checking for version for docs to build
commit 467fd640db02275972c7111af031c86bb59333e9
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:56:05 2020 +0000
remove no local test
commit a7cc9f88de00feec1a5406874d05313c42bd004c
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:46:44 2020 +0000
fix
commit 3fdbb729da79ae9348c83410a138666bad467951
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:23:30 2020 +0000
revert requirements
commit 9b8686bd83e2bc243cf329e26f1c667c6949cf67
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:16:42 2020 +0000
make it a fixture
commit eec74953d24c8b25268d3b6dde3cc4affdd5cb8f
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 18:01:32 2020 +0000
fix up the testing
commit 896d94a0e60083d52c81db2a036b7f1e015cad11
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 17:47:28 2020 +0000
fix some tests
commit 6d22bde19767bf2b71dfd44839b01efdf6888f83
Merge: 6175d4e2 6ebe0d72
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Sat Aug 8 10:20:47 2020 +0000
Merge remote-tracking branch 'origin/master' into tb_use_gfile
commit 6175d4e26b15a43c412c26d501762cd0b570616a
Author: Brendan Fahy <bmfahy@gmail.com>
Date: Fri Aug 7 10:16:36 2020 +0000
Use tensorboard.compat.gfile to support remote writing
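A rough sketch of the remote write this enables, assuming TensorBoard's compatibility layer exposes `tf.io.gfile` (with s3/hdfs support coming from the corresponding optional dependencies); the destination path is hypothetical:

```python
from tensorboard.compat import tf  # resolves to TensorFlow if installed, else TensorBoard's stub

log_dir = "s3://my-bucket/tb-logs/run-0"   # hypothetical remote destination
if not tf.io.gfile.exists(log_dir):
    tf.io.gfile.makedirs(log_dir)

# GFile understands remote schemes, so event files (and anything else) can be
# written without touching the local filesystem.
with tf.io.gfile.GFile(log_dir + "/note.txt", "w") as f:
    f.write("written through tensorboard.compat gfile")
```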
* Fix fast_dev_run to run for all val_dataloaders (see the sketch after this block)
* fast_dev_run check
* changelog
* explicit
* limit_batches with fast_dev_run in init
* add test
* whitespace and comment fix
* comment and assertion
* added tests
* update rtol
* Revert "update rtol"
This reverts commit 4320329540.
* added tests
Co-authored-by: William Falcon <waf2107@columbia.edu>
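An illustrative sketch of the fast_dev_run behaviour targeted above: with the fix, the quick dev run visits a batch from each val dataloader rather than only the first. The tiny module below is a stand-in, not the test from the PR:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

def _rand_loader():
    return DataLoader(TensorDataset(torch.randn(32, 8), torch.randn(32, 1)), batch_size=8)

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": torch.nn.functional.mse_loss(self.layer(x), y)}

    def validation_step(self, batch, batch_idx, dataloader_idx=0):
        x, y = batch
        return {"val_loss": torch.nn.functional.mse_loss(self.layer(x), y)}

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

    def train_dataloader(self):
        return _rand_loader()

    def val_dataloader(self):
        # Two val dataloaders: with the fix, fast_dev_run runs one batch from each.
        return [_rand_loader(), _rand_loader()]

trainer = pl.Trainer(fast_dev_run=True)
trainer.fit(TinyModel())
```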
* fix weights_save path and drop ckpt_path
* add tests
* unused import
* update docs
* changelog
* pep8
* fix horovod test
* make backward compatible
* perform same test for all loggers
* fix for when logger=False and weights_save_path is set
* update changelog
* update docs
* update tests
* do not set save dir dynamically
* remove duplicate test
* remove duplicated tests
* update tests
* update tests
* remove remaining ckpt_path references
* move defaults to init as suggested by @Borda
* test deprecation
* r
* patched optimizer closure with sr
* added train step structured result
* added autoreduce for train step
* added auto reduce on train
* added hooks
* finished tests for structured results on train epoch
* cache
* finished tests for structured results on train epoch
* Update pytorch_lightning/callbacks/early_stopping.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/early_stopping.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/early_stopping.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/model_checkpoint.py
* Update pytorch_lightning/core/step_result.py
* finished tests for structured results on train epoch
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* simple
* finished tests for structured results on train epoch
* simple
* revert
* finished tests for structured results on train epoch
* Update tests/base/deterministic_model.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* finished tests for structured results on train epoch
* docstring typos
* finished tests for structured results on train epoch
* Update pytorch_lightning/core/step_result.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update pytorch_lightning/overrides/data_parallel.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* mlflow rework
* logger save_dir
* folder
* mlflow
* simplify
* fix test
* add a test for file dir contents
* new line
* changelog
* docs
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* test for comet logger
* improve mlflow checkpoint test
* prevent comet logger error on pytest exit
* test tensorboard save dir structure
* wandb save dir test
* skip test on windows
* add mlflow to pickle tests
* wandb
* code factor
* remove unused imports
* remove unused setter
* wandb mock
* wip mock
* wandb tests with mocking
* clean up
* comments
* include wandblogger in test
* clean up
* missing argument
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* added early stop tpu test
* Fixes #2455
* added early stop tpu test
* fix and test for ddp block logging rank > 0
* rename
* use the dummy logger
* dummy logger test
* set the logger in model
* decorator for rank zero experiment (see the sketch after this block)
* simplify check
* simplify
* fix problem with None in checkpoint path
* revert configure logger
* unused import
* offline
* try rank 0 decorator in checkpoint
* try fix test
* imgs
* add asserts to make sure log zero only saves checkpoints
* fix tpu tests
Co-authored-by: William Falcon <waf2107@columbia.edu>
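A rough sketch of the rank-zero guard idea mentioned above ("decorator for rank zero experiment", "use the dummy logger"): on non-zero ranks the logger hands back a no-op experiment object, so only rank 0 writes logs and checkpoints. This is an illustration of the pattern, not the exact code merged; `rank_zero_only` is the existing PL utility, everything else here is hypothetical:

```python
from functools import wraps
from pytorch_lightning.utilities import rank_zero_only


class _NoOpExperiment:
    """Swallows any method call on non-zero ranks (hypothetical helper)."""
    def __getattr__(self, name):
        return lambda *args, **kwargs: None


def rank_zero_experiment(fn):
    """Return the real experiment on rank 0 and a no-op stand-in elsewhere."""
    @wraps(fn)
    def wrapper(self):
        if rank_zero_only.rank == 0:
            return fn(self)
        return _NoOpExperiment()
    return wrapper
```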
* Import ipywidgets before importing tqdm.auto to make sure ipywidgets is installed.
* Updated CHANGELOG.md
* Updated ipywidgets import checks per @awaelchli's comments.
Co-authored-by: William Falcon <waf2107@columbia.edu>
* add state_dict for early stopping
* move best attr after monitor_op defined
* improve early stopping and model checkpoint callbacks
* fix formatting
* fix attr init order
* clean up setting of default_root_dir attr
* logger needs default root dir set first
* reorg trainer init
* remove direct references to checkpoint callback
* more fixes
* more bugfixes
* run callbacks at epoch end
* update tests to use on epoch end
* PR cleanup
* address failing tests
* refactor for homogeneity
* fix merge conflict
* separate tests
* tests for early stopping bug regressions
* small fixes
* revert model checkpoint change
* typo fix
* fix tests
* update train loop
* cannot pass an int as default_save_path
* refactor log message
* fix test case
* appease the linter
* fix some doctests
* move config to callback
* fixes from rebase
* chlog
* docs
* reformat
* formatting
* fix
* fixes from rebase
* add new test for patience
* Update pytorch_lightning/callbacks/model_checkpoint.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/model_checkpoint.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update tests/callbacks/test_early_stopping.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* fix formatting
* remove enable_early_stop attribute
* fix test with new epoch indexing
* fix progress bar totals
* fix off-by-one error (see #2289): epoch starts at 0 now
* added missing imports
* fix hpc_save folderpath
* fix formatting
* fix tests
* small fixes from a rebase
* fix
* tmpdir
* wandb
* fix merge conflict
* add back evaluation after training
* test_resume_early_stopping_from_checkpoint TODO
* undo the horovod check
* update changelog
* remove a duplicate test from merge error
* try fix dp_resume test
* add the logger fix from master
* try remove default_root_dir
* try mocking numpy
* try import numpy in docs test
* fix wandb test
* pep 8 fix
* skip if no amp
* dont mock when doctesting
* install extra
* fix the resume ES test
* undo conf.py changes
* revert remove comet pickle from test
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update weights_loading.rst
* renamed flag
* revert the None check in logger experiment name/version
* add the old comments
* _experiment
* test chckpointing on DDP
* skip the ddp test on windows
* cloudpickle
* renamed flag
* parentheses for clarity
* apply suggestion max epochs
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jeremy Jordan <jtjordan@ncsu.edu>
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
Co-authored-by: Jeremy Jordan <13970565+jeremyjordan@users.noreply.github.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
* Revert "deprecated: epoch indexing from 1 (#2206)"
This reverts commit f94b919b
* chlog
* grad index
* Apply suggestions from code review
* tests
* fix
* test
* drop train_percent_check
* chlog
* deprecated
* tests
* Apply suggestions from code review
* tests
* hydra support
* tests
* hydra support
* tests
* typo
* Update test_dataloaders.py
* docs
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* fixed percent check for val/test
* overfit_pct now uses train loaders for val and test and does not shuffle (see the usage sketch after this block)
* add on_fit_start and on_fit_end hooks
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
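A usage sketch for the overfit behaviour described above, assuming the flag is `overfit_pct` in this release line (later renamed `overfit_batches`); the percentage is arbitrary:

```python
import pytorch_lightning as pl

# Use 1% of the training data; per the change above, val and test reuse the
# same (unshuffled) training subset so the model can be checked for memorization.
trainer = pl.Trainer(overfit_pct=0.01, max_epochs=50)
```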
* Patch for issue 1815, which will allow EarlyStopping to work on precision=16
* Added whitespace to the end of the line so CI/CD can rerun; there was no reason for the latest macOS test to have been cancelled.
* Format.
* Add an additional attribute to ModelCheckpoint to keep track of the best model's path
Currently, only the best metric value is directly tracked. This new attribute will help in use cases where the trained model needs to be used or tracked right after training (see the usage sketch after this block).
* Add small description and usage example to docs
* Fix PEP8 issues
* Fix doctest example
* Fix expected output in doctest
* Apply suggestions from code review
* Show example as code block instead of doctest
* Apply suggestions from code review
* Update CHANGELOG.md
* Rename `ModelCheckpoint.best` to `ModelCheckpoint.best_model_score`
Also rename `ModelCheckpoint.best_model` (added in this PR) to `ModelCheckpoint.best_model_path`, for consistency, and `kth_best_model` to `kth_best_model_path`.
* Update pytorch_lightning/trainer/training_io.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Add warning when loading checkpoint from an old version
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
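A usage sketch for the attributes discussed above (`best_model_score`, `best_model_path`, `kth_best_model_path`); the monitored metric name and checkpoint directory are assumptions:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(filepath="checkpoints/{epoch}", monitor="val_loss", save_top_k=3)
trainer = pl.Trainer(checkpoint_callback=checkpoint_cb, max_epochs=5)

# After trainer.fit(model):
#   checkpoint_cb.best_model_score    -> best value of the monitored metric
#   checkpoint_cb.best_model_path     -> path of the checkpoint that achieved it
#   checkpoint_cb.kth_best_model_path -> path of the k-th best checkpoint kept by save_top_k
```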
* fixed undesired behaviour due to dict.fromkeys (see the illustration after this block)
* a test for log length consistency
* runtime-warn if no schedulers are configured
* chlog
* move
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
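For context on the `dict.fromkeys` fix above: `dict.fromkeys` assigns the same value object to every key, so a mutable default is silently shared between keys. A minimal illustration (not the code from the PR):

```python
# Every key points at the *same* list object:
shared = dict.fromkeys(["train", "val"], [])
shared["train"].append(1.0)
print(shared["val"])    # [1.0]  <- unintended sharing

# Safe alternative: build a fresh container per key.
separate = {k: [] for k in ["train", "val"]}
separate["train"].append(1.0)
print(separate["val"])  # []
```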
* Fixes PyTorchLightning/pytorch-lightning#490
`EarlyStopping` should check the metric of interest `on_validation_end` rather than `on_epoch_end`.
In a normal scenario, this does not cause a problem, but in combination with `check_val_every_n_epoch>1` in the `Trainer` it results in a warning or a `RuntimeError`, depending on `strict` (see the sketch after this block).
* Highlighted that ES callback runs on val epochs in docstring
* Updated EarlyStopping in rst doc
* Update early_stopping.py
* Update early_stopping.rst
* Update early_stopping.rst
* Update early_stopping.rst
* Update early_stopping.rst
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update docs/source/early_stopping.rst
* fix doctest indentation warning
* Train loop calls early_stop.on_validation_end
* chlog
Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
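A sketch of the situation described above: with `check_val_every_n_epoch > 1` the monitored metric only exists on epochs where validation actually ran, so the early-stopping check has to happen on validation end. The module, metric name, and the way the callback is wired into the `Trainer` are assumptions for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

def _rand_loader():
    return DataLoader(TensorDataset(torch.randn(32, 8), torch.randn(32, 1)), batch_size=8)

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": torch.nn.functional.mse_loss(self.layer(x), y)}

    def validation_step(self, batch, batch_idx):
        x, y = batch
        return {"val_loss": torch.nn.functional.mse_loss(self.layer(x), y)}

    def validation_epoch_end(self, outputs):
        # Expose the metric that EarlyStopping monitors.
        return {"val_loss": torch.stack([o["val_loss"] for o in outputs]).mean()}

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

    def train_dataloader(self):
        return _rand_loader()

    def val_dataloader(self):
        return _rand_loader()

# Validation (and therefore the early-stopping check) only runs every 3rd epoch.
early_stop = EarlyStopping(monitor="val_loss", patience=2, strict=True)
trainer = pl.Trainer(
    early_stop_callback=early_stop,   # assumption: how the callback was wired in this release line
    check_val_every_n_epoch=3,
    max_epochs=30,
)
trainer.fit(TinyModel())
```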
* saves model every epoch (`save_last`; see the usage sketch at the end of this section)
* implement test for save_last
* Update CHANGELOG.md
* changes test description
Co-authored-by: Jeremy Jordan <13970565+jeremyjordan@users.noreply.github.com>
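A usage sketch for the `save_last` flag introduced above, assuming it saves an additional checkpoint of the most recent epoch alongside the monitored-best ones; the path and metric name are placeholders:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# Keep the best checkpoint by val_loss and also always keep the latest epoch.
checkpoint_cb = ModelCheckpoint(
    filepath="checkpoints/{epoch}",
    monitor="val_loss",   # assumed metric name
    save_top_k=1,
    save_last=True,
)
```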