deepsource-autofix[bot]
c3065c5ce9
Iterate dictionary directly ( #8155 )
...
Co-authored-by: deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
2021-06-27 21:55:16 +02:00
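For readers unfamiliar with this DeepSource rule, here is a minimal sketch of the pattern the autofix enforces (the `metrics` dict is invented for illustration, not taken from the actual diff in #8155):

```python
metrics = {"loss": 0.25, "acc": 0.91}

# Flagged pattern: .keys() is redundant when iterating a dict.
for name in metrics.keys():
    print(name)

# Autofixed pattern: iterating the dict yields its keys directly.
for name in metrics:
    print(name)
```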
Adrian Wälchli
55a90af7fc
`pytorch_lightning.loops` file structure: group by dataloader, epoch, and batch loop ( #8077 )
2021-06-24 23:40:46 +02:00
Carlos Mocholí
4d9b72b8a9
Nuke RPC ( #8101 )
2021-06-23 18:31:13 +00:00
Adrian Wälchli
fe48203111
restrict public interface of training loop ( #8024 )
...
* active optimizers
* check checkpoint callback
* epoch loop properties
* epoch loop methods
* training_batch_loop
* changelog
* update chlog
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* unused imports
* yapf
* backward
* fix missing string reference
* is_last_batch remains public
* remove dead code
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
2021-06-23 10:25:29 +00:00
Adrian Wälchli
a45ab00b30
Loop Refactor 5/N - Prediction Loop ( #7700 )
...
* integrate d180bb2
* Minor changes
* Refactor loop logic into logger connector
* Refactor test
* Tighter fx validator
* Add back split idx
* Typing
* update
* Conflict
* Fix tests
* resolve grad_norm
* update
* move to train loop
* Bye grad_norm_dict parameter
* Fix sync test
* update
* Fix bug when validation is run mid epoch
* fix grad_norm_dict test
* Fix fx_validator test
* fix grad_norm_dict test
* Fix order bug
* Detach tensors in test
* resolve some tests
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* remove pdb
* resolve flake8
* Update test
* more tests
* Revert last thomas' changes
* resolve 1 test
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Refactor context restoration
* integrate latest changes from logger connector refactor poc
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* integrate latest changes from logger connector refactor poc
* Minor changes
* update changelog
* Remove unused argument
* Update CHANGELOG
* Copy call_hook changes
* Docs
* Fix ref
* move to cpu
* Bad merge
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* remove pdb
* remove pdb
* Refactor to
* Avoid partial
* trigger ci
* Bad merge
* integrate latest logger connector changes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* remove grad norm dicts list
* Diff
* properties first
* Bad merge
* Reuse metrics_to_scalars
* Use active loop
* Move to device
* resolve test
* integrate latest changes from logger connector poc
* define union
* define union
* Update logger connector
* Update result
* Update imports
* Update after rename
* Refactor reduce_fx and op
* Fix test after rename
* mypy
* integrate latest logger connector refactor poc changes
* Fix test
* Refactor test
* Deprecate `self.log(sync_dist_op)` in favor of `self.log(reduce_fx)`
* Undo field
* add redundant return
* rename files and classes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* rename
* Replace code
* Fix names and imports
* Remove metric_attribute
* imports
* loop hygiene
* yapf on loops
* protected new loop trigger
* rename NEW LOOP guard
* integrate latest logger connector changes
* integrate latest logger connector changes (eval loop)
* resolve todo dataloading reset
* re-add notebooks
* add missing init
* bad merge
* remove NEW_LOOP guard
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* flake8
* exclude coverage
* integrate #7917 , remove teardown from training loop
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update "accumulated_batches_reached" condition
based on whether the iter count was updated or not
* remove public loop properties
* make skip backward protected again
* typing base loop
* typing fit loop
* typing training_batch_loop
* typing evaluation loop
* typing prediction loop
* typing training epoch loop
* dataloader_loop
* evaluation_dataloader_loop
* prediction_dataloader_loop
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* integrate train loop changes from master
* integrate eval loop changes from master
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix tpipes moving model to cpu and leaving it there.
* don't reset fit loop
* fix test iteration count <-> batch_idx reset
* replace torch.Tensor -> Tensor
* fix attribute error to block_ddp_sync_behaviour
* fix flake8 and yapf conflict
* remove redundant override
* add classes
Co-authored-by: Justus Schock <justus.schock@rwth-aachen.de>
Co-authored-by: Justus Schock <justus.schock@posteo.de>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* trainer changes
* connect
* clean up
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update test renaming
* rename evaluation loop to evaluation epoch loop
* minor docstring improvements
* update chlog
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* try ci fix
* update code owners for pl/loops
* update mock path
* re-order
* simplify dataloader reset
* simplify get_dataloaders()
* save predictions on_run_end()
* improve skip condition re-routing
* re-order
* remove unused type import
* check which assert is failing
* pig
* hobbit
* teardown for evaluation
* Revert "hobbit"
This reverts commit e81b0dbee3.
* Revert "pig"
This reverts commit 33d89e0720.
* Revert "check which assert is failing"
This reverts commit b7483b425c.
* free memory in fit loop teardown
* update docstring
* period
* remove dead code
* else carlos
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/loops/dataloader/evaluation_dataloader_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* update chlog
* unused imp
* move default construction in run_evaluation
* add something for lawyer to read
* switch typehint for eval loop trainer property
* add missing imports
* remove a todo that needs more discussion
* combine _get_num_dataloaders with the property
* Update pytorch_lightning/loops/dataloader/dataloader_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* black + yapf
* avoid coverage on old unused eval loop
* empty space in docstring
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
* resolve todo for args forwarding
* weakproxy trainer
* fix check for num dataloaders kwargs
* clean up num prediction dataloaders property
* free memory
* rm notebooks folder
* rm old file
* revert changes to old eval loop
* bad merge
* undo teardown
* setup signature
* remove file for notes
* free memory
* chlog
* Revert "weekproxy trainer"
This reverts commit d4e6969170.
* connect trainer
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* clean up max batches and dataloaders
* max batches handling
* no grad handling
* unused argument
* protected attrs
* unused imports
* undo unintentional rename
* consistent naming
* capitalization in docstring
* list all args
* Update pytorch_lightning/loops/prediction_epoch_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/loops/prediction_epoch_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/loops/dataloader/prediction_dataloader_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/loops/dataloader/prediction_dataloader_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/loops/prediction_epoch_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Co-authored-by: tchaton <thomas@grid.ai>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Justus Schock <justus.schock@posteo.de>
Co-authored-by: Justus Schock <justus.schock@rwth-aachen.de>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
2021-06-23 10:17:04 +01:00
Théo Dumont
5d44e61efc
Add forgotten colon ( #8076 )
...
Very small typo correction: add forgotten `:` in finetuning callbacks docs.
2021-06-22 10:37:39 +00:00
Adrian Wälchli
4becd1cf31
rename old `Trainer.train_loop` -> `Trainer.fit_loop` ( #8025 )
2021-06-22 11:49:32 +02:00
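A hedged sketch of what this rename means for code touching the trainer's internals; presumably `train_loop` stayed around as a deprecated alias for a transition period:

```python
from pytorch_lightning import Trainer

trainer = Trainer()

# old, pre-#8025 spelling:
# loop = trainer.train_loop

# new spelling after the rename:
loop = trainer.fit_loop
```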
Kaushik B
2303f9ced8
Fix(Early Stopping): move best score to device ( #7959 )
2021-06-21 15:41:41 +05:30
Kaushik B
f447839d16
Add `warning_cache.deprecation` and set warning stacklevel [1/2] ( #8005 )
...
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
2021-06-18 11:50:24 +00:00
Austin Basye
906de2a7fa
[feat] Named Parameter Groups in `LearningRateMonitor` ( #7987 )
...
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-06-17 03:13:54 +02:00
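A sketch of how named parameter groups might be used with this feature; the `LitModel` module and its submodules are invented for illustration. It relies on PyTorch preserving unknown keys (here `"name"`) on optimizer param groups:

```python
import torch
from pytorch_lightning import LightningModule


class LitModel(LightningModule):
    def __init__(self):
        super().__init__()
        # stand-ins for real submodules
        self.backbone = torch.nn.Linear(32, 16)
        self.head = torch.nn.Linear(16, 2)

    def configure_optimizers(self):
        return torch.optim.SGD(
            [
                # the "name" entries let LearningRateMonitor label each
                # group's learning rate instead of using positional names
                {"params": self.backbone.parameters(), "lr": 1e-4, "name": "backbone"},
                {"params": self.head.parameters(), "lr": 1e-2, "name": "head"},
            ],
            lr=1e-3,
        )
```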
Dan Dale
3a0ed02bd4
Properly handle parent modules w/ parameters in `BaseFinetuning` callback ( #7931 )
...
Co-authored-by: Daniel Dale <dan@distributedinsight.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-06-14 16:01:07 +00:00
Burhanuddin Rangwala
8b73869369
Deprecate the default `EarlyStopping` callback monitor value ( #7907 )
...
* removed monitor default value and added deprecation message
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* format change
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* requested changes
* added test
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* format changes
* typehint change
* Update CHANGELOG.md
* requested changes
* regex
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-06-10 17:33:39 -07:00
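With the default monitor value deprecated, callers are expected to name the metric explicitly; a minimal sketch (`"val_loss"` is an example metric name):

```python
from pytorch_lightning.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor="val_loss", patience=3, mode="min")
```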
Carlos Mocholí
580a3b5e32
Remove dead code ( #7910 )
2021-06-10 11:38:33 +01:00
Carlos Mocholí
5593b6f772
Merge pull request #7872 from PyTorchLightning/refactor/logger-poc-changes
...
Random fixes for logger connector PoC
2021-06-08 09:04:16 -04:00
thomas chaton
d1becce4c1
[bugfix] Resolve LearningRateMonitor + BackboneFinetuning ( #7835 )
...
* add test + resolve bug
* update changelog
* resolve bug
* resolve bug
* Update pytorch_lightning/callbacks/lr_monitor.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/callbacks/lr_monitor.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* update on comments
* resolve comments
* update
* Update tests/callbacks/test_lr_monitor.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/callbacks/lr_monitor.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-06-07 10:17:11 +00:00
Adrian Wälchli
7e6010fc93
fix info message when max training time reached ( #7780 )
...
* call time_elapsed
* elapsed formatting
* format
* update test
* changelog
2021-05-31 14:50:16 +02:00
Boris Dayma
9097347ea8
feat(wandb): log models as artifacts ( #6231 )
...
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-05-27 20:15:02 +02:00
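A sketch of the feature as described by the title, assuming the flag landed as `log_model` on `WandbLogger`:

```python
from pytorch_lightning.loggers import WandbLogger

# uploads checkpoints saved by ModelCheckpoint as W&B artifacts
wandb_logger = WandbLogger(project="my-project", log_model=True)
```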
Carlos Mocholí
d26953c8bc
Add `ModelPruning(prune_on_train_epoch_end)` to choose when to apply pruning ( #7704 )
...
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-05-26 00:57:56 +02:00
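A sketch of the new flag; `"l1_unstructured"` and `amount` follow the callback's existing pruning-function interface:

```python
from pytorch_lightning.callbacks import ModelPruning

# prune_on_train_epoch_end=False defers pruning from the end of the
# training epoch to the end of the validation epoch
pruning = ModelPruning("l1_unstructured", amount=0.5, prune_on_train_epoch_end=False)
```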
Kaushik B
e7057d5898
Add `should_rank_save_checkpoint` property to Training Plugins ( #7684 )
...
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2021-05-25 23:02:05 +05:30
Carlos Mocholí
a1c40f3207
Remove on epoch guard from the should stop validation check ( #7701 )
...
* Remove on epoch guard from the should stop validation check
* Formatting
2021-05-25 15:59:42 +01:00
Gyeongjae Choi
a54bc5dba3
Fix progress bar print error when called before training ( #7674 )
...
* Check progress bar existence before printing
* Add tests for predict_progress_bar
* Add tests for progress_bar printing without training
* Update changelog
2021-05-24 17:33:28 +02:00
ananthsub
94ef17ce77
Update model_checkpoint.py ( #7625 )
2021-05-20 23:16:18 +02:00
Yifu Wang
ed271905cf
Clear predict_progress_bar in ProgressBar.__getstate__ ( #7608 )
...
Co-authored-by: Yifu Wang <yifuwang@2012@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
2021-05-20 01:38:49 +00:00
ananthsub
8266b141ba
[feat] Support time-based checkpointing during training ( #7515 )
...
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-05-19 22:14:13 +00:00
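A sketch of time-based checkpointing, assuming the parameter landed as `train_time_interval`:

```python
from datetime import timedelta

from pytorch_lightning.callbacks import ModelCheckpoint

# save a checkpoint at most once every 30 minutes of training wall time
checkpoint = ModelCheckpoint(train_time_interval=timedelta(minutes=30))
```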
Carlos Mocholí
dbea5bb710
Add typing to `ModelPruning` callback ( #7529 )
...
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-05-19 22:01:42 +02:00
Adrian Wälchli
502adbced3
refactor optimizer loop logic for manual and automatic optimization ( #7526 )
...
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
2021-05-17 14:42:01 +02:00
Kaushik B
bf46730d92
Support TPU Pod Training (n/n) ( #7296 )
2021-05-17 11:33:44 +00:00
Adrian Wälchli
ad9118f04a
remove trainer hidden state | sanity refactor [1 / n] ( #7437 )
...
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2021-05-11 11:09:08 +02:00
ananthsub
651f93a69f
Add documentation for ways to access all batch outputs for on_train_epoch_end hook ( #7389 )
...
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-05-05 22:18:45 +00:00
ananthsub
7b45bcfedb
[2/2] Remove outputs from evaluation epoch end hooks ( #7338 )
...
* Remove outputs from on_train_epoch_end
* iterate
* Update callback_hook.py
* update
* early stop?
* fix
* Update pytorch_lightning/trainer/training_loop.py
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
* Update trainer.py
* update
* Update training_loop.py
* early stop?
* fix
* Remove outputs from evaluation epoch end hooks
* update
* Update test_remove_1-5.py
* fix lints
* Update base.py
* rm-outputs
* Update evaluation_loop.py
* try-save-more-memory
* Update trainer.py
* Update trainer.py
* cache-at-start
* Update evaluation_loop.py
* Update training_loop.py
* Update training_loop.py
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
2021-05-05 19:50:58 +00:00
ananthsub
6104a6316a
[1/2] Deprecate `outputs` in `on_train_epoch_end` hooks ( #7339 )
...
* Remove outputs from on_train_epoch_end
* iterate
* Update callback_hook.py
* update
* Update training_loop.py
* Update test_training_loop.py
* early stop?
* fix
* update tests
* Update test_hooks.py
* Update pytorch_lightning/trainer/callback_hook.py
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
* Update pytorch_lightning/trainer/training_loop.py
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
* Update trainer.py
* Update pytorch_lightning/trainer/trainer.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-05-05 17:18:16 +02:00
Carlos Mocholí
374ff750f5
Pass `current_epoch`/`global_step` as monitor candidates [1/2] ( #7344 )
...
* Pass `current_epoch`/`global_step` as monitor candidates
* Formatting
* Fix deprecated test
* Update CHANGELOG
2021-05-04 16:05:40 -04:00
Carlos Mocholí
8c0ea92af2
`TrainerState` refactor [5/5] ( #7173 )
...
* `TrainerState` refactor
* flake8
* Update finished check
* Test cleanup
* Fix tests
* Fixes
* Reorder
* flake8
* Update CHANGELOG
* Better docs
* Better docs
* Remove default
* Update tests
* Bad merge
2021-05-04 12:50:56 +02:00
Adrian Wälchli
bf1394a472
improve early stopping verbose logging ( #6811 )
2021-05-03 20:20:48 +00:00
ananthsub
14c552bb92
[bugfix] Fix dataloading for iterable datasets and limit_train_batches ( #7306 )
...
* bugfix-dataloading
* rm-logs
* Update CHANGELOG.md
* Update test_dataloaders.py
* Update test_dataloaders.py
* Update training_loop.py
* Update test_dataloaders.py
* Update CHANGELOG.md
* Update CHANGELOG.md
* Update test_dataloaders.py
* Update training_loop.py
* Update training_loop.py
* comments
* address comments
* more tests
* Update progress.py
* Update test_dataloaders.py
* Update test_dataloaders.py
* Update training_loop.py
* Update training_loop.py
* test ckpt fix?
* update again
2021-05-03 19:50:26 +01:00
thomas chaton
80b9ca0e38
[bugfix] Add reloading support using BaseFinetuning ( #7253 )
...
* update
* wip
* update
* update
* update
* update
* resolve bug
* update on comments
* update on comments
* update
* update
* formatting
* add comments
* update on comments
* update
* Update pytorch_lightning/callbacks/base.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* update
* update
* Typing and minor changes
* Refactor
* Fix deprecated test
* Broken commit
* Fix broken commit
* flake8
* Update CHANGELOG
* update on comments
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
2021-04-30 11:14:43 -04:00
George
e272bea4dc
Updated `ModelCheckpoint` documentation ( #6873 )
...
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-04-28 23:56:58 +00:00
ananthsub
947d1cb757
[1/2] Add support for early stopping during training epoch end ( #6944 )
...
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Co-authored-by: jirka <jirka.borovec@seznam.cz>
2021-04-28 15:18:56 +02:00
Ali Benkassou
cbc6e30b5d
Replace 'step' with 'global_step' ( #7244 )
2021-04-28 06:44:11 +00:00
thomas chaton
e76ebd640e
[feat] Add BasePredictionWriter 3/3 ( #7127 )
...
* wip
* update
* update
* update
* update
* update
* typo
* update on comments
* update
* update
* update
* update
* update changelog
* update
* Fix merge
* Fix merge
* move code
* resolve test
* add extra test
* add an extra test
* update on comments
* add typing
* resolve flake8
* Refactor and Docs
* Fix tests
* Fix tests
* Fix tests
* Duplicate
* Fix tests
* resolve bug
* update
* update on comments
* Update pytorch_lightning/utilities/imports.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/utilities/device_parser.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* update
* update
* update
* update on comments
* resolve flake8
* update test
* Apply suggestions from code review
* update on comments
* Update pytorch_lightning/callbacks/prediction_writer.py
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
* Update pytorch_lightning/callbacks/prediction_writer.py
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
* Update pytorch_lightning/callbacks/prediction_writer.py
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
* update on comments
* update
* update on comment
* Apply suggestions from code review
* update
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-04-27 20:23:55 +00:00
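A minimal writer built on the new callback; the output-directory handling is invented for illustration, and the hook signature follows the `write_interval="batch"` variant:

```python
import os

import torch
from pytorch_lightning.callbacks import BasePredictionWriter


class OnBatchWriter(BasePredictionWriter):
    """Persists each batch of predictions as soon as it is produced."""

    def __init__(self, output_dir: str):
        super().__init__(write_interval="batch")
        self.output_dir = output_dir

    def write_on_batch_end(
        self, trainer, pl_module, prediction, batch_indices, batch, batch_idx, dataloader_idx
    ):
        torch.save(prediction, os.path.join(self.output_dir, f"{dataloader_idx}_{batch_idx}.pt"))
```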
ananthsub
dd5ec75e48
Deprecate save_function from model checkpoint callback ( #7201 )
...
* Update model_checkpoint.py
* Update CHANGELOG.md
* fix-tests
* deprecate not remove
* Update model_checkpoint.py
* Update test_remove_1-5.py
2021-04-26 17:55:26 +01:00
Carlos Mocholí
33066f8fd9
Add `on_predict_{batch,epoch}_{start,end}` and `Callback.on_predict_{start,end}` ( #7141 )
...
* Update hooks typing and predict hooks
* Update CHANGELOG
* Progress
* Progress
* Add back `on_predict_{start,end}`
* Typing and fix
* Update tests/trainer/logging_/test_logger_connector.py
* Update tests/callbacks/test_lambda_function.py
2021-04-22 10:05:28 -04:00
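A sketch of two of the new hooks on a plain `Callback`; the full set added here also covers the batch- and epoch-level start/end variants:

```python
from pytorch_lightning.callbacks import Callback


class PredictLogger(Callback):
    def on_predict_start(self, trainer, pl_module):
        print("prediction starting")

    def on_predict_end(self, trainer, pl_module):
        print("prediction finished")
```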
ananthsub
2f84459d26
Broadcast dirpath for tighter consistency in model checkpoint callback ( #6978 )
...
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
2021-04-21 10:20:27 -07:00
Adrian Wälchli
d12c6cf2b3
more early stopping options (convergence and divergence threshold) ( #6868 )
...
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-04-19 16:49:52 +02:00
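A sketch of the two new options: `stopping_threshold` fires once the metric is at least this good, `divergence_threshold` once it is at least this bad (the numeric values are arbitrary):

```python
from pytorch_lightning.callbacks import EarlyStopping

early_stopping = EarlyStopping(
    monitor="val_loss",
    stopping_threshold=0.05,    # convergence: stop when val_loss <= 0.05
    divergence_threshold=10.0,  # divergence: stop when val_loss >= 10.0
)
```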
Adrian Wälchli
67d21609c9
Add Trainer max_time argument + Callback ( #6823 )
...
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Akihiro Nitta <nitta@akihironitta.com>
Co-authored-by: Akihiro Nitta <nitta@akihironitta.com>
2021-04-16 13:38:57 +02:00
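A sketch of the new argument; both a `"DD:HH:MM:SS"` string and a `timedelta` are meant to be accepted:

```python
from datetime import timedelta

from pytorch_lightning import Trainer

trainer = Trainer(max_time="00:12:00:00")        # 12 hours, as a string
trainer = Trainer(max_time=timedelta(hours=12))  # equivalent timedelta
```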
Ethan Harris
f645df5e9a
Add typings for evaluation_loop.py and remove some dead code ( #7015 )
2021-04-15 07:36:04 +00:00
Carlos Mocholí
15926b462c
Add SWA warning if not running every epoch ( #6987 )
...
* Add SWA warning if not running every epoch
* Typo
2021-04-13 18:34:40 +02:00
ananthsub
2e53fd3332
[fix] [easy] Update Model Checkpoint callback overrides to use base Callback signature ( #6908 )
...
* Update model_checkpoint.py
2021-04-09 13:24:59 +02:00
scart97
eb15abcd82
Fix finetuning so complex models correctly unfreeze. ( #6880 )
...
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
2021-04-08 12:59:06 +05:30
Michael Baumgartner
6dc1078822
Enforce an epoch scheduler interval when using SWA ( #6588 )
...
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
2021-04-06 02:57:33 +00:00
Sadiq Jaffer
7f91c5ebbc
Fix: `unfreeze_and_add_param_group` expects `modules` rather than `module` ( #6822 )
2021-04-06 01:50:42 +02:00
Karthik Prasad
c3da7f50bb
Sanitize `None` params during pruning ( #6836 )
...
* sanitize none params during pruning
* amend
2021-04-06 01:47:59 +02:00
Yuan-Hang Zhang
1bd5f36a5b
Fix validation progress counter with check_val_every_n_epoch > 1 ( #5952 )
...
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-04-02 17:40:41 +09:00
Kaushik B
f79a13e495
[Model Parallel] Add configure sharded model hook ( #6679 )
...
* Add base hook for model parallel
* fix callback signature
* Simplify hook
* Add hook logic
* add tests
* add property setter
* add logic for being called once
* Update changelog
* Fix
* fix return type
* fix lambda callback test
* Fix tests
* Apply code suggestions
* add logic for setup_optimizers_predispatch
* add common dummy model
* Swap call order
* Remove test that isn't needed anymore
* Update tests
* Add a bit more doc
* Few code review fixes
* Update pytorch_lightning/accelerators/accelerator.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Change hook name
* Fix test
* Test setup hook, refactor names
* Swap call order of callbacks and model initialization
* Change name of context manager
Co-authored-by: SeanNaren <sean@grid.ai>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-03-29 14:50:51 -06:00
Carlos Mocholí
f0c5479de9
Remove legacy `Result` parameters ( #6016 )
2021-03-28 11:55:08 +02:00
Carlos Mocholí
bc613611e2
Do not add return dict items to callback_metrics ( #6682 )
2021-03-26 14:05:20 +01:00
Jirka Borovec
217c12a4e7
Simplify deprecations ( #6620 )
...
* use external deprecate
* simplify
* simplify
* simplify
* flake8
* .
* others
* .
2021-03-25 15:26:38 +01:00
Rohit Gupta
9be092dbdb
Add on_epoch_start to run at the beginning of every loop irrespective of train/val/test ( #6498 )
...
* update docs
* add hook and update docs
* update tests
* chlog
* Update CHANGELOG.md
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* chlog
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-03-25 14:20:49 +01:00
Jirka Borovec
1fae10a2dc
refactoring setup ( #6590 )
...
* refactoring setup
* .
* docs
* flake8
2021-03-22 08:39:19 -04:00
Kaushik B
b190403e28
Add outputs param for `on_val/test_epoch_end` hooks ( #6120 )
...
* add outputs param for on_val/test_epoch_end hooks
* update changelog
* fix warning message
* add custom call hook
* cache logged metrics
* add args to docstrings
* use warning cache
* add utility method for param in sig check
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* update docstring
* add test for eval epoch end hook
* add types and replace model ref
* add deprecation test
* fix test fx name
* add model hooks warning
* add old signature model to tests
* add clear warning cache
* support args param
* update tests
* add tests for model hooks
* code suggestions
* add signature utils
* fix pep8 issues
* fix pep8 issues
* fix outputs issue
* fix tests
* code fixes
* fix validate test
* test
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-03-16 12:15:16 -04:00
thomas chaton
0544efd453
[bug] Update broadcast + reduce decision [ModelCheckpoint] ( #6410 )
...
* resolve bug
* update
* update changelog
* update PR
* Update pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* add todo
* resolve issues
* resolve flake8
* update
* add coverage for reduce
* wip
* restore back to broadcast
* remove test.py
* resolve flake8
* update
* check world size
* resolve test
* update
* use pytorch version when defined
* update on comments
* update on comments
* flake8
* resolve bugs
* Update CHANGELOG.md
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* update
* update
* update
* update
* remove test
* update
* resolve flake8
* update
* update
* update
* proxy
* update
* update
* resolve typo
* prune
* update parallel
* update
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-03-14 17:14:27 +00:00
ananthsub
cea170e011
[feat] Support iteration-based checkpointing in model checkpoint callback ( #6146 )
...
* Update model_checkpoint.py
* add tests
* Update model_checkpoint.py
* Update test_model_checkpoint.py
* fix tests
* every_n_batches
* Update test_model_checkpoint.py
* defaults
* rm tests
* Update model_checkpoint.py
* Update test_model_checkpoint.py
* Prune deprecated metrics for 1.3 (#6161 )
* prune deprecated metrics for 1.3
* isort / yapf
* Update model_checkpoint.py
* add tests
* defaults
* Update CHANGELOG.md
* pre-commit
* Update model_checkpoint.py
* update defaults
* Update test_remove_1-5.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* fix tests
* Update test_model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update test_model_checkpoint.py
* ckpt-callback
* Update test_model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* validation-end
* Update model_checkpoint.py
* Update test_model_checkpoint.py
* Update test_model_checkpoint.py
* Update test_model_checkpoint.py
* Update test_model_checkpoint.py
* clarify-names
- Make names explicit as to which hooks they apply to
- Use step instead of batch for consistency with global step
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* Update model_checkpoint.py
* mutual-exclusive
Make every_n_train_steps and every_n_val_epochs mutually exclusive
* fix-default-0
* Update CHANGELOG.md
* formatting
* make-private
make attributes private to the class
* rebase
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-03-11 14:44:29 -08:00
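A sketch of iteration-based checkpointing using the `every_n_train_steps` name that the "clarify-names" commit above settled on:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# checkpoint every 1000 optimizer steps instead of once per epoch
checkpoint = ModelCheckpoint(every_n_train_steps=1000)
```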
Max Frei
2ecda5df52
Allow user to disable the automatic formatting of checkpoint file names. ( #6277 )
...
* cleaning SWA (#6259 )
* rename
* if
* test
* chlog
* Remove opt from manual_backward in docs (#6267 )
* switch agents pool (#6270 )
* Allow user to disable the automatic formatting of checkpoint file names.
* Added changelog entry.
* Made flake8 happy.
* Applied review suggestion: quotes for special characters in docstring
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Fixed example in docstring.
* Fixed syntax error in docstring.
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Akihiro Nitta <nitta@akihironitta.com>
Co-authored-by: thomas chaton <thomas@grid.ai>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-03-11 16:40:23 +08:00
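A sketch of disabling the automatic formatting, assuming the flag landed as `auto_insert_metric_name`:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# with the flag on (default), "{val_loss:.2f}" renders as "val_loss=0.23";
# with it off, only the value is substituted: "epoch3-loss0.23.ckpt"
checkpoint = ModelCheckpoint(
    filename="epoch{epoch}-loss{val_loss:.2f}",
    auto_insert_metric_name=False,
)
```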
Elia Cereda
f4cc7451a9
Add Trainer.validate(…) method to run one validation epoch ( #4948 )
...
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-03-11 03:46:37 +01:00
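A sketch of the new entry point; `model` and `val_loader` are placeholders for a `LightningModule` and a `DataLoader`:

```python
from pytorch_lightning import Trainer

trainer = Trainer()
# runs a single validation epoch and returns the logged metrics
results = trainer.validate(model, val_dataloaders=val_loader)
```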
Carlos Mocholí
efd272a3ca
Pass {fit,validate,test,predict} to setup() and teardown() ( #6386 )
2021-03-08 15:27:07 +01:00
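A sketch of the hook signatures after this change; the stage strings match the four entry points in the title:

```python
from pytorch_lightning import LightningModule


class LitModel(LightningModule):
    def setup(self, stage=None):
        # stage is one of "fit", "validate", "test" or "predict"
        if stage == "fit":
            ...  # build training-only state here

    def teardown(self, stage=None):
        ...  # release per-stage resources
```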
Carlos Mocholí
826375effe
Fix ModelCheckpoint(monitor=None, save_last=True) not saving checkpoints ( #6136 )
...
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
2021-03-08 00:59:14 +01:00
Elia Cereda
d0596fac94
Refactor RunningStage usage in advance of implementing Trainer.validate() ( #4945 )
...
* Update code
Co-authored-by: EliaCereda
* More property updates
* Move properties. Introduce trainer._fitting
* Use trainer.fitting
* Fix reset dataloaders
* Unused code
* RunningStage.SANITY_CHECKING
* Use setters
* Fix bugs
* Fix bugs
* TrainerState.{FITTING,VALIDATING,TESTING,PREDICTING,TUNING}
* Fix bugs
* Fix bugs
* Fix tests
* Update CHANGELOG. Add deprecation warning. Fix tests
* Unused imports
* Optional trainer
* More deprecation. More refactoring
* Correct version
* Use properties
* Address comments
* flake8
* Missed renamings
* Typo
* is -> ==
It is recommended to use `is` for Enums since they are singletons; however, since LightningEnum subclasses str, it's not a good idea in case a user sets the state/stage with a str
* Also for tests
* Typo
* Address @tchaton's comments
* PEP8
* Correct property
* Update CHANGELOG
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update pytorch_lightning/trainer/trainer.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Remove called sanity check
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-03-06 12:40:19 +00:00
Jirka Borovec
e84854264f
CI: fix examples - patch download MNIST ( #6357 )
...
* patch download
* CI
* isort
* extra
2021-03-05 16:50:21 +00:00
Jirka Borovec
6166f46281
drop unused variable in API ( #6308 )
...
* drop unused pl model in ckpt
* irrelevant
* on_evaluation_batch_start
* evaluation_epoch_end
* attach_datamodule
2021-03-04 10:26:54 +01:00
Carlos Mocholí
4a8422c2dc
Fix ModelPruning(make_pruning_permanent=True) buffers getting removed when saved during training ( #6073 )
...
Co-authored-by: chaton <thomas@grid.ai>
2021-03-03 13:29:58 +01:00
Adrian Wälchli
bc577ca792
fix duplicate console logging bug v2 ( #6275 )
...
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-03-02 15:17:55 +05:30
Joseph Turian
22985d2f43
Improved EarlyStopping.patience documentation ( #6278 )
...
* Improved early stopping documentation
* Changed to 120 column format
* doc
* doc
* doc
Co-authored-by: Jirka Borovec <jirka.borovec@seznam.cz>
2021-03-02 15:01:07 +05:30
Jirka Borovec
ed67490d93
cleaning SWA ( #6259 )
...
* rename
* if
* test
* chlog
2021-03-01 19:10:27 +01:00
Carlos Mocholí
3df02b880a
Add checkpoint parameter to on_save_checkpoint ( #6072 )
...
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
2021-02-25 21:18:19 +05:30
Sean Naren
dd2f5a0212
Fix for multiple callbacks ( #6197 )
...
* Fix for multiple callbacks
* Add CHANGELOG.md
* Remove old params
* Skip tests on windows using ddp
* Change name of the variable to not clash with should stop, which is separate
* Apply suggestions from code review
* Fix params
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-02-25 15:44:55 +00:00
Carlos Mocholí
8b475278dd
Prune deprecated EarlyStopping(mode='auto') ( #6167 )
...
Co-authored-by: Roger Shieh <sh.rog@protonmail.ch>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
2021-02-24 13:26:33 +00:00
Jirka Borovec
46617d9021
Prune deprecated checkpoint arguments ( #6162 )
...
* prune prefix
* prune mode=auto
* chlog
2021-02-24 06:58:53 -05:00
Alexander
423ecf995a
Feature/5275 clean progress bar print ( #5470 )
...
* Trainer.test should return only test metrics (#5214 )
* resolve bug
* merge tests
* Fix metric state reset (#5273 )
* Fix metric state reset
* Fix test
* Improve formatting
Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
* print() method added to ProgressBar
* printing alongside progress bar added to LightningModule.print()
* LightningModule.print() method documentation updated
* ProgressBarBase.print() stub added
* stub
* add progress bar tests
* fix isort
* Progress Callback fixes
* test_metric.py duplicate DummyList removed
* PEP and isort fixes
* CHANGELOG updated
* test_progress_bar_print win linesep fix
* test_progress_bar.py remove whitespaces
* Update CHANGELOG.md
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Tadej Svetina <tadej.svetina@gmail.com>
Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
Co-authored-by: Alexander Snorkin <Alexander.Snorkin@acronis.com>
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-02-22 09:40:18 +00:00
Carlos Mocholí
57215b79a0
Avoid printing ModelCheckpoint log with monitor=None and verbose=True ( #6109 )
2021-02-22 08:51:13 +00:00
edenlightning
3449e2d79f
Docs for Pruning, Quantization, and SWA ( #6041 )
...
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
2021-02-18 13:51:51 +00:00
Jirka Borovec
049006a59c
fix/test quant ( #6040 )
...
* fix/test quant
* ...
* ---
2021-02-18 10:47:29 +00:00
Carlos Mocholí
38ad9e0764
[ModelPruning] Add missing attribute with use_global_unstructured=False and verbose ( #6045 )
2021-02-18 10:40:34 +00:00
chaton
c9622bafe0
[feat] Add Trainer(stochastic_weight_avg=True/False) ( #6038 )
...
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-02-17 23:10:05 +00:00
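A sketch of the convenience flag, which enables the `StochasticWeightAveraging` callback (added in #5640, further down this log) with default settings:

```python
from pytorch_lightning import Trainer

trainer = Trainer(stochastic_weight_avg=True)
```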
Nicki Skafte
68fd3086f1
Prevent flickering progress bar ( #6009 )
...
* add padding
* fix
* fix
* Update pytorch_lightning/callbacks/progress.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* updated based on suggestion
* changelog
* add test
* fix pep8
* resolve test
* fix code format
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: tchaton <thomas@grid.ai>
2021-02-17 19:01:51 +00:00
chaton
6bc4490d01
[HotFix] Resolve TPU Training ( #6027 )
...
* fix tpus
* update
* add back reduction in val_loss
* resolve some bugs with TPUs
* update changelog
* update on comments
* forgot status
* Fix train_bn arg
* resolve comments
* update on comments
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
2021-02-17 16:40:13 +00:00
Carlos Mocholí
7aae589167
Add deprecation warning to ModelCheckpoint when logging val_loss with no monitor ( #6012 )
...
* Add deprecation warning when logging val_loss with no monitor
* EOF
* Update CHANGELOG
* Clear warning cache before testing
* pep8
Co-authored-by: Jirka Borovec <jirka.borovec@seznam.cz>
2021-02-17 10:46:58 +00:00
chaton
e982800b81
Add PredictLoop ( #5752 )
...
* integrate distrib_type
* sync changes
* sync
* fixes
* add forgotten generators
* add missing logic
* update
* import
* missed imports
* import fixes
* isort
* mv f
* changelog
* format
* move helper to parallel plugin
* d
* add world size
* clean up
* duplicate
* activate ddp_sharded and tpu
* set nvidia flags
* remove unused colab var
* use_tpu <-> on_tpu attrs
* make some ddp_cpu and clusterplugin tests pass
* Ref/accelerator connector (#5742 )
* final cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* connector cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* trainer cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* accelerator cleanup + missing logic in accelerator connector
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* add missing changes to callbacks
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* reflect accelerator changes to lightning module
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* clean cluster envs
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* cleanup plugins
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* add broadcasting
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* yapf
* remove plugin connector
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* plugins
* add predict_loop
* manual optimization
* clean predictloop
* update optimizer routing
* add predict loop on new accelerator
* resolve a bug
* add rank to torchelastic
* add predict_loop
* add predict loop on new accelerator
* resolve a bug
* fix memory mixed precision
* update
* setstate on trainer for pickling in ddp spawn
* add predict_loop
* clean predictloop
* add predict loop on new accelerator
* resolve a bug
* add predict_loop
* add predict loop on new accelerator
* resolve a bug
* add predict_loop
* add predict loop on new accelerator
* resolve a bug
* add predict_loop
* add predict loop on new accelerator
* resolve a bug
* add predict_loop
* clean predictloop
* add predict loop on new accelerator
* resolve a bug
* add predict_loop
* add predict loop on new accelerator
* resolve a bug
* resolve tests
* add predict method
* add back commented accelerator code
* adapt test for sync_batch_norm to new plugin
* fix deprecated tests
* fix ddp cpu choice when no num_processes are given
* yapf format
* skip a memory test that cannot pass anymore
* remove sanitize
* rename train to run_train
* remove useless hooks
* add misconfigurationException
* remove wrong naming
* resolve some legacy
* update docstring
* fix pickle error in spawn plugin
* x
* avoid
* x
* fix cyclic import in docs build
* add support for sharded
* update typing
* add sharded and sharded_spawn to distributed types
* make unwrap model default
* refactor LightningShardedDataParallel similar to LightningDistributedDataParallel
* update sharded spawn to reflect changes
* update sharded to reflect changes
* Merge 1.1.5 changes
* fix merge
* fix merge
* yapf isort
* fix merge
* yapf isort
* fix indentation in test
* copy over reinit scheduler implementation from dev1.2
* fix apex tracking calls with dev_debugger
* reduce diff to dev1.2, clean up
* fix trainer config test when gpus>0 and num_processes >0 and ddp_cpu
* sort plugin tests legacy/new
* fix error handling for amp on cpu
* fix merge
* [Feat] Resolve manual_backward (#5837 )
* resolve manual_backward
* resolve flake8
* update
* resolve for ddp_spawn
* resolve flake8
* resolve flake8
* resolve flake8
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* fix tests/accelerator tests on cpu
* [BugFix] Resolve manual optimization (#5852 )
* resolve manual_optimization
* update
* update
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* Remove copy trainer parameters to happen earlier within the loop and add safe guard to get ref model (#5856 )
* resolve a bug
* Accelerator refactor sharded rpc (#5854 )
* rpc branch
* merge
* update handling of rpc
* make devices etc. Optional in RPC
* set devices etc. later if necessary
* remove devices from sequential
* make devices optional in rpc
* fix import
* uncomment everything
* fix cluster selection
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* resolve bug
* fix assert in rpc test
* resolve a test
* fix docs compilation
* accelerator refactor - fix for sharded parity test (#5866 )
* fix memory issue with ddp_spawn
* x
* x
* Remove DDP2 as this does not apply
* Add missing pre optimizer hook to ensure lambda closure is called
* fix apex docstring
* [accelerator][BugFix] Resolve some test for 1 gpu (#5863 )
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* update
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* revert init
* update
* resolve flake8
* update
* update
* update
* update
* update
* all_gather
* update
* make plugins work, add misconfig for RPC
* update
* update
* remove breaking test
* resolve some tests
* resolve flake8
* revert to ddp_spawn
Co-authored-by: root <root@ip-172-31-88-60.ec2.internal>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
Co-authored-by: Justus Schock <justus.schock@rwth-aachen.de>
* yapf isort
* resolve flake8
* fix apex doctests
* fix apex doctests 2
* resolve docs
* update drone
* clean env
* update
* update
* update
* update
* merge
* Fix RPC related tests, clean out old API, update for new accelerator API [skip ci] (#5881 )
* Fix RPC related tests, clean out old API, update for new accelerator API
* Move tests out of legacy folder, update paths and names
* Update test_remove_1-4.py
* Expose properties for tpu cores/gpus/num_gpus
* Add root GPU property
* Move properties to properties.py
* move tests that were previously in drone
* Fix root GPU property (#5908 )
* Move root GPU to property, remove horovod set as this is handled in horovod plugin, ensure we mock correctly to set GPU accelerator
* Add missing tests back
* fix best model path transfer when no checkpoint callback available
* Fix setup hook order [wip] (#5858 )
* Call trainer setup hook before accelerator setup
* Add test case
* add new test
* typo
* fix callback order in test
Co-authored-by: tchaton <thomas@grid.ai>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* rename ddp sequential -> rpc sequential for special test
* revert
* fix stupid merge problem
* Use property in connector for sampler (#5913 )
* merge the import conflicts
* fix spawning of processes in slurm
* [wip] Fix some bugs for TPU [skip ci] (#5878 )
* fixed for single tpu
* fixed spawn
* fixed spawn
* update
* update
* wip
* resolve bugs
* resolve bug
* update on comment
* removed decorator
* resolve comments
* set to 4
* update
* update
* need cleaning
* update
* update
* update
* resolve flake8
* resolve bugs
* exclude broadcast
* resolve bugs
* change test
* update
* update
* skip if meet fails
* properly raise trace
* update
* add catch
* wrap test
* resolve typo
* update
* typo
Co-authored-by: Lezwon Castelino <lezwon@gmail.com>
Co-authored-by: Your Name <you@example.com>
* resolve some tests
* update
* fix imports
* update
* resolve flake8
* update azure pipeline
* skip a sharded test on cpu that requires a gpu
* resolve tpus
* resolve bug
* resolve flake8
* update
* update utils
* revert permission change on files
* suggestions from carlos
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* remove unrelated formatting changes
* remove incomplete comment
* Update pytorch_lightning/accelerators/__init__.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* remove unrelated formatting change
* add types
* warn 1.7 ddp manual backward only if ddp kwarg unset
* yapf + isort
* pep8 unused imports
* fix cyclic import in docs
* Apply suggestions from code review
* typo in accelerator.py
* typo
* resolve flake8
* update code
* update
* Update pytorch_lightning/trainer/predict_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/trainer/predict_loop.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* fix merge
* fix merge
* reset legacy accelerator
* add missing rename dispatch
* rename post traning
* update code
* resolved comments
* typo
* typo
* add flow description
* resolve comments
* update on comments
* update flow
* add backticks
* resolve tpu
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: justusschock <justus.schock@posteo.de>
Co-authored-by: Justus Schock <justus.schock@rwth-aachen.de>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
Co-authored-by: SeanNaren <sean@grid.ai>
Co-authored-by: root <root@ip-172-31-88-60.ec2.internal>
Co-authored-by: Lezwon Castelino <lezwon@gmail.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-02-16 17:11:56 -05:00
Akihiro Nitta
0a2fb05aac
Document exceptions in callbacks ( #5541 )
...
* Add Raises: section to docstring
* Add Raises section to the docs
* Add raises section to the docs
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* fix
* Remove unnecessary instance check
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-02-15 10:24:36 +00:00
Justus Schock
da6dbc8d1d
PoC: Accelerator refactor ( #5743 )
...
* restoring the result from subprocess
* fix queue.get() order for results
* add missing "block_backward_sync" context manager
* add missing "block_backward_sync" context manager
* fix sync_batchnorm
* fix supported gpu-ids for tuple
* fix clip gradients and inf recursion
* accelerator selection: added cluster_environment plugin
* fix torchelastic test
* fix reduce early stopping decision for DDP
* fix tests: callbacks, conversion to lightning optimizer
* fix lightning optimizer does not pickle
* fix setting benchmark and deterministic option
* fix slurm amp test
* fix prepare_data test and determine node_rank
* fix retrieving last path when testing
* remove obsolete plugin argument
* fix test: test_trainer_config
* fix torchscript tests
* fix trainer.model access
* move properties
* fix test_transfer_batch_hook
* fix auto_select_gpus
* fix omegaconf test
* fix test that needs to simulate slurm ddp
* add horovod plugin
* fix test with named arguments
* clean up whitespace
* fix datamodules test
* remove old accelerators
* fix naming
* move old plugins
* move to plugins
* create precision subpackage
* create training_type subpackage
* fix all new import errors
* fix wrong arguments order passed to test
* fix LR finder
* Added sharded training type and amp plugin
* Move clip grad to precision plugin
* Added sharded spawn, select accelerators based on distributed_backend + enable custom fp16 plugin automatically
* Fix import issue, attempting to fix tests
* Fix initial test
* Reflect hook logic from master, should wrap model after move to device
* Optional state consolidation, since master has optimizers not wrapped
* change attribute for instance test
* reset optimizers
optimizers are not used in main process, so state would be wrong.
* legacy
* imports in accel
* legacy2
* trainer imports
* fix import errors after rebase
* move hook to new setup location
* provide unwrapping logic
* fix trainer callback system
* added ddp2 implementation
* fix imports .legacy
* move plugins
* restore legacy
* drop test.py from root
* add tpu accelerator and plugins
* fixes
* fix lightning optimizer merge
* reset bugreportmodel
* unwrapping
* step routing forward
* model access
* unwrap
* opt
* integrate distrib_type
* sync changes
* sync
* fixes
* add forgotten generators
* add missing logic
* update
* import
* missed imports
* import fixes
* isort
* mv f
* changelog
* format
* move helper to parallel plugin
* d
* add world size
* clean up
* duplicate
* activate ddp_sharded and tpu
* set nvidia flags
* remove unused colab var
* use_tpu <-> on_tpu attrs
* make some ddp_cpu and clusterplugin tests pass
* Ref/accelerator connector (#5742 )
* final cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* connector cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* trainer cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* accelerator cleanup + missing logic in accelerator connector
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* add missing changes to callbacks
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* reflect accelerator changes to lightning module
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* clean cluster envs
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* cleanup plugins
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* add broadcasting
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* yapf
* remove plugin connector
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* plugins
* manual optimization
* update optimizer routing
* add rank to torchelastic
* fix memory mixed precision
* setstate on trainer for pickling in ddp spawn
* add predict method
* add back commented accelerator code
* adapt test for sync_batch_norm to new plugin
* fix deprecated tests
* fix ddp cpu choice when no num_processes are given
* yapf format
* skip a memory test that cannot pass anymore
* fix pickle error in spawn plugin
* x
* avoid
* x
* fix cyclic import in docs build
* add support for sharded
* update typing
* add sharded and sharded_spawn to distributed types
* make unwrap model default
* refactor LightningShardedDataParallel similar to LightningDistributedDataParallel
* update sharded spawn to reflect changes
* update sharded to reflect changes
* Merge 1.1.5 changes
* fix merge
* fix merge
* yapf isort
* fix merge
* yapf isort
* fix indentation in test
* copy over reinit scheduler implementation from dev1.2
* fix apex tracking calls with dev_debugger
* reduce diff to dev1.2, clean up
* fix trainer config test when gpus>0 and num_processes >0 and ddp_cpu
* sort plugin tests legacy/new
* fix error handling for amp on cpu
* fix merge
* [Feat] Resolve manual_backward (#5837 )
* resolve manual_backward
* resolve flake8
* update
* resolve for ddp_spawn
* resolve flake8
* resolve flake8
* resolve flake8
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* fix tests/accelerator tests on cpu
* [BugFix] Resolve manual optimization (#5852 )
* resolve manual_optimization
* update
* update
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* Remove copy trainer parameters to happen earlier within the loop and add safe guard to get ref model (#5856 )
* resolve a bug
* Accelerator refactor sharded rpc (#5854 )
* rpc branch
* merge
* update handling of rpc
* make devices etc. Optional in RPC
* set devices etc. later if necessary
* remove devices from sequential
* make devices optional in rpc
* fix import
* uncomment everything
* fix cluster selection
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* resolve bug
* fix assert in rpc test
* resolve a test
* fix docs compilation
* accelerator refactor - fix for sharded parity test (#5866 )
* fix memory issue with ddp_spawn
* x
* x
* Remove DDP2 as this does not apply
* Add missing pre optimizer hook to ensure lambda closure is called
* fix apex docstring
* [accelerator][BugFix] Resolve some test for 1 gpu (#5863 )
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* update
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* revert init
* update
* resolve flake8
* update
* update
* update
* update
* update
* all_gather
* update
* make plugins work, add misconfig for RPC
* update
* update
* remove breaking test
* resolve some tests
* resolve flake8
* revert to ddp_spawn
Co-authored-by: root <root@ip-172-31-88-60.ec2.internal>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
Co-authored-by: Justus Schock <justus.schock@rwth-aachen.de>
* yapf isort
* resolve flake8
* fix apex doctests
* fix apex doctests 2
* resolve docs
* update drone
* clean env
* update
* update
* update
* update
* merge
* Fix RPC related tests, clean out old API, update for new accelerator API [skip ci] (#5881 )
* Fix RPC related tests, clean out old API, update for new accelerator API
* Move tests out of legacy folder, update paths and names
* Update test_remove_1-4.py
* Expose properties for tpu cores/gpus/num_gpus
* Add root GPU property
* Move properties to properties.py
* move tests that were previously in drone
* Fix root GPU property (#5908 )
* Move root GPU to property, remove horovod set as this is handled in horovod plugin, ensure we mock correctly to set GPU accelerator
* Add missing tests back
* fix best model path transfer when no checkpoint callback available
* Fix setup hook order [wip] (#5858 )
* Call trainer setup hook before accelerator setup
* Add test case
* add new test
* typo
* fix callback order in test
Co-authored-by: tchaton <thomas@grid.ai>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* rename ddp sequential -> rpc sequential for special test
* revert
* fix stupid merge problem
* Use property in connector for sampler (#5913 )
* merge the import conflicts
* fix spawning of processes in slurm
* [wip] Fix some bugs for TPU [skip ci] (#5878 )
* fixed for single tpu
* fixed spawn
* fixed spawn
* update
* update
* wip
* resolve bugs
* resolve bug
* update on comment
* removed decorator
* resolve comments
* set to 4
* update
* update
* need cleaning
* update
* update
* update
* resolve flake8
* resolve bugs
* exclude broadcast
* resolve bugs
* change test
* update
* update
* skip if meet fails
* properly raise trace
* update
* add catch
* wrap test
* resolve typo
* update
* typo
Co-authored-by: Lezwon Castelino <lezwon@gmail.com>
Co-authored-by: Your Name <you@example.com>
* resolve some tests
* update
* fix imports
* update
* resolve flake8
* update azure pipeline
* skip a sharded test on cpu that requires a gpu
* resolve tpus
* resolve bug
* resolve flake8
* update
* update utils
* revert permission change on files
* suggestions from carlos
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* remove unrelated formatting changes
* remove incomplete comment
* Update pytorch_lightning/accelerators/__init__.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* remove unrelated formatting change
* add types
* warn 1.7 ddp manual backward only if ddp kwarg unset
* yapf + isort
* pep8 unused imports
* fix cyclic import in docs
* Apply suggestions from code review
* typo in accelerator.py
* typo
* Apply suggestions from code review
* formatting
* update on comments
* update typo
* Update pytorch_lightning/trainer/properties.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* update
* suggestion from code review
* suggestion from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: SeanNaren <sean@grid.ai>
Co-authored-by: Jirka Borovec <jirka.borovec@seznam.cz>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
Co-authored-by: root <root@ip-172-31-88-60.ec2.internal>
Co-authored-by: Lezwon Castelino <lezwon@gmail.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-02-12 15:48:56 -05:00
Adrian Wälchli
4bdf2fe55f
remove executable bit on source files ( #5929 )
...
* 644
2021-02-12 00:06:40 +01:00
Jirka Borovec
e676ff96b1
Typing: callback base ( #5919 )
...
* typing for callback base
2021-02-11 14:33:10 +00:00
Jirka Borovec
b434c479e7
Quantisation ( #5706 )
...
* empty
* sq
* obs
* int
* ts
* helpers
* chlog
* yapf
* avg
* dupl
* Apply suggestions from code review
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* fixes
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* fixes
* note
* warn
* 45
* link
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* yapf
* flake8
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-02-11 07:04:57 -05:00
Jirka Borovec
9475c845cb
Docs/fixes ( #5914 )
...
* wip
* ..
* ...
* Apply suggestions from code review
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
2021-02-11 10:22:07 +00:00
chaton
7b00894130
[feat] Add StochasticWeightAveragingCallback ( #5640 )
...
* add swa callback
* switch back to 1.6.0
* remove optimizer_step
* move super
* update
* forgot update_parameters
* update on comments
* works for ddp
* resolve flake8
* remove set_model
* resolve flake8
* resolve cpu
* resolve flake8
* resolve flake8
* update
* update on comments
2021-02-11 00:05:59 +00:00
Carlos Mocholí
a028171f26
Fix Pruning callback and add a few features ( #5825 )
...
* Remove pruning check because it was added in 1.4.0 and that is our minimum torch version
* Fixing many bugs
* Fix misconfig test
* Fix tests
* Improve error message
* Reduce whitespace
* WIP
* TODOs
* _MODULE_CONTAINERS
* Add LTH test
* Allow resampling
* Iterative pruning
* Log pruning percentage
* Properly make pruning permanent
* Fix docstring
* Minor changes
* Test loading non-permanent model
* correct bugs
* Revert "correct bugs"
This reverts commit ffb8d47547.
* Add beta warning
* Fix docs
* 2 verbosity levels
* OCD
Co-authored-by: Your Name <you@example.com>
2021-02-10 15:03:23 +00:00
Jirka Borovec
79d42d83e7
formatting 3/n: PL modules ( #5716 )
...
* cb
* log
* prof
* tune
* flake8
2021-02-08 14:28:38 -05:00
Rohit Gupta
cb67e1d0b2
Separate epoch validation from step validation ( #5208 )
...
* Separate epoch validation from step validation
* update system
* test
* baked logic in callbacks
* unbake logic in callbacks
* fix the call for scheduler
* use property
* pep
* correct rebase
* gitignore
* ref
* add tests
* fix
* add early stopping test
* trigger
* chlog
* rev
* 1.3
* log
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/trainer/training_loop.py
* Update CHANGELOG.md
* Apply suggestions from code review
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
(cherry picked from commit e429f97b67
)
2021-02-08 20:22:39 +01:00
Rohit Gupta
2abf4693bc
Fix log_dir property ( #5537 )
...
* fix and update tests
* update with ModelCheckpoint
* chlog
* wip wandb fix
* all fixed
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-02-05 21:40:42 +01:00
tchaton
df5cbf5368
resolve conflict
...
resolve failing test
2021-02-05 21:40:39 +01:00
Adrian Wälchli
bb7d188318
Fix ModelCheckpoint race condition in file existence check ( #5155 )
...
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
2021-02-05 21:40:39 +01:00