Elia Cereda
d0596fac94
Refactor RunningStage usage in advance of implementing Trainer.validate() ( #4945 )
...
* Update code
Co-authored-by: EliaCereda
* More property updates
* Move properties. Introduce trainer._fitting
* Use trainer.fitting
* Fix reset dataloaders
* Unused code
* RunningStage.SANITY_CHECKING
* Use setters
* Fix bugs
* Fix bugs
* TrainerState.{FITTING,VALIDATING,TESTING,PREDICTING,TUNING}
* Fix bugs
* Fix bugs
* Fix tests
* Update CHANGELOG. Add deprecation warning. Fix tests
* Unused imports
* Optional trainer
* More deprecation. More refactoring
* Correct version
* Use properties
* Address comments
* flake8
* Missed renamings
* Typo
* is -> ==
Using `is` is recommended for Enums since they are singletons; however, since LightningEnum subclasses str, it's not a good idea in case a user sets the state/stage with a plain str
* Also for tests
* Typo
* Address @tchaton's comments
* PEP8
* Correct property
* Update CHANGELOG
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update pytorch_lightning/trainer/trainer.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Remove called sanity check
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-03-06 12:40:19 +00:00
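For context, a minimal sketch of the str-backed enums this refactor describes. The member names come from the bullets above ("TrainerState.{FITTING,VALIDATING,TESTING,PREDICTING,TUNING}", "RunningStage.SANITY_CHECKING"); the string values and the `LightningEnum` base shown here are assumptions modeled on PyTorch Lightning's utilities:

```python
from enum import Enum


class LightningEnum(str, Enum):
    """Enum subclassing str, so members compare equal to plain strings."""


class TrainerState(LightningEnum):
    FITTING = "fitting"
    VALIDATING = "validating"
    TESTING = "testing"
    PREDICTING = "predicting"
    TUNING = "tuning"


class RunningStage(LightningEnum):
    SANITY_CHECKING = "sanity_checking"
    TRAINING = "training"
    VALIDATING = "validating"
    TESTING = "testing"
    PREDICTING = "predicting"


# The "is -> ==" bullet: members are str instances, so equality works against
# a user-supplied string, while an identity check (`is`) would not.
assert TrainerState.FITTING == "fitting"
```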
Nicki Skafte
1b498d1f14
[Bugfix] Fixed epoch level schedulers not being called when val_check_interval < 1.0 ( #6075 )
...
* fix bug
* fix tests
* changelog
* fix pep8
* fix tests
* fix and add some tests
* add test for ReduceLROnPlateau (rlop)
* changelog
* Update CHANGELOG.md
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
2021-02-24 16:46:33 +05:30
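A sketch of the configuration this fix targets, assuming the PyTorch Lightning API of that release; the module and metric names are placeholders. With `val_check_interval < 1.0`, validation runs mid-epoch, and epoch-interval schedulers (including ReduceLROnPlateau, tested above) previously never stepped:

```python
import pytorch_lightning as pl
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau


class PlaceholderModel(pl.LightningModule):  # hypothetical module
    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {
                "scheduler": ReduceLROnPlateau(optimizer),
                "interval": "epoch",    # stepped once per epoch
                "monitor": "val_loss",  # metric logged in validation
            },
        }


# Validate four times per training epoch; before this fix, the epoch-level
# scheduler above was never called under such a setting.
trainer = pl.Trainer(max_epochs=10, val_check_interval=0.25)
```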
Jirka Borovec
1d9c553b86
prune deprecated Trainer arg `enable_pl_optimizer` ( #6163 )
...
* prune enable_pl_optimizer
* prune automatic_optimization
2021-02-24 10:01:24 +00:00
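The two pruned Trainer arguments, sketched with their replacements; the migration shown follows the 1.3-era API as an assumption:

```python
import pytorch_lightning as pl

# Removed:
#   pl.Trainer(enable_pl_optimizer=True)      # optimizers are now always
#                                             # wrapped in LightningOptimizer
#   pl.Trainer(automatic_optimization=False)  # moved onto the module:


class ManualModel(pl.LightningModule):
    @property
    def automatic_optimization(self) -> bool:
        return False
```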
Jirka Borovec
aba212341a
formatting 4/n: Trainer ( #5720 )
...
* yapf trainer
* Apply suggestions from code review
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* .
* fix
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
2021-02-03 09:25:42 +00:00
chaton
0435e23a64
deprecate enable_pl_optimizer as it is not restored properly ( #5244 )
...
* update
* clean test
* still in progress
* update test
* update
* update
* resolve flake
* add test for zero_grad
* update
* works without accumulated_grad
* update
* update
* resolve amp
* revert back to True
* update
* clean tests
* cleaned out
* typo
* update test
* repair git bug
* remove print
* update
* Fix formatting/optimizer imports
* Refactor the test for cleanliness
* Add vanilla model to the test, better var names
* Fixed var names, let's clean up these mock tests
* repair test
* update test
* resolve flake8
* add manual_optimization
* update tests
* resolve flake8
* add random accumulate_grad_batches
* improve test
* Update tests/trainer/optimization/test_parity_automatic_optimization.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update tests/trainer/optimization/test_parity_automatic_optimization.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* update
* clean tests
* correct bug
* Apply suggestions from code review
* format
* address comments
* update on comments
* wip
* typo
* deprecate enable_pl_optimizer
* resolve latest bugs
* update
* resolve merge
* add comment
* Update pytorch_lightning/core/lightning.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update tests/deprecated_api/test_remove_1-3.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/trainer/connectors/optimizer_connector.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/trainer/trainer.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/trainer/trainer.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update tests/trainer/optimization/test_parity_automatic_optimization.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* update on comments
* update restore
* add a property
* remove setstate as not needed anymore
* update test
* provide optimizer to on_before_zero_grad
* update on comments
* update on comments
* Update pytorch_lightning/trainer/trainer.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update tests/trainer/optimization/test_parity_automatic_optimization.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update tests/trainer/optimization/test_parity_automatic_optimization.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update tests/trainer/optimization/test_parity_automatic_optimization.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* modify import
* update changelog
* resolve flake8
* update
* update
* clean doc
Co-authored-by: SeanNaren <sean@grid.ai>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-62-109.ec2.internal>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jirka Borovec <jirka.borovec@seznam.cz>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
(cherry picked from commit f2e99d617f)
2021-01-26 14:29:46 +01:00
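One concrete API change in the bullets above is "provide optimizer to on_before_zero_grad". A sketch of the resulting hook signature; the module name and the inspection body are illustrative:

```python
import pytorch_lightning as pl


class PlaceholderModel(pl.LightningModule):  # hypothetical module
    def on_before_zero_grad(self, optimizer):
        # Called right before optimizer.zero_grad(); receiving the optimizer
        # lets the hook inspect state, e.g. the current learning rates.
        lrs = [group["lr"] for group in optimizer.param_groups]
        print(f"LRs before zero_grad: {lrs}")
```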
chaton
c2e6e68c7e
optimizer clean up ( #4658 )
...
* add LightningOptimizer
* typo
* add mock closure
* typo
* remove logic in optimizer_step
* update
* update
* update
* deactivate LightningOptimizer for Horovod
* resolve flake
* typo
* check optimizer name
* change name
* added backward to LightningOptimizer
* remove use_lightning_optimizer
* move update
* simplify init
* resolve comments
* resolve bug
* update
* update
* resolve bugs
* resolve flake8
* set state
* make manual_optimizer_step work
* add doc
* add enable_pl_optimizer
* make optimizer_step
* add make_optimizer_step
* add examples
* resolve test
* add test_optimizer_return_options_enable_pl_optimizer
* add enable_pl_optimizer=True
* update
* update tests
* resolve bugs
* update
* set Trainer to False
* update
* resolve bugs
* update
* remove from doc
* resolve bug
* typo
* update
* set to True
* simplification
* typo
* resolve horovod
* unwrap horovod
* remove Optimizer
* resolve horovod
* move logic to amp_backend
* doesn't seem to be picklable
* update
* add again
* resolve some bugs
* cleanup
* resolve bug with AMP
* change __repr__
* round at -12
* update
* update
* update
* remove from horovod
* typo
* add convert_to_lightning_optimizers in each accelerators
* typo
* forgot
* forgot a convert_to_lightning_optimizers
* update
* update
* update
* increase coverage
* update
* resolve flake8
* update
* remove useless code
* resolve comments + add support for LightningOptimizer base class
* resolve flake
* check optimizer get wrapped back
* resolve DDPSharded
* reduce code
* lightningoptimizer
* Update pytorch_lightning/core/optimizer.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Update pytorch_lightning/core/lightning.py
* remove reference to step function
* Apply suggestions from code review
* update on comments
* resolve
* Update CHANGELOG.md
* add back training_step in apex and native_amp
* rename optimizer_step
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
2020-12-01 00:09:46 +00:00
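The LightningOptimizer wrapper added here is what manual optimization later builds on. A sketch in the later, stabilized API; the exact signatures shifted across releases, so treat the details as assumptions:

```python
import pytorch_lightning as pl
import torch
import torch.nn.functional as F


class ManualModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    @property
    def automatic_optimization(self) -> bool:
        return False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()  # LightningOptimizer wrapping the raw optimizer
        x, y = batch
        loss = F.cross_entropy(self.layer(x), y)
        self.manual_backward(loss)  # handles precision/accelerator details
        opt.step()
        opt.zero_grad()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```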
Carlos Mocholí
2549ca40e6
Clean up optimizer code ( #3587 )
...
* Update optimizer code
* Update CHANGELOG
* Fix tuple of one list case
* Update docs
* Fix pep issue
* Minor typo [skip-ci]
* Use minimal match
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Apply suggestions from code review
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
2020-10-21 21:12:48 +02:00
Carlos Mocholí
e0f9799dbf
Add strict option to lr_scheduler dict ( #3586 )
...
* Add strict option to lr_scheduler dict
* Update docs
* Unnecessary "else" after "raise"
* Update CHANGELOG
* Fix rebase
2020-10-21 14:14:37 +02:00
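What the new `strict` key controls: with `strict=True` (the default) Lightning raises if the monitored metric is missing at scheduler time, while `strict=False` downgrades that to a warning and skips the step. A sketch, with a placeholder module:

```python
import pytorch_lightning as pl
import torch


class PlaceholderModel(pl.LightningModule):  # hypothetical module
    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {
                "scheduler": scheduler,
                "monitor": "val_loss",
                "strict": True,  # raise if "val_loss" was never logged
            },
        }
```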
William Falcon
d9bc95f83e
ref: bug fix with logging val epoch end + monitor ( #3812 )
...
* ref: fix metric err (×3)
* ref: merge (×4)
* ref: decoupled ddp2 (×5)
* ref: clean up ddp before final fix (×11)
2020-10-03 12:33:29 -04:00
Rohit Gupta
783750547d
disable optimizers setup during testing ( #3059 )
...
* disable configure_optimizers during testing
* minor changes
* hvd and ddp
* fix precision during testing
* fix ddp
* fix amp
* fix cpu
* update dp
* simplify optimizers
* add test
* codefactor
* ref optimizer setup
* changelog
* suggestions
* isort
* rebased with master
2020-09-29 01:09:04 +02:00
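The behavior this PR establishes, sketched with a hypothetical module: a test-only run no longer sets up optimizers, so `configure_optimizers` is not required to succeed outside of fitting:

```python
import pytorch_lightning as pl

model = SomeLightningModule()  # hypothetical module under test
trainer = pl.Trainer()
# A test-only run: optimizer setup is skipped, so configure_optimizers
# is never called here.
trainer.test(model)
```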
William Falcon
ff0064f956
ref: group connectors ( #3472 )
...
* ref: accelerator connector methods 3/n
* ref: accelerator connector methods 3/n
2020-09-11 23:33:09 -04:00