lightning/pytorch_lightning/core
Justus Schock 0ec4107697
Optimizer closure (#4190)
* closure for all optimizers
* rename hook and take care of alternating backwards
* add comment
* training_loop_fix
* closure whenever possible
* training_loop
* simple tests that count backward calls
* fix test to work with closure
* remove debugging statement
* better place
* check grads after backward
* start fixing manual optimization
* skip step when result returned by closure was None
* fix gradient clipping test to work with closure
* attribute dict result only for automatic optimization
* adjust backward calls in accelerator
* adjust where to call gradient clipping
* adjust backward calls in tests
* Apply suggestions from code review
* pass kwargs to xla optimizer

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-10-21 19:34:29 +01:00
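
The PR above moves the forward/backward pass of the training loop into a closure handed to `optimizer.step()`, so optimizers that need to re-evaluate the loss (e.g. LBFGS) work without special-casing. A minimal sketch of the pattern in plain PyTorch; the `model`, `inputs`, and `targets` names below are illustrative placeholders, not code from this repository:

```python
import torch

# Illustrative setup; any model/optimizer pair works the same way.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)
inputs, targets = torch.randn(8, 4), torch.randn(8, 1)

def closure():
    # The closure re-runs forward + backward and returns the loss,
    # so optimizers that evaluate it multiple times per step can do so.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    return loss

# step() invokes the closure internally; this is the shape of the
# pattern PR #4190 adopts for all optimizers in the training loop.
optimizer.step(closure)
```

Per the commit notes, Lightning additionally skips the optimizer step when the result returned by the closure is None.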
File             Last commit                                                    Date
__init__.py      CI: add flake8 (#4239)                                         2020-10-19 21:20:17 +01:00
datamodule.py    Add trainer attribute to datamodule (#3749)                    2020-10-01 00:41:19 +05:30
decorators.py    added copyright notices (#3062)                                2020-08-19 22:03:22 -04:00
grads.py         added copyright notices (#3062)                                2020-08-19 22:03:22 -04:00
hooks.py         ref: decouple apex second attempt part 4/n (#4056)             2020-10-10 12:19:22 -04:00
lightning.py     Optimizer closure (#4190)                                      2020-10-21 19:34:29 +01:00
memory.py        Refactor GPUStatsMonitor to improve training speed (#3257)     2020-09-04 06:02:16 -04:00
saving.py        Use `Optional` for arguments set to `None` by default (#4164)  2020-10-15 23:02:50 +02:00
step_result.py   Use `Optional` for arguments set to `None` by default (#4164)  2020-10-15 23:02:50 +02:00