Commit 0ec4107697

* closure for all optimizers
* rename hook and take care of alternating backward passes
* add comment
* training_loop fix
* use closure whenever possible
* training_loop
* simple tests that count backward calls
* fix test to work with closure
* remove debugging statement
* better place
* check grads after backward
* start fixing manual optimization
* skip step when the result returned by the closure is None
* fix gradient clipping test to work with closure
* attribute dict result only for automatic optimization
* adjust backward calls in accelerator
* adjust where gradient clipping is called
* adjust backward calls in tests
* Apply suggestions from code review
* pass kwargs to xla optimizer

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
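The change described above moves automatic optimization to the closure form of `optimizer.step()`: the forward pass and backward pass run inside a callable that the optimizer may re-evaluate, which is the form optimizers such as `torch.optim.LBFGS` require, and the step is skipped when the closure returns None. Below is a minimal sketch of that pattern in plain PyTorch; the model, loss function, and data are hypothetical stand-ins, not Lightning internals.

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()
x, y = torch.randn(8, 4), torch.randn(8, 1)

def closure():
    # Forward and backward in one callable so the optimizer can
    # re-evaluate the loss if it needs to (e.g. torch.optim.LBFGS
    # calls the closure several times per step).
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    return loss

# All torch.optim optimizers accept an optional closure and return
# its result; passing one lets a single code path serve every optimizer.
loss = optimizer.step(closure)
```

The "skip step when the result returned by the closure is None" bullet guards the case where the user's `training_step` returns nothing: when the closure produces no loss, `optimizer.step()` is not applied, so no parameter update is made for that batch.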
__init__.py
accelerator.py
accelerator_connector.py
cpu_accelerator.py
ddp2_accelerator.py
ddp_accelerator.py
ddp_cpu_slurm_accelerator.py
ddp_cpu_spawn_accelerator.py
ddp_cpu_torchelastic_accelerator.py
ddp_slurm_accelerator.py
ddp_spawn_accelerator.py
ddp_torchelastic_accelerator.py
dp_accelerator.py
gpu_accelerator.py
horovod_accelerator.py
tpu_accelerator.py