lightning/pytorch_lightning/plugins/precision

Latest commit 8208c330eb by Carlos Mocholí, 2021-05-07 16:41:39 +00:00:
Use `torch.nn.utils.clip_grad_norm_` and add `clip_grad_by_value` support for TPU (#7025)
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
File | Last commit message | Last commit date
__init__.py | Feature/double precision (#6595) | 2021-03-24 15:47:58 +05:30
apex_amp.py | [bugfix] Apex never instantiated. (#7274) | 2021-04-30 13:16:28 -04:00
deepspeed_precision.py | Add back `clip_gradients(model)` (#7231) | 2021-04-27 11:34:02 +00:00
double.py | Fix mypy checks for double precision plugin (#7151) | 2021-04-22 11:29:38 +01:00
mixed.py | Use `torch.nn.utils.clip_grad_norm_` and add `clip_grad_by_value` support for TPU (#7025) | 2021-05-07 16:41:39 +00:00
native_amp.py | [bugfix] Add set_default_tensor_type to torch.DoubleTensor with precision=64 (#7108) | 2021-04-20 15:25:37 +00:00
precision_plugin.py | Use `torch.nn.utils.clip_grad_norm_` and add `clip_grad_by_value` support for TPU (#7025) | 2021-05-07 16:41:39 +00:00
sharded_native_amp.py | Typing for accelerators and plugins (#7022) | 2021-04-15 16:48:16 +00:00
tpu_bfloat.py | Create pytorch_lightning/utilities/types.py (#7048) | 2021-04-19 14:43:16 +02:00
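For context on the latest commit: it moves gradient clipping onto PyTorch's stock utilities. As a minimal sketch of the two clipping modes named in the title, here is what the underlying torch calls do on their own, outside Lightning's plugin wiring; the model, shapes, and threshold values are illustrative only, and `clip_grad_by_value` in the commit title presumably maps onto `torch.nn.utils.clip_grad_value_`:

import torch
from torch import nn

# Illustrative model and backward pass, just to populate .grad fields.
model = nn.Linear(4, 2)
loss = model(torch.randn(8, 4)).sum()
loss.backward()

# Clip by norm: rescales all gradients together so their combined 2-norm
# does not exceed max_norm; returns the pre-clipping total norm.
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

# Clip by value: clamps each gradient element into [-clip_value, clip_value].
torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=0.5)

In a training loop, either call would sit between loss.backward() and optimizer.step(); clipping by norm preserves the gradient's direction while clipping by value does not.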