lightning/pytorch_lightning/trainer

Latest commit: 58b47dabce by Kaushik B, "Add `log_device_info` to Trainer (#8079)", 2021-06-23 09:34:56 +02:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| connectors/ | Add `log_device_info` to Trainer (#8079) | 2021-06-23 09:34:56 +02:00 |
| __init__.py | | |
| callback_hook.py | Update fit with val hook test (#8060) | 2021-06-21 17:27:37 +00:00 |
| configuration_validator.py | [feat] Allow overriding optimizer_zero_grad and/or optimizer_step when using accumulate_grad_batches (#7980) | 2021-06-17 12:50:37 +02:00 |
| data_loading.py | Loop Refactor 1/N - Training Loop (#7871) | 2021-06-15 12:55:06 +00:00 |
| deprecated_api.py | rename old `Trainer.train_loop` -> `Trainer.fit_loop` (#8025) | 2021-06-22 11:49:32 +02:00 |
| logging.py | Actually show deprecation warnings and their line level [2/2] (#8002) | 2021-06-21 18:51:53 +02:00 |
| model_hooks.py | Actually show deprecation warnings and their line level [2/2] (#8002) | 2021-06-21 18:51:53 +02:00 |
| optimizers.py | Update LR schedulers only when their corresponding Optimizer is being… (#4868) | 2021-05-04 09:37:40 +00:00 |
| predict_loop.py | Improve `LightningDataModule` hook test and fix `dataloader_idx` argument (#7941) | 2021-06-14 12:42:13 +00:00 |
| progress.py | [2/N] Define dataclasses for progress tracking (#7574) | 2021-05-22 03:09:08 +02:00 |
| properties.py | Add `log_device_info` to Trainer (#8079) | 2021-06-23 09:34:56 +02:00 |
| states.py | `TrainerState` refactor [5/5] (#7173) | 2021-05-04 12:50:56 +02:00 |
| supporters.py | Bugfix/Multiple dataloaders (#7433) | 2021-05-11 16:33:29 +02:00 |
| trainer.py | Add `log_device_info` to Trainer (#8079) | 2021-06-23 09:34:56 +02:00 |
| training_tricks.py | Use `torch.nn.utils.clip_grad_norm_` and add `clip_grad_by_value` support for TPU (#7025) | 2021-05-07 16:41:39 +00:00 |