lightning/pytorch_lightning/callbacks
Latest commit 063f5ba73e by thomas chaton (2021-07-21 10:46:25 +00:00)
[bugfix] Re-compute accumulated_grad_batches (#8493)

* resolve resolution
* update changelog
* typo
* optimize test
* update on comments
* resolve comments
* update
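The headline fix lands in gradient_accumulation_scheduler.py below. As a minimal sketch of how that callback is attached (standard 1.x API; the schedule values here are illustrative, not taken from the PR):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import GradientAccumulationScheduler

# `scheduling` maps a zero-indexed epoch to an accumulation factor:
# accumulate gradients over 4 batches until epoch 8, then step every batch.
accumulator = GradientAccumulationScheduler(scheduling={0: 4, 8: 1})
trainer = Trainer(callbacks=[accumulator])
```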
__init__.py Add XLAStatsMonitor Callback (#8235) 2021-07-05 17:09:46 +05:30
base.py Add the `on_before_optimizer_step` hook (#8048) 2021-07-09 13:30:52 +02:00
early_stopping.py `every_n_val_epochs` -> `every_n_epochs` (#8383) 2021-07-13 01:20:20 +02:00
finetuning.py Fix module dict in base finetuning (#8170) 2021-06-28 10:55:32 +00:00
gpu_stats_monitor.py Fix: handle logical CUDA device IDs for GPUStatsMonitor if `CUDA_VISIBLE_DEVICES` set (#8260) 2021-07-19 11:42:43 +00:00
gradient_accumulation_scheduler.py [bugfix] Re-compute accumulated_grad_batches (#8493) 2021-07-21 10:46:25 +00:00
lambda_function.py Add the `on_before_optimizer_step` hook (#8048) 2021-07-09 13:30:52 +02:00
lr_monitor.py Remove unnecessary generator (#8470) 2021-07-19 14:02:07 +00:00
model_checkpoint.py Remove `check_checkpoint_callback` (#7724) 2021-07-19 11:29:00 +00:00
prediction_writer.py Loop Refactor 5/N - Prediction Loop (#7700) 2021-06-23 10:17:04 +01:00
progress.py rename old `Trainer.train_loop` -> `Trainer.fit_loop` (#8025) 2021-06-22 11:49:32 +02:00
pruning.py [bugfix] Prevent deepcopy of dataloaders / Trainer in SWA Callback (#8472) 2021-07-20 18:31:49 +00:00
quantization.py Set minimum PyTorch version to 1.6 (#8288) 2021-07-13 17:12:49 +00:00
stochastic_weight_avg.py [bugfix] Prevent deepcopy of dataloaders / Trainer in SWA Callback (#8472) 2021-07-20 18:31:49 +00:00
timer.py fix info message when max training time reached (#7780) 2021-05-31 14:50:16 +02:00
xla_stats_monitor.py Add XLAStatsMonitor Callback (#8235) 2021-07-05 17:09:46 +05:30
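Two rows above (base.py, lambda_function.py) reference the `on_before_optimizer_step` hook from #8048. A hedged sketch of a custom callback using it; `GradNormLogger` is a hypothetical name, and the four-argument signature follows the 1.4-era Callback API:

```python
import torch
from pytorch_lightning import Callback


class GradNormLogger(Callback):
    """Hypothetical callback: logs the total gradient norm right before
    each optimizer step, via the hook added in #8048."""

    def on_before_optimizer_step(self, trainer, pl_module, optimizer, opt_idx):
        grads = [p.grad.detach().norm()
                 for p in pl_module.parameters() if p.grad is not None]
        if grads:
            pl_module.log("grad_norm", torch.stack(grads).norm())
```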
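The early_stopping.py row records the #8383 rename of `every_n_val_epochs` to `every_n_epochs`. As a sketch, assuming the renamed argument applies to ModelCheckpoint as in the 1.4 release:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# old (deprecated by #8383): ModelCheckpoint(every_n_val_epochs=2)
# new: save a checkpoint every 2 epochs, keeping all of them
checkpoint = ModelCheckpoint(every_n_epochs=2, save_top_k=-1)
trainer = Trainer(callbacks=[checkpoint])
```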