Carlos Mocholí
8ba6304c73
Increment the total batch idx before the accumulation early exit (#7692)
* Increment the total batch idx before the accumulation early exit
* Update CHANGELOG
2021-05-25 10:23:40 +02:00
Akihiro Nitta
710b144b9b
Restore `trainer.current_epoch` after tuning (#7434)
* Add a test
* Save and restore current_epoch
* Update CHANGELOG
* alphabetical order
2021-05-08 07:15:52 +02:00
Carlos Mocholí
5af086ab9f
Attach data refactor and tuner bugs [4/n] (#7258)
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
2021-04-30 13:54:58 +00:00
Carlos Mocholí
a5ac3f8a16
Code cleaning in preparation for #7258 [3/n] (#7262)
2021-04-29 14:40:51 +02:00
ananthsub
947d1cb757
[1/2] Add support for early stopping during training epoch end (#6944)
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Co-authored-by: jirka <jirka.borovec@seznam.cz>
2021-04-28 15:18:56 +02:00
Adrian Wälchli
3b36d81c03
Fixed `num_sanity_val_steps` affecting reproducibility of training data shuffling (#7014)
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: thomas chaton <thomas@grid.ai>
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
2021-04-27 09:51:39 +00:00
Akihiro Nitta
92af363270
Fix `lr_finder` suggesting too high learning rates (#7076)
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-04-23 10:59:40 +00:00