| File | Latest commit | Date |
| --- | --- | --- |
| __init__.py | Allow user to select individual TPU core to train on (#1729) | 2020-05-17 16:30:54 -04:00 |
| auto_mix_precision.py | support for native amp (#1561) | 2020-04-23 14:47:08 -04:00 |
| callback_config.py | Fix typo (#1750) | 2020-05-07 09:25:54 -04:00 |
| callback_hook.py | remove extra kwargs from Trainer init (#1820) | 2020-05-17 09:14:54 -04:00 |
| data_loading.py | add warning for shuffling in test/val (#1865) | 2020-05-18 09:53:02 -04:00 |
| deprecated_api.py | Allow user to select individual TPU core to train on (#1729) | 2020-05-17 16:30:54 -04:00 |
| distrib_data_parallel.py | remove obsolete self._device in Trainer (#1849) | 2020-05-17 08:20:51 -04:00 |
| distrib_parts.py | Allow user to select individual TPU core to train on (#1729) | 2020-05-17 16:30:54 -04:00 |
| evaluation_loop.py | Allow user to select individual TPU core to train on (#1729) | 2020-05-17 16:30:54 -04:00 |
| ignored_warnings.py | Dim 0 warning (#256) | 2019-09-26 13:20:54 -04:00 |
| logging.py | [WIP] Reduction when batch size < num gpus (#1609) | 2020-05-02 11:01:44 -04:00 |
| lr_finder.py | dummy logger (#1836) | 2020-05-14 10:34:11 -04:00 |
| model_hooks.py | Fix typo (#1750) | 2020-05-07 09:25:54 -04:00 |
| optimizers.py | dataloaders with fast_dev_run (#1787) | 2020-05-11 23:32:44 -04:00 |
| seed.py | Option to provide seed to random generators to ensure reproducibility (#1572) | 2020-05-12 07:53:20 -04:00 |
| supporters.py | Fixed configure optimizer from dict without "scheduler" key (#1443) | 2020-04-10 11:43:06 -04:00 |
| trainer.py | Allow user to select individual TPU core to train on (#1729) | 2020-05-17 16:30:54 -04:00 |
| training_io.py | Fix `save_weights_only` flag in ModelCheckpoint (#1780) | 2020-05-17 09:24:17 -04:00 |
| training_loop.py | Allow user to select individual TPU core to train on (#1729) | 2020-05-17 16:30:54 -04:00 |
| training_tricks.py | dummy logger (#1836) | 2020-05-14 10:34:11 -04:00 |
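As a rough illustration of two of the features referenced in the commit messages above (the reproducibility seed from #1572 and per-core TPU selection from #1729), the following is a minimal sketch of how they are typically invoked; the specific argument values (42, core index 1, epoch count) are placeholders chosen for the example, not taken from the commits themselves.

```python
import pytorch_lightning as pl

# Seed Python, NumPy, and PyTorch random generators so runs are reproducible (#1572).
pl.seed_everything(42)

# Passing a one-element list selects a specific TPU core to train on (#1729);
# a plain integer such as tpu_cores=8 would instead request that many cores.
trainer = pl.Trainer(tpu_cores=[1], max_epochs=3)
```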