Directory: lightning/pytorch_lightning/strategies
Latest commit dfda970572 by ananthsub: Update TPU Spawn to use root_device instead of LightningModule's device (#11750), 2022-02-06 06:26:38 +00:00

Each entry below gives the file name, the message of the last commit that touched it, and that commit's date:
__init__.py Add `Trainer(strategy="bagua")` (#11146) 2022-02-04 17:02:09 +00:00
bagua.py Add `Trainer(strategy="bagua")` (#11146) 2022-02-04 17:02:09 +00:00
ddp.py Remove legacy dead code in DDP script launch (#11678) 2022-02-05 11:40:16 +05:30
ddp2.py Introduce strategies directory for Training Strategies (#11226) 2021-12-22 20:23:30 +00:00
ddp_spawn.py Lazy initialize Strategy.parallel_devices (#11572) 2022-02-03 04:25:16 +00:00
deepspeed.py Rename `Strategy.lr_schedulers` to `Strategy.lr_scheduler_configs` (#11549) 2022-02-02 22:10:01 +00:00
dp.py Remove `Strategy.on_gpu` (#11537) 2022-01-19 21:27:12 +00:00
fully_sharded.py Remove `Strategy.on_gpu` (#11537) 2022-01-19 21:27:12 +00:00
horovod.py Allow Horovod `teardown()` to complete gracefully if exception thrown in callback setup (#11752) 2022-02-05 11:13:21 -08:00
ipu.py Teardown all internal components on exception (#11620) 2022-02-02 21:10:19 +00:00
parallel.py Use the strategy's `root_device` instead of the LightningModule's device property (#11734) 2022-02-05 04:33:25 +01:00
sharded.py Move the `lightning_optimizers` ownership to the `Strategy` (#11444) 2022-01-18 12:58:56 +01:00
sharded_spawn.py Move the `lightning_optimizers` ownership to the `Strategy` (#11444) 2022-01-18 12:58:56 +01:00
single_device.py Remove `Strategy.on_tpu` property (#11536) 2022-01-20 08:25:26 +01:00
single_tpu.py Add typing to accelerators/gpu.py (#11333) 2022-01-12 19:44:51 +00:00
strategy.py Rename `Strategy.lr_schedulers` to `Strategy.lr_scheduler_configs` (#11549) 2022-02-02 22:10:01 +00:00
strategy_registry.py Introduce StrategyRegistry (#11233) 2022-01-05 17:14:18 +05:30
tpu_spawn.py Update TPU Spawn to use root_device instead of LightningModule's device (#11750) 2022-02-06 06:26:38 +00:00
utils.py Introduce strategies directory for Training Strategies (#11226) 2021-12-22 20:23:30 +00:00
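The `Trainer(strategy="bagua")` entries above (#11146) refer to the name-based strategy selection that these modules implement. Below is a minimal sketch of picking a strategy by its registered name; the toy `LitModel`, the random dataset, and the `accelerator`/`devices`/`max_epochs` arguments are illustrative assumptions, not taken from the listing:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    """Toy module for illustration; any LightningModule works."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


data = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))),
    batch_size=16,
)

# Strategies are chosen by registered name: "ddp", "ddp_spawn",
# "deepspeed", "bagua", ... map onto the modules listed above.
trainer = pl.Trainer(strategy="ddp", accelerator="gpu", devices=2, max_epochs=1)
trainer.fit(LitModel(), data)
```

strategy_registry.py (#11233) provides the registry backing this lookup. Assuming it exposes an `available_strategies()` accessor (an interface detail not shown in the listing), the registered names can be inspected directly:

```python
from pytorch_lightning.strategies import StrategyRegistry

# Assumed accessor: list every name accepted by Trainer(strategy=...).
print(StrategyRegistry.available_strategies())
```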