# Basic Examples
Use these examples to test how Lightning works. All of them follow the same core pattern, sketched below.
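
The templates define a `LightningModule` (the examples' version lives in `lightning_module_template.py`) and hand it to a `Trainer`. Here is a minimal sketch of that pattern, assuming the 0.x-era Lightning API these examples target; `MinimalModel` and its random dataset are illustrative, not the templates' actual code:

```python
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class MinimalModel(pl.LightningModule):
    """Illustrative stand-in for the template's LightningModule."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        # Lightning calls this for every training batch.
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        # Random data, just to make the sketch self-contained.
        data = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        return DataLoader(data, batch_size=8)


if __name__ == '__main__':
    trainer = pl.Trainer(max_epochs=1)  # runs on CPU by default
    trainer.fit(MinimalModel())
```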
## Test on CPU
```bash
python cpu_template.py
```
## Train on a single GPU
```bash
python gpu_template.py --gpus 1
```
## DataParallel (dp)
Train on multiple GPUs using DataParallel.
```bash
python gpu_template.py --gpus 2 --distributed_backend dp
```
## DistributedDataParallel (ddp)
Train on multiple GPUs using DistributedDataParallel.
```bash
python gpu_template.py --gpus 2 --distributed_backend ddp
```
## DistributedDataParallel + DP (ddp2)
Train on multiple GPUs using DistributedDataParallel + DataParallel: on each node, all of that node's GPUs work on a single copy of the model (as in DP), and gradient information is then shared across nodes (as in DDP).
```bash
python gpu_template.py --gpus 2 --distributed_backend ddp2
```
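
The same backend choice can also be made directly in code rather than via CLI flags. A hedged sketch, assuming the 0.x-era `Trainer` signature with `gpus` and `distributed_backend` arguments (matching the flags used above):

```python
from pytorch_lightning import Trainer

# Equivalent to the CLI flags above: 'dp', 'ddp', or 'ddp2'
# selects the multi-GPU strategy.
trainer = Trainer(gpus=2, distributed_backend='ddp')
trainer.fit(model)  # `model` is any LightningModule
```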