Files:
README.md
__init__.py
cpu_template.py
gpu_template.py
lightning_module_template.py
Basic Examples
Use these examples to test how Lightning works.
Test on CPU
python cpu_template.py
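Conceptually, each template wires a LightningModule up to a Trainer and calls fit. The following is a minimal sketch of that pattern, not the actual cpu_template.py: the TinyModel class and its random dataset are hypothetical stand-ins, and it assumes the pytorch_lightning Trainer/LightningModule API. Leaving out the gpus argument keeps training on the CPU.

```python
# Minimal CPU training sketch (hypothetical TinyModel, random data);
# not the real cpu_template.py, just the Trainer/LightningModule pattern.
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        # random stand-in data instead of the template's real dataset
        x = torch.randn(64, 32)
        y = torch.randint(0, 2, (64,))
        return DataLoader(TensorDataset(x, y), batch_size=8)


if __name__ == '__main__':
    model = TinyModel()
    trainer = pl.Trainer(max_epochs=1)  # no `gpus` argument -> CPU training
    trainer.fit(model)
```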
Train on a single GPU
python gpu_template.py --gpus 1
DataParallel (dp)
Train on multiple GPUs using DataParallel.
python gpu_template.py --gpus 2 --distributed_backend dp
DistributedDataParallel (ddp)
Train on multiple GPUs using DistributedDataParallel.
python gpu_template.py --gpus 2 --distributed_backend ddp
DistributedDataParallel+DP (ddp2)
Train on multiple GPUs using DistributedDataParallel + DataParallel. On a single node, all of that node's GPUs work on one copy of the model (DP-style); gradient information is then shared across nodes (DDP-style).
python gpu_template.py --gpus 2 --distributed_backend ddp2
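The GPU commands above differ only in the --gpus and --distributed_backend flags, which end up as Trainer arguments. Below is a hedged sketch of that wiring; the real gpu_template.py parses more options and its argument handling may differ.

```python
# Sketch: how the --gpus / --distributed_backend flags might reach the Trainer.
# Hypothetical, stripped-down parsing; not the actual gpu_template.py.
from argparse import ArgumentParser

import pytorch_lightning as pl

parser = ArgumentParser()
parser.add_argument('--gpus', type=int, default=0)
parser.add_argument('--distributed_backend', type=str, default=None)  # dp, ddp, or ddp2
args = parser.parse_args()

trainer = pl.Trainer(
    gpus=args.gpus,
    distributed_backend=args.distributed_backend,
)
# trainer.fit(model)  # `model` would be the template's LightningModule
```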