# Basic Examples

Use these examples to test how Lightning works.

## Test on CPU

```bash
python cpu_template.py
```
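
All of the templates share the same skeleton: a `LightningModule` that defines the model, the training step, and the optimizer, handed to a `Trainer`. Below is a minimal, hypothetical sketch of that structure, not the templates' actual code (the real module lives in `lightning_module_template.py`; the class name and the random stand-in data are illustrative only, and the API assumed is the 0.7-era one these examples were written against):

```python
# Hypothetical minimal sketch of the template structure (pytorch-lightning ~0.7.x).
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TemplateModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)

    def train_dataloader(self):
        # random stand-in data so the sketch is self-contained
        x = torch.randn(64, 32)
        y = torch.randint(0, 2, (64,))
        return DataLoader(TensorDataset(x, y), batch_size=8)


if __name__ == '__main__':
    # default Trainer runs on CPU, as cpu_template.py does
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(TemplateModel())
```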

## Train on a single GPU

```bash
python gpu_template.py --gpus 1
```

## DataParallel (dp)

Train on multiple GPUs using DataParallel.

```bash
python gpu_template.py --gpus 2 --distributed_backend dp
```

## DistributedDataParallel (ddp)

Train on multiple GPUs using DistributedDataParallel.

```bash
python gpu_template.py --gpus 2 --distributed_backend ddp
```

## DistributedDataParallel + DP (ddp2)

Train on multiple GPUs using DistributedDataParallel combined with DataParallel: within each node, all GPUs work on a single model copy (like dp), and gradient information is then shared across nodes (like ddp).

```bash
python gpu_template.py --gpus 2 --distributed_backend ddp2
```
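
If you would rather configure the accelerator in code than on the command line, the CLI flags above correspond directly to `Trainer` arguments. A hedged sketch, assuming the 0.7-era API and reusing the hypothetical `TemplateModel` from the sketch further up:

```python
from pytorch_lightning import Trainer

# The CLI flags map onto Trainer arguments (pytorch-lightning ~0.7.x API).
trainer = Trainer(gpus=2, distributed_backend='dp')      # DataParallel
# trainer = Trainer(gpus=2, distributed_backend='ddp')   # DistributedDataParallel
# trainer = Trainer(gpus=2, distributed_backend='ddp2')  # dp within a node, ddp across nodes
trainer.fit(TemplateModel())  # TemplateModel as defined in the sketch above
```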