# Basic Examples

Use these examples to test how Lightning works.
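
All of the templates follow the same Lightning pattern: define a `LightningModule`, then hand it to a `Trainer`. The sketch below is a rough orientation only, not the actual template code; the `LitModel` class and the random data are made up here, and it assumes a reasonably recent `pytorch_lightning` release.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    """Toy module; the real templates define their own models."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    # Random stand-in data, just to make the sketch runnable.
    data = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
    trainer = pl.Trainer(max_epochs=1)  # runs on CPU by default
    trainer.fit(LitModel(), DataLoader(data, batch_size=8))
```
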
#### Test on CPU
```bash
python cpu_template.py
```
---
#### Train on a single GPU
```bash
python gpu_template.py --gpus 1
```
---
#### DataParallel (dp)
Train on multiple GPUs using DataParallel, which splits each batch across the GPUs within a single process.
```bash
python gpu_template.py --gpus 2 --distributed_backend dp
```
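
The CLI flags above map onto `Trainer` arguments of the same names, so the equivalent in-code configuration looks roughly like this (a sketch; it assumes a Lightning version where `distributed_backend` is still the accepted keyword):

```python
import pytorch_lightning as pl

# Equivalent of `--gpus 2 --distributed_backend dp`: a single process in
# which DataParallel splits each batch across the two GPUs.
trainer = pl.Trainer(gpus=2, distributed_backend="dp")
```
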
---
#### DistributedDataParallel (ddp)
Train on multiple GPUs using DistributedDataParallel, which runs one process per GPU and synchronizes gradients across them.
```bash
python gpu_template.py --gpus 2 --distributed_backend ddp
```
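
Because ddp launches one training process per GPU and each worker re-executes the script on startup, it is worth guarding the entry point. A hedged sketch of that pattern, assuming the same `Trainer` arguments as above:

```python
import pytorch_lightning as pl


def main():
    # Equivalent of `--gpus 2 --distributed_backend ddp`: one process per
    # GPU; DistributedDataParallel averages gradients across processes.
    trainer = pl.Trainer(gpus=2, distributed_backend="ddp")
    # trainer.fit(...) would go here.


# Each ddp worker re-runs this module, so guard the entry point.
if __name__ == "__main__":
    main()
```
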
---
#### DistributedDataParallel+DP (ddp2)
Train on multiple GPUs using DistributedDataParallel + DataParallel.
Within each node, all local GPUs serve one copy of the model (DataParallel); gradient information is then shared across nodes (DistributedDataParallel).
```bash
python gpu_template.py --gpus 2 --distributed_backend ddp2
```
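
Since ddp2 only differs from ddp once more than one machine is involved, a multi-node run is the interesting case. A sketch of that configuration (the `num_nodes` argument is an assumption; some older releases named this parameter `nb_gpu_nodes`):

```python
import pytorch_lightning as pl

# Equivalent of `--gpus 2 --distributed_backend ddp2` on two machines:
# within each node, both local GPUs serve one model replica (DP); gradients
# are then all-reduced across the two nodes (DDP).
# NOTE: `num_nodes` is assumed here; check your Lightning version.
trainer = pl.Trainer(gpus=2, num_nodes=2, distributed_backend="ddp2")
```
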