lightning/pl_examples/basic_examples/README.md


## Basic Examples
Use these examples to see how Lightning works.
#### MNIST
Trains MNIST where the model is defined inside the LightningModule.
```bash
# cpu
python mnist.py
# gpus (any number)
python mnist.py --gpus 2
# dataparallel
python mnist.py --gpus 2 --distributed_backend 'dp'
```
---
#### Image classifier
Generic image classifier with an arbitrary backbone (i.e., a simple system).
```bash
# cpu
python image_classifier.py
# gpus (any number)
python image_classifier.py --gpus 2
# dataparallel
python image_classifier.py --gpus 2 --distributed_backend 'dp'
```
---
#### Autoencoder
Demonstrates the flexibility of a system: it supports arbitrarily complex training loops.
```bash
# cpu
python autoencoder.py
# gpus (any number)
python autoencoder.py --gpus 2
# dataparallel
python autoencoder.py --gpus 2 --distributed_backend 'dp'
```
---
## Multi-node example
This demo launches a job using 2 GPUs on 2 different nodes (4 GPUs total).
To run this demo do the following:
1. Log into the jumphost node of your SLURM-managed cluster.
2. Create a conda environment with Lightning and a GPU PyTorch version.
3. Choose a script to submit
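Step 2 might look like the following (a sketch only; the environment name `YourEnv` and the Python/CUDA versions are assumptions — match them to your cluster):

```shell
# create an environment for the job (name and versions are examples)
conda create -n YourEnv python=3.7 -y
conda activate YourEnv

# install a GPU-enabled PyTorch build plus Lightning
conda install pytorch cudatoolkit=10.0 -c pytorch -y
pip install pytorch-lightning
```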
#### DDP
Submit this job to run with DistributedDataParallel (2 nodes, 2 GPUs each).
```bash
sbatch submit_ddp_job.sh YourEnv
```
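For reference, the submission script might look roughly like this (a hedged sketch: the SBATCH directives, the `mnist.py` entry point, and the Trainer flag names are assumptions — adjust them to your cluster and Lightning version):

```shell
#!/bin/bash
#SBATCH --nodes=2               # 2 nodes
#SBATCH --gres=gpu:2            # 2 GPUs per node
#SBATCH --ntasks-per-node=2     # one task per GPU
#SBATCH --time=00:30:00

# activate the conda environment passed as the first argument
source activate $1

# flag names are assumptions; check your Lightning version's Trainer args
srun python mnist.py --gpus 2 --num_nodes 2 --distributed_backend ddp
```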
#### DDP2
Submit this job to run with an alternative DistributedDataParallel implementation.
In this version, each node acts like DataParallel but syncs gradients across nodes like DDP.
```bash
sbatch submit_ddp2_job.sh YourEnv
```