## Basic Examples
Use these examples to test how Lightning works.
#### MNIST
Trains a classifier on MNIST, with the model defined inside the `LightningModule`.
```bash
# cpu
python simple_image_classifier.py

# gpus (any number)
python simple_image_classifier.py --trainer.gpus 2

# dataparallel
python simple_image_classifier.py --trainer.gpus 2 --trainer.accelerator 'dp'
```
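For reference, "the model defined inside the `LightningModule`" boils down to the pattern below. This is only a minimal sketch; the class and layer names are illustrative, not the exact contents of `simple_image_classifier.py`:

```python
import torch
from torch import nn
from torch.nn import functional as F
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    """Model, training step, and optimizer all live in one LightningModule."""

    def __init__(self, hidden_dim: int = 128, learning_rate: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.l1 = nn.Linear(28 * 28, hidden_dim)
        self.l2 = nn.Linear(hidden_dim, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return self.l2(torch.relu(self.l1(x)))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
```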
---
#### MNIST with DALI
The MNIST example above, but using [NVIDIA DALI](https://developer.nvidia.com/DALI) for data loading.
Requires NVIDIA DALI to be installed for your CUDA version; see the [installation guide](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/installation.html).
```bash
python dali_image_classifier.py
```
---
#### Image classifier
Generic image classifier with an arbitrary backbone (i.e., a simple system).
```bash
# cpu
python backbone_image_classifier.py

# gpus (any number)
python backbone_image_classifier.py --trainer.gpus 2

# dataparallel
python backbone_image_classifier.py --trainer.gpus 2 --trainer.accelerator 'dp'
```
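What makes this a "system" rather than just a model: the `LightningModule` owns the training logic, while the backbone is any `nn.Module` passed in from the outside. A rough sketch of the pattern (names are illustrative, not the exact code in `backbone_image_classifier.py`):

```python
import torch
from torch import nn
from torch.nn import functional as F
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    """Training logic lives here; the backbone network is swappable."""

    def __init__(self, backbone: nn.Module, learning_rate: float = 1e-3):
        super().__init__()
        self.backbone = backbone
        self.learning_rate = learning_rate

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.backbone(x)
        return F.cross_entropy(logits, y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.learning_rate)


# any backbone works, e.g. a small MLP for 28x28 images
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
model = LitClassifier(backbone)
```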
---
#### Autoencoder
Shows the power of a system: the training loop can be arbitrarily complex.
```bash
# cpu
python autoencoder.py

# gpus (any number)
python autoencoder.py --trainer.gpus 2

# dataparallel
python autoencoder.py --trainer.gpus 2 --trainer.accelerator 'dp'
```
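"Arbitrarily complex training loops" means `training_step` is free-form: you can wire together several sub-networks and compute any loss, and the Trainer still handles the loop, devices, and checkpointing. A minimal sketch (illustrative, not the exact contents of `autoencoder.py`):

```python
import torch
from torch import nn
from torch.nn import functional as F
import pytorch_lightning as pl


class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

    def training_step(self, batch, batch_idx):
        # the loop body is up to you: encode, decode, reconstruct, any loss
        x, _ = batch
        x = x.view(x.size(0), -1)
        x_hat = self.decoder(self.encoder(x))
        return F.mse_loss(x_hat, x)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```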
---
## Multi-node example
This demo launches a job that uses 2 GPUs on each of 2 nodes (4 GPUs total); inside the scripts this comes down to a few Trainer flags, sketched after the steps below.
To run this demo, do the following:
1. Log into the jumphost node of your SLURM-managed cluster.
2. Create a conda environment with Lightning and a GPU-enabled build of PyTorch.
3. Choose a script to submit.
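
Inside the training script itself, the multi-node part reduces to a few Trainer flags; when launched under SLURM, Lightning picks up the node and rank information from the environment. A hedged sketch for the 2 nodes x 2 GPUs setup described above (the flag values shown are an assumption about how the submitted script is configured):

```python
import pytorch_lightning as pl

# 2 GPUs per node x 2 nodes = 4 GPUs total; DDP is the usual choice for multi-node training
trainer = pl.Trainer(gpus=2, num_nodes=2, accelerator='ddp')
```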