# Basic Examples

Use these examples to test how Lightning works.

## MNIST

Trains MNIST where the model is defined inside the LightningModule.

```bash
# cpu
python simple_image_classifier.py

# gpus (any number)
python simple_image_classifier.py --trainer.gpus 2

# dataparallel
python simple_image_classifier.py --trainer.gpus 2 --trainer.accelerator 'dp'
```

## MNIST with DALI

The MNIST example above, accelerated with NVIDIA DALI. Requires NVIDIA DALI built for your CUDA version; see the NVIDIA DALI installation documentation.

```bash
python dali_image_classifier.py
```

## Image classifier

Generic image classifier with an arbitrary backbone (i.e., a simple system).

```bash
# cpu
python backbone_image_classifier.py

# gpus (any number)
python backbone_image_classifier.py --trainer.gpus 2

# dataparallel
python backbone_image_classifier.py --trainer.gpus 2 --trainer.accelerator 'dp'
```

## Autoencoder

Demonstrates the power of a system: the training loop can be arbitrarily complex.

```bash
# cpu
python autoencoder.py

# gpus (any number)
python autoencoder.py --trainer.gpus 2

# dataparallel
python autoencoder.py --trainer.gpus 2 --trainer.accelerator 'dp'
```

## Multi-node example

This demo launches a job using 2 GPUs on each of 2 different nodes (4 GPUs total). To run it:

1. Log in to the jump host of your SLURM-managed cluster.
2. Create a conda environment with Lightning and a GPU-enabled PyTorch build.
3. Choose a script to submit.
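A hedged sketch of what such a submission script might look like (the environment name, time limit, and partition are placeholders; adapt the `#SBATCH` directives to your cluster):

```shell
#!/bin/bash
# Request 2 nodes with 2 GPUs each; Lightning needs one task per GPU.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
#SBATCH --time=00:30:00

# activate the conda environment created in step 2 (name is a placeholder)
source activate lightning-env

# srun launches one process per task; Lightning picks up the SLURM
# environment and wires up distributed training across the 4 GPUs
srun python simple_image_classifier.py --trainer.gpus 2 --trainer.num_nodes 2
```

Submit it with `sbatch <script>.sh`; the `--trainer.gpus` and `--trainer.num_nodes` values must match the resources requested in the `#SBATCH` directives.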