The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
Website • Key Features • How To Use • Docs • Examples • Community • Grid AI • License
*Codecov is >90%, but build delays may show less
PyTorch Lightning is just organized PyTorch
Lightning disentangles PyTorch code to decouple the science from the engineering.
Lightning Design Philosophy
Lightning forces the following structure on your code, which makes it reusable and shareable:
- Research code (the LightningModule).
- Engineering code (you delete this, and it is handled by the Trainer).
- Non-essential research code (logging, etc.; this goes in Callbacks).
- Data (use PyTorch DataLoaders or organize them into a LightningDataModule; see the sketch below).
Once you do this, you can train on multiple GPUs, TPUs, CPUs, and even in 16-bit precision without changing your code!
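A LightningDataModule groups dataset preparation and dataloaders into one shareable, reusable class. Here is a minimal, illustrative sketch (the MNIST dataset, the 55000/5000 split, and the batch size are assumptions borrowed from the example later in this README):

from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
import pytorch_lightning as pl

class MNISTDataModule(pl.LightningDataModule):
    def __init__(self, data_dir=".", batch_size=32):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size

    def prepare_data(self):
        # download once (runs on a single process)
        MNIST(self.data_dir, download=True)

    def setup(self, stage=None):
        # build the train/val splits (runs on every process)
        full = MNIST(self.data_dir, transform=transforms.ToTensor())
        self.train_set, self.val_set = random_split(full, [55000, 5000])

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)

A datamodule is passed straight to the trainer, e.g. trainer.fit(model, datamodule=MNISTDataModule()).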
Get started with our 2-step guide
Continuous Integration
Lightning is rigorously tested across multiple GPUs, TPUs, and CPUs and against major Python and PyTorch versions.
Current build statuses
- ** tests run on two NVIDIA P100s
- *** tests run on Google GKE TPUv2/3
- TPU py3.7 means we support Colab and Kaggle environments.
How To Use
Step 0: Install
Simple installation from PyPI
pip install pytorch-lightning
Other installation options
Install with optional dependencies
pip install pytorch-lightning['extra']
Conda
conda install pytorch-lightning -c conda-forge
Install stable 1.4.x
The actual status of the 1.4 [stable] branch is the following:
Install the future release from source
pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@release/1.4.x --upgrade
Install bleeding-edge (future 1.5)
Install nightly from source (no guarantees)
pip install https://github.com/PyTorchLightning/pytorch-lightning/archive/master.zip
or from testing PyPI
pip install -U -i https://test.pypi.org/simple/ pytorch-lightning
Step 1: Add these imports
import os
import torch
from torch import nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
import pytorch_lightning as pl
Step 2: Define a LightningModule (nn.Module subclass)
A LightningModule defines a full system (i.e., a GAN, autoencoder, BERT, or a simple image classifier).
class LitAutoEncoder(pl.LightningModule):
def __init__(self):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))
def forward(self, x):
# in lightning, forward defines the prediction/inference actions
embedding = self.encoder(x)
return embedding
def training_step(self, batch, batch_idx):
# training_step defines the train loop. It is independent of forward
x, y = batch
x = x.view(x.size(0), -1)
z = self.encoder(x)
x_hat = self.decoder(z)
loss = F.mse_loss(x_hat, x)
self.log("train_loss", loss)
return loss
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
Note: training_step defines the training loop; forward defines how the LightningModule behaves during inference/prediction.
Step 3: Train!
dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train, val = random_split(dataset, [55000, 5000])
autoencoder = LitAutoEncoder()
trainer = pl.Trainer()
trainer.fit(autoencoder, DataLoader(train), DataLoader(val))
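Once fitted, the model can be called like any PyTorch module, and forward (the encoder above) produces the embeddings. A minimal, illustrative sketch (the fake batch below is an assumption, not part of the example):

autoencoder.eval()
with torch.no_grad():
    x = torch.rand(4, 28 * 28)   # a fake batch of flattened 28x28 images
    embeddings = autoencoder(x)  # calls forward(), i.e. the encoder
print(embeddings.shape)          # torch.Size([4, 3])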
Advanced features
Lightning has over 40 advanced features designed for professional AI research at scale.
Here are some examples:
Highlighted feature code snippets
Train on GPUs without code changes
# 8 GPUs
# no code changes needed
trainer = Trainer(max_epochs=1, gpus=8)
# 256 GPUs
trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32)
Train on TPUs without code changes
# no code changes needed
trainer = Trainer(tpu_cores=8)
16-bit precision
# no code changes needed
trainer = Trainer(precision=16)
Experiment managers
from pytorch_lightning import loggers
# tensorboard
trainer = Trainer(logger=loggers.TensorBoardLogger('logs/'))
# weights and biases
trainer = Trainer(logger=loggers.WandbLogger())
# comet
trainer = Trainer(logger=loggers.CometLogger())
# mlflow
trainer = Trainer(logger=loggers.MLFlowLogger())
# neptune
trainer = Trainer(logger=loggers.NeptuneLogger())
# ... and dozens more
EarlyStopping
from pytorch_lightning.callbacks import EarlyStopping

es = EarlyStopping(monitor='val_loss')
trainer = Trainer(callbacks=[es])
Checkpointing
from pytorch_lightning.callbacks import ModelCheckpoint

checkpointing = ModelCheckpoint(monitor='val_loss')
trainer = Trainer(callbacks=[checkpointing])
Export to TorchScript (JIT) (production use)
# torchscript
autoencoder = LitAutoEncoder()
torch.jit.save(autoencoder.to_torchscript(), "model.pt")
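The exported file can then be loaded in a serving process without Lightning. A minimal, illustrative sketch (the input shape assumes the flattened 28x28 images used by the autoencoder above):

# load the exported TorchScript module and run it
scripted = torch.jit.load("model.pt")
scripted.eval()
with torch.no_grad():
    embedding = scripted(torch.rand(1, 28 * 28))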
Export to ONNX (production use)
# onnx
import tempfile

with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
    autoencoder = LitAutoEncoder()
    input_sample = torch.randn((1, 28 * 28))  # input size matches the 28*28 encoder above
    autoencoder.to_onnx(tmpfile.name, input_sample, export_params=True)
    os.path.isfile(tmpfile.name)
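The exported ONNX file can be executed with onnxruntime (an optional dependency, not part of the example above). A minimal, illustrative sketch:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(tmpfile.name)
input_name = session.get_inputs()[0].name
batch = np.random.randn(1, 28 * 28).astype(np.float32)  # assumed input shape
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)  # (1, 3) embeddings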
Pro-level control of training loops (advanced users)
For complex/professional-level work, you have optional full control of the training loop and optimizers.
class LitAutoEncoder(pl.LightningModule):
def __init__(self):
super().__init__()
self.automatic_optimization = False
def training_step(self, batch, batch_idx):
        # access your optimizers (pass use_pl_optimizer=False to get the raw optimizers; the default is True)
opt_a, opt_b = self.optimizers(use_pl_optimizer=True)
loss_a = ...
self.manual_backward(loss_a, opt_a)
opt_a.step()
opt_a.zero_grad()
loss_b = ...
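        # calling backward twice on the same loss requires retain_graph=True on the first call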
self.manual_backward(loss_b, opt_b, retain_graph=True)
self.manual_backward(loss_b, opt_b)
opt_b.step()
opt_b.zero_grad()
Advantages over unstructured PyTorch
- Models become hardware agnostic
- Code is clear to read because engineering code is abstracted away
- Easier to reproduce
- Make fewer mistakes because Lightning handles the tricky engineering
- Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
- Lightning has dozens of integrations with popular machine learning tools.
- Tested rigorously with every new PR. We test every combination of supported PyTorch and Python versions, every OS, multiple GPUs, and even TPUs.
- Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).
Examples
Hello world
Contrastive Learning
NLP
Reinforcement Learning
Vision
Classic ML
Community
The Lightning community is maintained by
- 10+ core contributors who are a mix of professional engineers, research scientists, and PhD students from top AI labs.
- 480+ active community contributors.
Want to help us build Lightning and reduce boilerplate for thousands of researchers? Learn how to make your first contribution here
Lightning is also part of the PyTorch ecosystem which requires projects to have solid testing, documentation and support.
Asking for help
If you have any questions, please:
Funding
We're venture funded to make sure we can provide around-the-clock support, hire a full-time staff, attend conferences, and move faster by implementing the features you request.
Grid AI
Grid AI is our platform for training models at scale on the cloud!
Sign up for our FREE community Tier here
To use Grid, take your regular command:
python my_model.py --learning_rate 1e-6 --layers 2 --gpus 4
And change it to use the grid train command:
grid train --grid_gpus 4 my_model.py --learning_rate 'uniform(1e-6, 1e-1, 20)' --layers '[2, 4, 8, 16]'
The above command will launch 80 (20 * 4) experiments, each running on 4 GPUs (320 GPUs!), with ZERO changes to your code.
License
Please observe the Apache 2.0 license that is listed in this repository. In addition, the Lightning framework is Patent Pending.
BibTeX
If you want to cite the framework, feel free to use this (but only if you loved it 😊) or Zenodo:
@article{falcon2019pytorch,
title={PyTorch Lightning},
author={Falcon, WA, et al.},
journal={GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning},
volume={3},
year={2019}
}