<div align="center">

<img src="docs/source/_static/images/logo.png" width="400px">

**The lightweight PyTorch wrapper for high-performance AI research.
Scale your models, not the boilerplate.**

---

<p align="center">
  <a href="https://www.pytorchlightning.ai/">Website</a> •
  <a href="#key-features">Key Features</a> •
  <a href="#how-to-use">How To Use</a> •
  <a href="https://pytorch-lightning.readthedocs.io/en/stable/">Docs</a> •
  <a href="#examples">Examples</a> •
  <a href="#community">Community</a> •
  <a href="#grid-ai">Grid AI</a> •
  <a href="#license">License</a>
</p>

<!-- DO NOT ADD CONDA DOWNLOADS... README CHANGES MUST BE APPROVED BY EDEN OR WILL -->
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pytorch-lightning)](https://pypi.org/project/pytorch-lightning/)
[![PyPI Status](https://badge.fury.io/py/pytorch-lightning.svg)](https://badge.fury.io/py/pytorch-lightning)
[![PyPI Downloads](https://pepy.tech/badge/pytorch-lightning)](https://pepy.tech/project/pytorch-lightning)
[![Conda](https://img.shields.io/conda/v/conda-forge/pytorch-lightning?label=conda&color=success)](https://anaconda.org/conda-forge/pytorch-lightning)
[![DockerHub](https://img.shields.io/docker/pulls/pytorchlightning/pytorch_lightning.svg)](https://hub.docker.com/r/pytorchlightning/pytorch_lightning)
[![codecov](https://codecov.io/gh/PyTorchLightning/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/PyTorchLightning/pytorch-lightning)

[![ReadTheDocs](https://readthedocs.org/projects/pytorch-lightning/badge/?version=stable)](https://pytorch-lightning.readthedocs.io/en/stable/)
[![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-pw5v393p-qRaDgEk24~EjiZNBpSQFgQ)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)

<!--
[![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning/badge)](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning)
-->

</div>

###### *Codecov is > 90%, but build delays may show less coverage.

---

## PyTorch Lightning is just organized PyTorch

Lightning disentangles PyTorch code to decouple the science from the engineering.

![PT to PL](docs/source/_static/images/general/pl_quick_start_full_compressed.gif)

---

## Lightning Design Philosophy

Lightning structures PyTorch code with these principles:

<div align="center">
  <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/philosophies.jpg" max-height="250px">
</div>

Lightning forces the following structure on your code, which makes it reusable and shareable:

- Research code (the LightningModule).
- Engineering code (you delete it, and it is handled by the Trainer).
- Non-essential research code (logging, etc... this goes in Callbacks).
- Data (use PyTorch DataLoaders or organize them into a LightningDataModule; see the sketch after this list).
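
For illustration, here is a minimal `LightningDataModule` sketch for the MNIST data used later in this README. The class name, batch size and split sizes are just example choices, not part of the library:

```python
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
import pytorch_lightning as pl


class MNISTDataModule(pl.LightningDataModule):
    def __init__(self, data_dir: str = "./", batch_size: int = 32):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size

    def setup(self, stage=None):
        # load the full training set and split it into train/val subsets
        full = MNIST(self.data_dir, train=True, download=True, transform=transforms.ToTensor())
        self.mnist_train, self.mnist_val = random_split(full, [55000, 5000])

    def train_dataloader(self):
        return DataLoader(self.mnist_train, batch_size=self.batch_size)

    def val_dataloader(self):
        return DataLoader(self.mnist_val, batch_size=self.batch_size)
```

(In larger projects the download step usually goes in `prepare_data`; this sketch keeps everything in `setup` for brevity.)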

Once you do this, you can train on multiple GPUs, TPUs, CPUs and even in 16-bit precision without changing your code!

Get started with our [2-step guide](https://pytorch-lightning.readthedocs.io/en/latest/starter/new-project.html).

---

## Continuous Integration

Lightning is rigorously tested across multiple GPUs, TPUs and CPUs, and against major Python and PyTorch versions.

<details>
  <summary>Current build statuses</summary>

<center>

| System / PyTorch ver. | 1.4 (min. req.) | 1.5 | 1.6 | 1.7 | 1.8 (latest) | 1.9 (nightly) |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Conda py3.7 [linux] | [![PyTorch & Conda ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) | [![PyTorch & Conda ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) | [![PyTorch & Conda ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) | [![PyTorch & Conda ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) | [![PyTorch & Conda ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) | [![PyTorch & Conda ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) |
| Linux py3.7 [GPUs**] | - | - | [![Build Status](https://dev.azure.com/PytorchLightning/pytorch-lightning/_apis/build/status/PL.pytorch-lightning%20(GPUs)?branchName=master)](https://dev.azure.com/PytorchLightning/pytorch-lightning/_build/latest?definitionId=6&branchName=master) | - | - | - |
| Linux py3.{6,7} [TPUs***] | - | - | [![TPU tests ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/TPU%20tests/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22TPU+tests%22+branch%3Amaster) | - | [![TPU tests ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/TPU%20tests/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22TPU+tests%22+branch%3Amaster) | - |
| Linux py3.{6,7,8,9} | [![CI complete testing ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | - | [![CI complete testing ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - |
| OSX py3.{6,7,8,9} | - | [![CI complete testing ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [![CI complete testing ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - |
| Windows py3.{6,7,8,9} | [![CI complete testing ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | - | [![CI complete testing ](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=master&event=push )](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - |

- _\** tests run on two NVIDIA P100_
- _\*** tests run on Google GKE TPUv2/3_
- _TPU py3.7 means we support Colab and Kaggle env._

</center>
</details>

---

## How To Use

### Step 0: Install

Simple installation from PyPI
```bash
pip install pytorch-lightning
```

<!-- following section will be skipped from PyPI description -->
<details>
  <summary>Other installation options</summary>

#### Install with optional dependencies

```bash
pip install pytorch-lightning['extra']
```

#### Conda

```bash
conda install pytorch-lightning -c conda-forge
```

#### Install stable 1.3.x

The actual status of 1.3 [stable] is the following:

![CI base testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20base%20testing/badge.svg?branch=release%2F1.3.x&event=push)
![CI complete testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=release%2F1.3.x&event=push)
![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=release%2F1.3.x&event=push)
![TPU tests](https://github.com/PyTorchLightning/pytorch-lightning/workflows/TPU%20tests/badge.svg?branch=release%2F1.3.x&event=push)
![Docs check](https://github.com/PyTorchLightning/pytorch-lightning/workflows/Docs%20check/badge.svg?branch=release%2F1.3.x&event=push)

Install the future patch release from source:

```bash
pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@release/1.3.x --upgrade
```

#### Install bleeding-edge - future 1.4

Install nightly from source (no guarantees):

```bash
pip install https://github.com/PyTorchLightning/pytorch-lightning/archive/master.zip
```

or from testing PyPI:

```bash
pip install -U -i https://test.pypi.org/simple/ pytorch-lightning
```

</details>

<!-- end skipping PyPI description -->

### Step 1: Add these imports

```python
import os
import torch
from torch import nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
import pytorch_lightning as pl
```

### Step 2: Define a LightningModule (nn.Module subclass)

A LightningModule defines a full *system* (i.e. a GAN, autoencoder, BERT or a simple image classifier).

```python
class LitAutoEncoder(pl.LightningModule):

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))

    def forward(self, x):
        # in lightning, forward defines the prediction/inference actions
        embedding = self.encoder(x)
        return embedding

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop. It is independent of forward
        x, y = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer
```

**Note: `training_step` defines the training loop. `forward` defines how the LightningModule behaves during inference/prediction.**

### Step 3: Train!

```python
dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train, val = random_split(dataset, [55000, 5000])
autoencoder = LitAutoEncoder()
trainer = pl.Trainer()
trainer.fit(autoencoder, DataLoader(train), DataLoader(val))
```
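
Once `fit` finishes, the Trainer has checkpointed the model (checkpointing is on by default), and because `forward` was defined as the embedding step, the trained autoencoder can be used for inference directly. A minimal sketch, assuming the default `lightning_logs/` directory (the exact checkpoint filename below is illustrative):

```python
# load the trained weights back; the real path is reported by the trainer's checkpoint callback
autoencoder = LitAutoEncoder.load_from_checkpoint("lightning_logs/version_0/checkpoints/epoch=0-step=999.ckpt")
autoencoder.eval()

# forward() returns embeddings, as defined in Step 2
with torch.no_grad():
    embedding = autoencoder(torch.rand(1, 28 * 28))
```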

## Advanced features

Lightning has [40+ advanced features](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags) designed for professional AI research at scale.

Here are some examples:

<div align="center">
  <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/features_2.jpg" max-height="600px">
</div>

<details>
  <summary>Highlighted feature code snippets</summary>

```python
# 8 GPUs
# no code changes needed
trainer = Trainer(max_epochs=1, gpus=8)

# 256 GPUs
trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32)
```

<summary>Train on TPUs without code changes</summary>

```python
# no code changes needed
trainer = Trainer(tpu_cores=8)
```

<summary>16-bit precision</summary>

```python
# no code changes needed
trainer = Trainer(precision=16)
```

<summary>Experiment managers</summary>

```python
from pytorch_lightning import loggers

# tensorboard
trainer = Trainer(logger=loggers.TensorBoardLogger('logs/'))

# weights and biases
trainer = Trainer(logger=loggers.WandbLogger())

# comet
trainer = Trainer(logger=loggers.CometLogger())

# mlflow
trainer = Trainer(logger=loggers.MLFlowLogger())

# neptune
trainer = Trainer(logger=loggers.NeptuneLogger())

# ... and dozens more
```

<summary>EarlyStopping</summary>

```python
from pytorch_lightning.callbacks import EarlyStopping

es = EarlyStopping(monitor='val_loss')
trainer = Trainer(callbacks=[es])
```

<summary>Checkpointing</summary>

```python
from pytorch_lightning.callbacks import ModelCheckpoint

checkpointing = ModelCheckpoint(monitor='val_loss')
trainer = Trainer(callbacks=[checkpointing])
```

<summary>Export to torchscript (JIT) (production use)</summary>

```python
# torchscript
autoencoder = LitAutoEncoder()
torch.jit.save(autoencoder.to_torchscript(), "model.pt")
```

<summary>Export to ONNX (production use)</summary>

```python
# onnx
import tempfile

with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
    autoencoder = LitAutoEncoder()
    input_sample = torch.randn((1, 28 * 28))
    autoencoder.to_onnx(tmpfile.name, input_sample, export_params=True)
    os.path.isfile(tmpfile.name)
```
</details>

### Pro-level control of training loops (advanced users)

For complex/professional level work, you have optional full control of the training loop and optimizers.

```python
class LitAutoEncoder(pl.LightningModule):

    def __init__(self):
        super().__init__()
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        # access your optimizers with use_pl_optimizer=False. Default is True
        opt_a, opt_b = self.optimizers(use_pl_optimizer=True)

        loss_a = ...
        self.manual_backward(loss_a, opt_a)
        opt_a.step()
        opt_a.zero_grad()

        loss_b = ...
        self.manual_backward(loss_b, opt_b, retain_graph=True)
        self.manual_backward(loss_b, opt_b)
        opt_b.step()
        opt_b.zero_grad()
```
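
For the snippet above to run, `configure_optimizers` has to return the two optimizers that `self.optimizers()` unpacks. A minimal sketch (splitting the parameters between the encoder and decoder of the earlier example is just an illustrative assumption):

```python
class LitAutoEncoder(pl.LightningModule):
    ...

    def configure_optimizers(self):
        # return both optimizers; Lightning hands them back via self.optimizers()
        opt_a = torch.optim.Adam(self.encoder.parameters(), lr=1e-3)
        opt_b = torch.optim.Adam(self.decoder.parameters(), lr=1e-3)
        return opt_a, opt_b
```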

---

## Advantages over unstructured PyTorch

* Models become hardware agnostic
* Code is clear to read because engineering code is abstracted away
* Easier to reproduce
* Make fewer mistakes because Lightning handles the tricky engineering
* Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
* Lightning has dozens of integrations with popular machine learning tools
* [Tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). We test every combination of supported PyTorch and Python versions, every OS, multi GPUs and even TPUs.
* Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).

---

## Examples

###### Hello world

- [MNIST hello world](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/01-mnist-hello-world.ipynb)
- [MNIST on TPUs](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/06-mnist-tpu-training.ipynb)

###### Contrastive Learning

- [BYOL](https://lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#byol)
- [CPC v2](https://lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#cpc-v2)
- [Moco v2](https://lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#moco-v2)
- [SIMCLR](https://lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#simclr)

###### NLP

- [BERT](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb)
- [GPT-2](https://lightning-bolts.readthedocs.io/en/latest/convolutional.html#gpt-2)

###### Reinforcement Learning

- [DQN](https://lightning-bolts.readthedocs.io/en/latest/reinforce_learn.html#dqn-models)
- [Dueling-DQN](https://lightning-bolts.readthedocs.io/en/latest/reinforce_learn.html#dueling-dqn)
- [Reinforce](https://lightning-bolts.readthedocs.io/en/latest/reinforce_learn.html#reinforce)

###### Vision

- [GAN](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/03-basic-gan.ipynb)

###### Classic ML

- [Logistic Regression](https://lightning-bolts.readthedocs.io/en/latest/classic_ml.html#logistic-regression)
- [Linear Regression](https://lightning-bolts.readthedocs.io/en/latest/classic_ml.html#linear-regression)

---

## Community

The Lightning community is maintained by

- [10+ core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, research scientists, and Ph.D. students from top AI labs.
- 400+ community contributors.

Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/), which requires projects to have solid testing, documentation and support.

### Asking for help

If you have any questions please:

1. [Read the docs](https://pytorch-lightning.rtfd.io/en/latest).
2. [Search through existing Discussions](https://github.com/PyTorchLightning/pytorch-lightning/discussions), or [add a new question](https://github.com/PyTorchLightning/pytorch-lightning/discussions/new).
3. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-pw5v393p-qRaDgEk24~EjiZNBpSQFgQ).

### Funding

[We're venture funded](https://techcrunch.com/2020/10/08/grid-ai-raises-18-6m-series-a-to-help-ai-researchers-and-engineers-bring-their-models-to-production/) to make sure we can provide around-the-clock support, hire a full-time staff, attend conferences, and move faster through implementing features you request.

---

## Grid AI

Grid AI is our platform for training models at scale on the cloud!

**Sign up for our FREE community Tier [here](https://www.grid.ai/pricing/)**

To use grid, take your regular command:

```bash
python my_model.py --learning_rate 1e-6 --layers 2 --gpus 4
```

And change it to use the grid train command:

```bash
grid train --grid_gpus 4 my_model.py --learning_rate 'uniform(1e-6, 1e-1, 20)' --layers '[2, 4, 8, 16]'
```

The above command will launch 80 (20 * 4) experiments, each running on 4 GPUs (320 GPUs!), with ZERO changes to your code.

---

## License

Please observe the Apache 2.0 license that is listed in this repository.
In addition, the Lightning framework is Patent Pending.

## BibTeX

If you want to cite the framework, feel free to use this (but only if you loved it 😊) or [Zenodo](https://zenodo.org/record/3828935#.YC45Lc9Khqs):

```bibtex
@article{falcon2019pytorch,
  title={PyTorch Lightning},
  author={Falcon, WA, et al.},
  journal={GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning},
  volume={3},
  year={2019}
}
```