Future 2/n: stand-alone examples (#13294)

* move: pl_examples >> src/

* convert pl_examples package to plain examples

* update CI for examples

* ci

* missing

* install
Jirka Borovec 2022-06-15 14:53:51 +02:00 committed by GitHub
parent 759e89df21
commit 9cc714cdd1
61 changed files with 667 additions and 491 deletions

View File

@ -106,10 +106,11 @@ jobs:
- script: |
set -e
python -m pytest pl_examples -v --maxfail=2 --durations=0
bash pl_examples/run_examples.sh --trainer.accelerator=gpu --trainer.devices=1
bash pl_examples/run_examples.sh --trainer.accelerator=gpu --trainer.devices=2 --trainer.strategy=ddp
bash pl_examples/run_examples.sh --trainer.accelerator=gpu --trainer.devices=2 --trainer.strategy=ddp --trainer.precision=16
bash run_ddp_examples.sh
bash run_pl_examples.sh --trainer.accelerator=gpu --trainer.devices=1
bash run_pl_examples.sh --trainer.accelerator=gpu --trainer.devices=2 --trainer.strategy=ddp
bash run_pl_examples.sh --trainer.accelerator=gpu --trainer.devices=2 --trainer.strategy=ddp --trainer.precision=16
workingDirectory: examples
env:
PL_USE_MOCKED_MNIST: "1"
displayName: 'Testing: examples'

View File

@ -49,7 +49,8 @@ jobs:
- bash: |
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
python "pl_examples/hpu_examples/simple_mnist/mnist.py"
python "pl_hpu/mnist_sample.py"
workingDirectory: examples
displayName: 'Testing: HPU examples'
- task: PublishTestResults@2

View File

@ -16,11 +16,11 @@ assignees: ''
Please reproduce using the BoringModel!
You can use the following Colab link:
https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/pl_examples/bug_report/bug_report_model.ipynb
https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/examples/pl_bug_report/bug_report_model.ipynb
IMPORTANT: has to be public.
or this simple template:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/bug_report/bug_report_model.py
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/examples/pl_bug_report/bug_report_model.py
If you could not reproduce using the BoringModel and still think there's a bug, please post here
but remember, bugs with code are fixed faster!
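For orientation, a minimal self-contained repro in the spirit of the linked template (the RandomDataset/BoringModel below are illustrative stand-ins, not the exact contents of bug_report_model.py):

# Hypothetical minimal reproduction, loosely modeled on the BoringModel template.
import torch
from torch.utils.data import DataLoader, Dataset

from pytorch_lightning import LightningModule, Trainer


class RandomDataset(Dataset):
    def __init__(self, size: int = 32, length: int = 64):
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


if __name__ == "__main__":
    trainer = Trainer(fast_dev_run=True)  # add the options that trigger your bug here
    trainer.fit(BoringModel(), train_dataloaders=DataLoader(RandomDataset(), batch_size=2))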

View File

@ -127,13 +127,6 @@ jobs:
# NOTE: do not include coverage report here, see: https://github.com/nedbat/coveragepy/issues/1003
coverage run --source pytorch_lightning -m pytest tests -v --durations=50 --junitxml=junit/test-results-${{ runner.os }}-py${{ matrix.python-version }}-${{ matrix.requires }}-${{ matrix.release }}.xml
- name: Examples
run: |
# adjust versions according installed Torch version
python ./requirements/adjust-versions.py requirements/examples.txt
pip install -r requirements/examples.txt --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --upgrade
python -m pytest pl_examples -v --durations=10
- name: Upload pytest results
uses: actions/upload-artifact@v2
with:
@ -142,6 +135,16 @@ jobs:
if-no-files-found: error
if: failure()
- name: Prepare Examples
run: |
# adjust versions according installed Torch version
python ./requirements/adjust-versions.py requirements/examples.txt
pip install -r requirements/examples.txt --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --upgrade
- name: Run Examples
working-directory: ./examples
run: python -m pytest test_pl_examples.py -v --durations=10
- name: Statistics
if: success()
run: |

View File

@ -26,7 +26,7 @@ test: clean
pip install -r requirements/devel.txt
pip install -r requirements/strategies.txt
# run tests with coverage
python -m coverage run --source pytorch_lightning -m pytest pytorch_lightning tests pl_examples -v
python -m coverage run --source pytorch_lightning -m pytest pytorch_lightning tests -v
python -m coverage report
docs: clean

View File

@ -38,7 +38,7 @@ RUN \
fi && \
# save the examples
mv pytorch-lightning/_notebooks/.notebooks/ notebooks && \
mv pytorch-lightning/pl_examples . && \
mv pytorch-lightning/examples . && \
# Installations \
pip install -q fire && \

View File

@ -27,7 +27,7 @@ COPY ./ /home/pytorch-lightning/
RUN \
cd /home && \
mv pytorch-lightning/_notebooks notebooks && \
mv pytorch-lightning/pl_examples . && \
mv pytorch-lightning/examples . && \
# replace by specific version if asked
if [ ! -z "$LIGHTNING_VERSION" ] ; then \
rm -rf pytorch-lightning ; \

View File

@ -45,7 +45,7 @@ This enables advanced users to provide their own BF16 and FP32 operator list ins
accelerator="hpu",
devices=1,
# Optional Habana mixed precision params to be set
# Checkout `pl_examples/hpu_examples/simple_mnist/ops_bf16_mnist.txt` for the format
# Checkout `examples/pl_hpu/ops_bf16_mnist.txt` for the format
plugins=[
HPUPrecisionPlugin(
precision=16,
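For context, a minimal sketch of how the plugin above is passed to a Trainer; only the precision is set, and the optional BF16/FP32 operator-list arguments are deliberately left out rather than guessed:

# Sketch: enabling Habana mixed precision via the HPU precision plugin.
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import HPUPrecisionPlugin

trainer = Trainer(
    accelerator="hpu",
    devices=1,
    # The optional operator lists (see examples/pl_hpu/ops_bf16_mnist.txt for the
    # file format) can also be supplied to the plugin; this sketch sets only precision.
    plugins=[HPUPrecisionPlugin(precision=16)],
)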

View File

@ -60,7 +60,7 @@ Known limitations
Currently there are some known limitations that are being addressed in the near future to make the experience seamless when moving from different devices.
Please see the `MNIST example <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/ipu_examples/mnist.py>`__ which displays most of the limitations and how to overcome them till they are resolved.
Please see the `MNIST example <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/examples/pl_ipu/mnist_sample.py>`__ which displays most of the limitations and how to overcome them till they are resolved.
* ``self.log`` is not supported in the ``training_step``, ``validation_step``, ``test_step`` or ``predict_step``. This is due to the step function being traced and sent to the IPU devices. We're actively working on fixing this
* Multiple optimizers are not supported. ``training_step`` only supports returning one loss from the ``training_step`` function as a result

View File

@ -134,7 +134,7 @@ in a `HyperOptArgumentParser
Here is an example where you run a grid search of 9 combinations of hyperparameters.
See also the multi-node examples
`here <https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples/basic_examples>`__.
`here <https://github.com/PyTorchLightning/pytorch-lightning/tree/master/examples/pl_basics>`__.
.. code-block:: python

View File

@ -93,7 +93,7 @@ If you believe this to be useful, please open a `feature request <https://github
What are the performance impacts?
*********************************
Fault-tolerant Training was tested on common and worst-case scenarios in order to measure the impact of the internal state tracking on the total training time.
On tiny models like the `BoringModel and RandomDataset <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/bug_report/bug_report_model.py>`_
On tiny models like the `BoringModel and RandomDataset <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/examples/pl_bug_report/bug_report_model.py>`_
which has virtually no data loading and processing overhead, we noticed up to 50% longer training time with fault tolerance enabled.
In this worst-case scenario, fault-tolerant adds an overhead that is noticeable in comparison to the compute time for dataloading itself.
However, for more realistic training workloads where data loading and preprocessing is more expensive, the constant overhead that fault tolerance adds becomes less noticeable or not noticeable at all.

View File

@ -441,12 +441,12 @@ Advanced Examples
* - Link to Example
- Description
* - `K-fold Cross Validation <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/loop_examples/kfold.py>`_
* - `K-fold Cross Validation <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/examples/pl_loops/kfold.py>`_
- `KFold / Cross Validation <https://en.wikipedia.org/wiki/Cross-validation_(statistics)>`__ is a machine learning practice in which the training dataset is being partitioned into ``num_folds`` complementary subsets.
One cross validation round will perform fitting where one fold is left out for validation and the other folds are used for training.
To reduce variability, once all rounds are performed using the different folds, the trained models are ensembled and their predictions are
averaged when estimating the model's predictive performance on the test dataset.
* - `Yielding Training Step <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/loop_examples/yielding_training_step.py>`_
* - `Yielding Training Step <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/examples/pl_loops/yielding_training_step.py>`_
- This loop enables you to write the :meth:`~pytorch_lightning.core.module.LightningModule.training_step` hook
as a Python Generator for automatic optimization with multiple optimizers, i.e., you can :code:`yield` loss
values from it instead of returning them. This can enable more elegant and expressive implementations, as shown
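A minimal sketch of that generator-style training_step (a hypothetical two-optimizer module, not the linked example verbatim); it only trains once the custom YieldLoop from the example is attached to the Trainer:

# Sketch only: yielding one loss per optimizer instead of returning a value.
# Requires the YieldLoop from the linked example; the stock optimizer loop
# expects training_step to return, not yield.
from typing import Generator

from pytorch_lightning import LightningModule


class TwoOptimizerModule(LightningModule):
    def training_step(self, batch, batch_idx, optimizer_idx=0) -> Generator:
        loss_a = self.compute_first_loss(batch)   # hypothetical helper
        yield loss_a                              # consumed by the first optimizer
        loss_b = self.compute_second_loss(batch)  # hypothetical helper
        yield loss_b                              # consumed by the second optimizer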

View File

@ -123,7 +123,7 @@ Here are five required steps to convert to :class:`~pytorch_lightning.lite.Light
Lite(...).run(args)
That's all. You can now train on any kind of device and scale your training. Check out `this <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/basic_examples/mnist_examples/image_classifier_2_lite.py>`_ full MNIST training example with LightningLite.
That's all. You can now train on any kind of device and scale your training. Check out `this <https://github.com/PyTorchLightning/pytorch-lightning/blob/master/examples/convert_from_pt_to_pl/image_classifier_2_lite.py>`_ full MNIST training example with LightningLite.
:class:`~pytorch_lightning.lite.LightningLite` takes care of device management, so you don't have to.
You should remove any device-specific logic within your code.
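As a quick reference, a minimal LightningLite run along those lines (model, data, and hyperparameters are placeholders):

# Sketch: the basic LightningLite pattern after the conversion steps above.
import torch
from torch.utils.data import DataLoader, TensorDataset

from pytorch_lightning.lite import LightningLite


class Lite(LightningLite):
    def run(self, epochs: int = 1):
        model = torch.nn.Linear(32, 2)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        model, optimizer = self.setup(model, optimizer)  # move to the selected device

        dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        dataloader = self.setup_dataloaders(DataLoader(dataset, batch_size=8))

        model.train()
        for _ in range(epochs):
            for x, y in dataloader:
                optimizer.zero_grad()
                loss = torch.nn.functional.cross_entropy(model(x), y)
                self.backward(loss)  # replaces loss.backward()
                optimizer.step()


if __name__ == "__main__":
    Lite(accelerator="cpu", devices=1).run()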

examples/README.md (new file, 53 lines)
View File

@ -0,0 +1,53 @@
# Examples
Our most robust examples showing all sorts of implementations
can be found in our sister library [Lightning Bolts](https://pytorch-lightning.readthedocs.io/en/latest/ecosystem/bolts.html).
______________________________________________________________________
## MNIST Examples
5 MNIST examples showing how to gradually convert from pure PyTorch to PyTorch Lightning.
The transition through [LightningLite](https://pytorch-lightning.readthedocs.io/en/latest/starter/lightning_lite.html) from pure PyTorch is optional, but it might be helpful to learn about it.
- [MNIST with vanilla PyTorch](convert_from_pt_to_pl/image_classifier_1_pytorch.py)
- [MNIST with LightningLite](convert_from_pt_to_pl/image_classifier_2_lite.py)
- [MNIST LightningLite to LightningModule](convert_from_pt_to_pl/image_classifier_3_lite_to_lightning_module.py)
- [MNIST with LightningModule](convert_from_pt_to_pl/image_classifier_4_lightning_module.py)
- [MNIST with LightningModule + LightningDataModule](convert_from_pt_to_pl/image_classifier_5_lightning_datamodule.py)
______________________________________________________________________
## Basic Examples
In this folder, we have 2 simple examples:
- [Image Classifier](pl_basics/backbone_image_classifier.py) (trains arbitrary datasets with arbitrary backbones).
- [Image Classifier + DALI](convert_from_pt_to_pl/image_classifier_4_dali.py) (defines the model inside the `LightningModule`).
- [Autoencoder](pl_basics/autoencoder.py)
______________________________________________________________________
## Domain Examples
This folder contains older examples. You should instead use the examples
in [Lightning Bolts](https://pytorch-lightning.readthedocs.io/en/latest/ecosystem/bolts.html)
for advanced use cases.
______________________________________________________________________
## Basic Examples
In this folder, we have 1 simple example:
- [Image Classifier + DALI](pl_integrations/dali_image_classifier.py) (defines the model inside the `LightningModule`).
______________________________________________________________________
## Loop examples
Contains implementations leveraging [loop customization](https://pytorch-lightning.readthedocs.io/en/latest/extensions/loops.html) to enhance the Trainer with new optimization routines.
- [K-fold Cross Validation Loop](pl_loops/kfold.py): Implementation of cross validation in a loop and special datamodule.
- [Yield Loop](pl_loops/yielding_training_step.py): Enables yielding from the training_step like in a Python generator. Useful for automatic optimization with multiple optimizers.

View File

@ -12,44 +12,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from os import path
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as T
from torch.optim.lr_scheduler import StepLR
from pl_examples.basic_examples.mnist_datamodule import MNIST
# Credit to the PyTorch Team
# Taken from https://github.com/pytorch/examples/blob/master/mnist/main.py and slightly adapted.
from pytorch_lightning.demos.boring_classes import Net
from pytorch_lightning.demos.mnist_datamodule import MNIST
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
def run(hparams):
@ -60,8 +36,8 @@ def run(hparams):
device = torch.device("cuda" if use_cuda else "cpu")
transform = T.Compose([T.ToTensor(), T.Normalize((0.1307,), (0.3081,))])
train_dataset = MNIST("./data", train=True, download=True, transform=transform)
test_dataset = MNIST("./data", train=False, transform=transform)
train_dataset = MNIST(DATASETS_PATH, train=True, download=True, transform=transform)
test_dataset = MNIST(DATASETS_PATH, train=False, transform=transform)
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=hparams.batch_size,

View File

@ -29,6 +29,7 @@ Learn more from the documentation: https://pytorch-lightning.readthedocs.io/en/l
"""
import argparse
from os import path
import torch
import torch.nn.functional as F
@ -37,11 +38,13 @@ import torchvision.transforms as T
from torch.optim.lr_scheduler import StepLR
from torchmetrics.classification import Accuracy
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pl_examples.basic_examples.mnist_examples.image_classifier_1_pytorch import Net
from pytorch_lightning import seed_everything
from pytorch_lightning.demos.boring_classes import Net
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.lite import LightningLite # import LightningLite
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class Lite(LightningLite):
def run(self, hparams):
@ -51,10 +54,10 @@ class Lite(LightningLite):
transform = T.Compose([T.ToTensor(), T.Normalize((0.1307,), (0.3081,))])
# This is meant to ensure the data are download only by 1 process.
if self.is_global_zero:
MNIST("./data", download=True)
MNIST(DATASETS_PATH, download=True)
self.barrier()
train_dataset = MNIST("./data", train=True, transform=transform)
test_dataset = MNIST("./data", train=False, transform=transform)
train_dataset = MNIST(DATASETS_PATH, train=True, transform=transform)
test_dataset = MNIST(DATASETS_PATH, train=False, transform=transform)
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=hparams.batch_size,

View File

@ -25,6 +25,7 @@ Learn more from the documentation: https://pytorch-lightning.readthedocs.io/en/l
"""
import argparse
from os import path
import torch
import torch.nn.functional as F
@ -33,11 +34,13 @@ import torchvision.transforms as T
from torch.optim.lr_scheduler import StepLR
from torchmetrics import Accuracy
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pl_examples.basic_examples.mnist_examples.image_classifier_1_pytorch import Net
from pytorch_lightning import seed_everything
from pytorch_lightning.demos.boring_classes import Net
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.lite import LightningLite
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class Lite(LightningLite):
"""Lite is starting to look like a LightningModule."""
@ -135,14 +138,14 @@ class Lite(LightningLite):
return T.Compose([T.ToTensor(), T.Normalize((0.1307,), (0.3081,))])
def prepare_data(self) -> None:
MNIST("./data", download=True)
MNIST(DATASETS_PATH, download=True)
def train_dataloader(self):
train_dataset = MNIST("./data", train=True, download=False, transform=self.transform)
train_dataset = MNIST(DATASETS_PATH, train=True, download=False, transform=self.transform)
return torch.utils.data.DataLoader(train_dataset, batch_size=self.hparams.batch_size)
def test_dataloader(self):
test_dataset = MNIST("./data", train=False, download=False, transform=self.transform)
test_dataset = MNIST(DATASETS_PATH, train=False, download=False, transform=self.transform)
return torch.utils.data.DataLoader(test_dataset, batch_size=self.hparams.batch_size)

View File

@ -15,17 +15,20 @@
To run: python image_classifier_4_lightning_module.py --trainer.max_epochs=50
"""
from os import path
import torch
import torchvision.transforms as T
from torch.nn import functional as F
from torchmetrics import Accuracy
from pl_examples import cli_lightning_logo
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pl_examples.basic_examples.mnist_examples.image_classifier_1_pytorch import Net
from pytorch_lightning import LightningModule
from pytorch_lightning import cli_lightning_logo, LightningModule
from pytorch_lightning.demos.boring_classes import Net
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.utilities.cli import LightningCLI
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class ImageClassifier(LightningModule):
def __init__(self, model=None, lr=1.0, gamma=0.7, batch_size=32):
@ -62,14 +65,14 @@ class ImageClassifier(LightningModule):
return T.Compose([T.ToTensor(), T.Normalize((0.1307,), (0.3081,))])
def prepare_data(self) -> None:
MNIST("./data", download=True)
MNIST(DATASETS_PATH, download=True)
def train_dataloader(self):
train_dataset = MNIST("./data", train=True, download=False, transform=self.transform)
train_dataset = MNIST(DATASETS_PATH, train=True, download=False, transform=self.transform)
return torch.utils.data.DataLoader(train_dataset, batch_size=self.hparams.batch_size)
def test_dataloader(self):
test_dataset = MNIST("./data", train=False, download=False, transform=self.transform)
test_dataset = MNIST(DATASETS_PATH, train=False, download=False, transform=self.transform)
return torch.utils.data.DataLoader(test_dataset, batch_size=self.hparams.batch_size)

View File

@ -15,17 +15,20 @@
To run: python image_classifier_5_lightning_datamodule.py --trainer.max_epochs=50
"""
from os import path
import torch
import torchvision.transforms as T
from torch.nn import functional as F
from torchmetrics import Accuracy
from pl_examples import cli_lightning_logo
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pl_examples.basic_examples.mnist_examples.image_classifier_1_pytorch import Net
from pytorch_lightning import LightningDataModule, LightningModule
from pytorch_lightning import cli_lightning_logo, LightningDataModule, LightningModule
from pytorch_lightning.demos.boring_classes import Net
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.utilities.cli import LightningCLI
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class ImageClassifier(LightningModule):
def __init__(self, model, lr=1.0, gamma=0.7, batch_size=32):
@ -66,14 +69,14 @@ class MNISTDataModule(LightningDataModule):
return T.Compose([T.ToTensor(), T.Normalize((0.1307,), (0.3081,))])
def prepare_data(self) -> None:
MNIST("./data", download=True)
MNIST(DATASETS_PATH, download=True)
def train_dataloader(self):
train_dataset = MNIST("./data", train=True, download=False, transform=self.transform)
train_dataset = MNIST(DATASETS_PATH, train=True, download=False, transform=self.transform)
return torch.utils.data.DataLoader(train_dataset, batch_size=self.hparams.batch_size)
def test_dataloader(self):
test_dataset = MNIST("./data", train=False, download=False, transform=self.transform)
test_dataset = MNIST(DATASETS_PATH, train=False, download=False, transform=self.transform)
return torch.utils.data.DataLoader(test_dataset, batch_size=self.hparams.batch_size)

View File

@ -15,6 +15,7 @@
To run: python autoencoder.py --trainer.max_epochs=50
"""
from os import path
from typing import Optional, Tuple
import torch
@ -22,9 +23,8 @@ import torch.nn.functional as F
from torch import nn
from torch.utils.data import DataLoader, random_split
import pytorch_lightning as pl
from pl_examples import _DATASETS_PATH, cli_lightning_logo
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pytorch_lightning import callbacks, cli_lightning_logo, LightningDataModule, LightningModule, Trainer
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.utilities.cli import LightningCLI
from pytorch_lightning.utilities.imports import _TORCHVISION_AVAILABLE
from pytorch_lightning.utilities.rank_zero import rank_zero_only
@ -34,8 +34,10 @@ if _TORCHVISION_AVAILABLE:
from torchvision import transforms
from torchvision.utils import save_image
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class ImageSampler(pl.callbacks.Callback):
class ImageSampler(callbacks.Callback):
def __init__(
self,
num_samples: int = 3,
@ -85,7 +87,7 @@ class ImageSampler(pl.callbacks.Callback):
)
@rank_zero_only
def on_train_epoch_end(self, trainer: pl.Trainer, pl_module: pl.LightningModule) -> None:
def on_train_epoch_end(self, trainer: Trainer, pl_module: LightningModule) -> None:
if not _TORCHVISION_AVAILABLE:
return
@ -103,7 +105,7 @@ class ImageSampler(pl.callbacks.Callback):
save_image(self._to_grid(images_generated.reshape(images.shape)), f"grid_generated_{trainer.current_epoch}.png")
class LitAutoEncoder(pl.LightningModule):
class LitAutoEncoder(LightningModule):
"""
>>> LitAutoEncoder() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
LitAutoEncoder(
@ -150,11 +152,11 @@ class LitAutoEncoder(pl.LightningModule):
return loss
class MyDataModule(pl.LightningDataModule):
class MyDataModule(LightningDataModule):
def __init__(self, batch_size: int = 32):
super().__init__()
dataset = MNIST(_DATASETS_PATH, train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(_DATASETS_PATH, train=False, download=True, transform=transforms.ToTensor())
dataset = MNIST(DATASETS_PATH, train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(DATASETS_PATH, train=False, download=True, transform=transforms.ToTensor())
self.mnist_train, self.mnist_val = random_split(dataset, [55000, 5000])
self.batch_size = batch_size

View File

@ -15,21 +15,23 @@
To run: python backbone_image_classifier.py --trainer.max_epochs=50
"""
from os import path
from typing import Optional
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
import pytorch_lightning as pl
from pl_examples import _DATASETS_PATH, cli_lightning_logo
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pytorch_lightning import cli_lightning_logo, LightningDataModule, LightningModule
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.utilities.cli import LightningCLI
from pytorch_lightning.utilities.imports import _TORCHVISION_AVAILABLE
if _TORCHVISION_AVAILABLE:
from torchvision import transforms
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class Backbone(torch.nn.Module):
"""
@ -52,7 +54,7 @@ class Backbone(torch.nn.Module):
return x
class LitClassifier(pl.LightningModule):
class LitClassifier(LightningModule):
"""
>>> LitClassifier(Backbone()) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
LitClassifier(
@ -100,11 +102,11 @@ class LitClassifier(pl.LightningModule):
return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
class MyDataModule(pl.LightningDataModule):
class MyDataModule(LightningDataModule):
def __init__(self, batch_size: int = 32):
super().__init__()
dataset = MNIST(_DATASETS_PATH, train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(_DATASETS_PATH, train=False, download=True, transform=transforms.ToTensor())
dataset = MNIST(DATASETS_PATH, train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(DATASETS_PATH, train=False, download=True, transform=transforms.ToTensor())
self.mnist_train, self.mnist_val = random_split(dataset, [55000, 5000])
self.batch_size = batch_size

View File

@ -23,14 +23,14 @@ visualized in 2 ways:
"""
import sys
from os import path
import torch
import torchvision
import torchvision.models as models
import torchvision.transforms as T
from pl_examples import _DATASETS_PATH, cli_lightning_logo
from pytorch_lightning import LightningDataModule, LightningModule
from pytorch_lightning import cli_lightning_logo, LightningDataModule, LightningModule
from pytorch_lightning.profiler.pytorch import PyTorchProfiler
from pytorch_lightning.utilities.cli import LightningCLI
@ -43,6 +43,7 @@ DEFAULT_CMD_LINE = (
"--trainer.accelerator=gpu",
f"--trainer.devices={int(torch.cuda.is_available())}",
)
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class ModelToProfile(LightningModule):
@ -93,13 +94,11 @@ class CIFAR10DataModule(LightningDataModule):
transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
def train_dataloader(self, *args, **kwargs):
trainset = torchvision.datasets.CIFAR10(
root=_DATASETS_PATH, train=True, download=True, transform=self.transform
)
trainset = torchvision.datasets.CIFAR10(root=DATASETS_PATH, train=True, download=True, transform=self.transform)
return torch.utils.data.DataLoader(trainset, batch_size=2, shuffle=True, num_workers=0)
def val_dataloader(self, *args, **kwargs):
valset = torchvision.datasets.CIFAR10(root=_DATASETS_PATH, train=False, download=True, transform=self.transform)
valset = torchvision.datasets.CIFAR10(root=DATASETS_PATH, train=False, download=True, transform=self.transform)
return torch.utils.data.DataLoader(valset, batch_size=2, shuffle=True, num_workers=0)

View File

@ -54,9 +54,7 @@ from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torchvision.datasets.utils import download_and_extract_archive
import pytorch_lightning as pl
from pl_examples import cli_lightning_logo
from pytorch_lightning import LightningDataModule
from pytorch_lightning import cli_lightning_logo, LightningDataModule, LightningModule
from pytorch_lightning.callbacks.finetuning import BaseFinetuning
from pytorch_lightning.utilities.cli import LightningCLI
from pytorch_lightning.utilities.rank_zero import rank_zero_info
@ -73,10 +71,10 @@ class MilestonesFinetuning(BaseFinetuning):
self.milestones = milestones
self.train_bn = train_bn
def freeze_before_training(self, pl_module: pl.LightningModule):
def freeze_before_training(self, pl_module: LightningModule):
self.freeze(modules=pl_module.feature_extractor, train_bn=self.train_bn)
def finetune_function(self, pl_module: pl.LightningModule, epoch: int, optimizer: Optimizer, opt_idx: int):
def finetune_function(self, pl_module: LightningModule, epoch: int, optimizer: Optimizer, opt_idx: int):
if epoch == self.milestones[0]:
# unfreeze 5 last layers
self.unfreeze_and_add_param_group(
@ -155,7 +153,7 @@ class CatDogImageDataModule(LightningDataModule):
# --- Pytorch-lightning module ---
class TransferLearningModel(pl.LightningModule):
class TransferLearningModel(LightningModule):
def __init__(
self,
backbone: str = "resnet50",

View File

@ -24,9 +24,9 @@ import torch
import torch.nn as nn
import torch.nn.functional as F
from pl_examples import cli_lightning_logo
from pl_examples.basic_examples.mnist_datamodule import MNISTDataModule
from pytorch_lightning import cli_lightning_logo
from pytorch_lightning.core import LightningModule
from pytorch_lightning.demos.mnist_datamodule import MNISTDataModule
from pytorch_lightning.trainer import Trainer
from pytorch_lightning.utilities.imports import _TORCHVISION_AVAILABLE
@ -130,7 +130,7 @@ class GAN(LightningModule):
@staticmethod
def add_argparse_args(parent_parser: ArgumentParser, *, use_argument_group=True):
if use_argument_group:
parser = parent_parser.add_argument_group("pl.GAN")
parser = parent_parser.add_argument_group("GAN")
parser_out = parent_parser
else:
parser = ArgumentParser(parents=[parent_parser], add_help=False)

View File

@ -44,8 +44,7 @@ from torch.optim.optimizer import Optimizer
from torch.utils.data import DataLoader
from torch.utils.data.dataset import IterableDataset
import pytorch_lightning as pl
from pl_examples import cli_lightning_logo
from pytorch_lightning import cli_lightning_logo, LightningModule, seed_everything, Trainer
class DQN(nn.Module):
@ -212,7 +211,7 @@ class Agent:
return reward, done
class DQNLightning(pl.LightningModule):
class DQNLightning(LightningModule):
"""Basic DQN Model.
>>> DQNLightning(env="CartPole-v1") # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
@ -391,15 +390,14 @@ class DQNLightning(pl.LightningModule):
def main(args) -> None:
model = DQNLightning(**vars(args))
trainer = pl.Trainer(accelerator="gpu", devices=1, strategy="dp", val_check_interval=100)
trainer = Trainer(accelerator="gpu", devices=1, strategy="dp", val_check_interval=100)
trainer.fit(model)
if __name__ == "__main__":
cli_lightning_logo()
torch.manual_seed(0)
np.random.seed(0)
seed_everything(0)
parser = argparse.ArgumentParser(add_help=False)
parser = DQNLightning.add_model_specific_args(parser)

View File

@ -37,8 +37,7 @@ from torch.distributions import Categorical, Normal
from torch.optim.optimizer import Optimizer
from torch.utils.data import DataLoader, IterableDataset
import pytorch_lightning as pl
from pl_examples import cli_lightning_logo
from pytorch_lightning import cli_lightning_logo, LightningModule, seed_everything, Trainer
def create_mlp(input_shape: Tuple[int], n_actions: int, hidden_size: int = 128):
@ -140,7 +139,7 @@ class ExperienceSourceDataset(IterableDataset):
return iterator
class PPOLightning(pl.LightningModule):
class PPOLightning(LightningModule):
"""PyTorch Lightning implementation of PPO.
Example:
@ -459,16 +458,16 @@ class PPOLightning(pl.LightningModule):
def main(args) -> None:
model = PPOLightning(**vars(args))
trainer = pl.Trainer.from_argparse_args(args)
trainer = Trainer.from_argparse_args(args)
trainer.fit(model)
if __name__ == "__main__":
cli_lightning_logo()
pl.seed_everything(0)
seed_everything(0)
parent_parser = argparse.ArgumentParser(add_help=False)
parent_parser = pl.Trainer.add_argparse_args(parent_parser)
parent_parser = Trainer.add_argparse_args(parent_parser)
parser = PPOLightning.add_model_specific_args(parent_parser)
args = parser.parse_args()

View File

@ -21,11 +21,10 @@ import torch
import torch.nn.functional as F
import torchvision.transforms as transforms
from PIL import Image
from torch import nn
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl
from pl_examples import cli_lightning_logo
from pl_examples.domain_templates.unet import UNet
from pytorch_lightning import cli_lightning_logo, LightningModule, Trainer
from pytorch_lightning.loggers import WandbLogger
DEFAULT_VOID_LABELS = (0, 1, 2, 3, 4, 5, 6, 9, 10, 14, 15, 16, 18, 29, 30, -1)
@ -68,8 +67,8 @@ class KITTI(Dataset):
encoded using `encode_segmap`, and given `transform` (if any) are applied to the image only
(mask does not usually require transforms, but they can be implemented in a similar way).
>>> from pl_examples import _DATASETS_PATH
>>> dataset_path = os.path.join(_DATASETS_PATH, "Kitti")
>>> from examples import DATASETS_PATH
>>> dataset_path = os.path.join(DATASETS_PATH, "Kitti")
>>> _create_synth_kitti_dataset(dataset_path, image_dims=(1024, 512))
>>> KITTI(dataset_path, 'train') # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
<...semantic_segmentation.KITTI object at ...>
@ -146,7 +145,148 @@ class KITTI(Dataset):
return files_list
class SegModel(pl.LightningModule):
class UNet(nn.Module):
"""Architecture based on U-Net: Convolutional Networks for Biomedical Image Segmentation.
Link - https://arxiv.org/abs/1505.04597
>>> UNet(num_classes=2, num_layers=3) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
UNet(
(layers): ModuleList(
(0): DoubleConv(...)
(1): Down(...)
(2): Down(...)
(3): Up(...)
(4): Up(...)
(5): Conv2d(64, 2, kernel_size=(1, 1), stride=(1, 1))
)
)
"""
def __init__(self, num_classes: int = 19, num_layers: int = 5, features_start: int = 64, bilinear: bool = False):
"""
Args:
num_classes: Number of output classes required (default 19 for KITTI dataset)
num_layers: Number of layers in each side of U-net
features_start: Number of features in first layer
bilinear: Whether to use bilinear interpolation or transposed convolutions for upsampling.
"""
super().__init__()
self.num_layers = num_layers
layers = [DoubleConv(3, features_start)]
feats = features_start
for _ in range(num_layers - 1):
layers.append(Down(feats, feats * 2))
feats *= 2
for _ in range(num_layers - 1):
layers.append(Up(feats, feats // 2, bilinear))
feats //= 2
layers.append(nn.Conv2d(feats, num_classes, kernel_size=1))
self.layers = nn.ModuleList(layers)
def forward(self, x):
xi = [self.layers[0](x)]
# Down path
for layer in self.layers[1 : self.num_layers]:
xi.append(layer(xi[-1]))
# Up path
for i, layer in enumerate(self.layers[self.num_layers : -1]):
xi[-1] = layer(xi[-1], xi[-2 - i])
return self.layers[-1](xi[-1])
class DoubleConv(nn.Module):
"""Double Convolution and BN and ReLU (3x3 conv -> BN -> ReLU) ** 2.
>>> DoubleConv(4, 4) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
DoubleConv(
(net): Sequential(...)
)
"""
def __init__(self, in_ch: int, out_ch: int):
super().__init__()
self.net = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
nn.BatchNorm2d(out_ch),
nn.ReLU(inplace=True),
nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
nn.BatchNorm2d(out_ch),
nn.ReLU(inplace=True),
)
def forward(self, x):
return self.net(x)
class Down(nn.Module):
"""Combination of MaxPool2d and DoubleConv in series.
>>> Down(4, 8) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Down(
(net): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(net): Sequential(...)
)
)
)
"""
def __init__(self, in_ch: int, out_ch: int):
super().__init__()
self.net = nn.Sequential(nn.MaxPool2d(kernel_size=2, stride=2), DoubleConv(in_ch, out_ch))
def forward(self, x):
return self.net(x)
class Up(nn.Module):
"""Upsampling (by either bilinear interpolation or transpose convolutions) followed by concatenation of feature
map from contracting path, followed by double 3x3 convolution.
>>> Up(8, 4) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Up(
(upsample): ConvTranspose2d(8, 4, kernel_size=(2, 2), stride=(2, 2))
(conv): DoubleConv(
(net): Sequential(...)
)
)
"""
def __init__(self, in_ch: int, out_ch: int, bilinear: bool = False):
super().__init__()
self.upsample = None
if bilinear:
self.upsample = nn.Sequential(
nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True),
nn.Conv2d(in_ch, in_ch // 2, kernel_size=1),
)
else:
self.upsample = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)
self.conv = DoubleConv(in_ch, out_ch)
def forward(self, x1, x2):
x1 = self.upsample(x1)
# Pad x1 to the size of x2
diff_h = x2.shape[2] - x1.shape[2]
diff_w = x2.shape[3] - x1.shape[3]
x1 = F.pad(x1, [diff_w // 2, diff_w - diff_w // 2, diff_h // 2, diff_h - diff_h // 2])
# Concatenate along the channels axis
x = torch.cat([x2, x1], dim=1)
return self.conv(x)
class SegModel(LightningModule):
"""Semantic Segmentation Module.
This is a basic semantic segmentation module implemented with Lightning.
@ -158,8 +298,7 @@ class SegModel(pl.LightningModule):
Adam optimizer is used along with Cosine Annealing learning rate scheduler.
>>> from pl_examples import _DATASETS_PATH
>>> dataset_path = os.path.join(_DATASETS_PATH, "Kitti")
>>> dataset_path = os.path.join(".", "Kitti")
>>> _create_synth_kitti_dataset(dataset_path, image_dims=(1024, 512))
>>> SegModel(dataset_path) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
SegModel(
@ -277,7 +416,7 @@ def main(hparams: Namespace):
# ------------------------
# 3 INIT TRAINER
# ------------------------
trainer = pl.Trainer.from_argparse_args(hparams)
trainer = Trainer.from_argparse_args(hparams)
# ------------------------
# 5 START TRAINING

View File

@ -15,13 +15,13 @@ import torch
from jsonargparse import lazy_instance
from torch.nn import functional as F
import pytorch_lightning as pl
from pl_examples.basic_examples.mnist_datamodule import MNISTDataModule
from pytorch_lightning import LightningModule
from pytorch_lightning.demos.mnist_datamodule import MNISTDataModule
from pytorch_lightning.plugins import HPUPrecisionPlugin
from pytorch_lightning.utilities.cli import LightningCLI
class LitClassifier(pl.LightningModule):
class LitClassifier(LightningModule):
def __init__(self):
super().__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)

View File

@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import ABC
from os import path
from random import shuffle
from warnings import warn
@ -21,11 +22,10 @@ from packaging.version import Version
from torch.nn import functional as F
from torch.utils.data import random_split
import pytorch_lightning as pl
from pl_examples import _DALI_AVAILABLE, _DATASETS_PATH, cli_lightning_logo
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pytorch_lightning import cli_lightning_logo, LightningDataModule, LightningModule
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.utilities.cli import LightningCLI
from pytorch_lightning.utilities.imports import _TORCHVISION_AVAILABLE
from pytorch_lightning.utilities.imports import _DALI_AVAILABLE, _TORCHVISION_AVAILABLE
if _TORCHVISION_AVAILABLE:
from torchvision import transforms
@ -43,6 +43,8 @@ else:
warn("NVIDIA DALI is not available")
ops, Pipeline, DALIClassificationIterator, LastBatchPolicy = ..., ABC, ABC, ABC
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class ExternalMNISTInputIterator:
"""This iterator class wraps torchvision's MNIST dataset and returns the images and labels in batches."""
@ -120,7 +122,7 @@ class DALIClassificationLoader(DALIClassificationIterator):
return batch_count + last_batch
class LitClassifier(pl.LightningModule):
class LitClassifier(LightningModule):
def __init__(self, hidden_dim: int = 128, learning_rate: float = 0.0001):
super().__init__()
self.save_hyperparameters()
@ -159,11 +161,11 @@ class LitClassifier(pl.LightningModule):
return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
class MyDataModule(pl.LightningDataModule):
class MyDataModule(LightningDataModule):
def __init__(self, batch_size: int = 32):
super().__init__()
dataset = MNIST(_DATASETS_PATH, train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(_DATASETS_PATH, train=False, download=True, transform=transforms.ToTensor())
dataset = MNIST(DATASETS_PATH, train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(DATASETS_PATH, train=False, download=True, transform=transforms.ToTensor())
self.mnist_train, self.mnist_val = random_split(dataset, [55000, 5000])
eii_train = ExternalMNISTInputIterator(self.mnist_train, batch_size)

View File

@ -15,11 +15,11 @@
import torch
from torch.nn import functional as F
import pytorch_lightning as pl
from pl_examples.basic_examples.mnist_datamodule import MNISTDataModule
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.demos.mnist_datamodule import MNISTDataModule
class LitClassifier(pl.LightningModule):
class LitClassifier(LightningModule):
def __init__(self, hidden_dim: int = 128, learning_rate: float = 0.0001):
super().__init__()
self.save_hyperparameters()
@ -78,7 +78,7 @@ if __name__ == "__main__":
model = LitClassifier()
trainer = pl.Trainer(max_epochs=2, accelerator="ipu", devices=8)
trainer = Trainer(max_epochs=2, accelerator="ipu", devices=8)
trainer.fit(model, datamodule=dm)
trainer.test(model, datamodule=dm)

View File

@ -15,6 +15,7 @@ import os.path as osp
from abc import ABC, abstractmethod
from copy import deepcopy
from dataclasses import dataclass
from os import path
from typing import Any, Dict, List, Optional, Type
import torch
@ -26,15 +27,16 @@ from torch.utils.data.dataloader import DataLoader
from torch.utils.data.dataset import Dataset, Subset
from torchmetrics.classification.accuracy import Accuracy
from pl_examples import _DATASETS_PATH
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pl_examples.basic_examples.mnist_examples.image_classifier_4_lightning_module import ImageClassifier
from pytorch_lightning import LightningDataModule, seed_everything, Trainer
from pytorch_lightning.core.module import LightningModule
from pytorch_lightning.demos.boring_classes import Net
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.loops.fit_loop import FitLoop
from pytorch_lightning.loops.loop import Loop
from pytorch_lightning.trainer.states import TrainerFn
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
#############################################################################################
# KFold Loop / Cross Validation Example #
# This example demonstrates how to leverage Lightning Loop Customization introduced in v1.5 #
@ -79,11 +81,11 @@ class MNISTKFoldDataModule(BaseKFoldDataModule):
def prepare_data(self) -> None:
# download the data.
MNIST(_DATASETS_PATH, transform=T.Compose([T.ToTensor(), T.Normalize(mean=(0.5,), std=(0.5,))]))
MNIST(DATASETS_PATH, transform=T.Compose([T.ToTensor(), T.Normalize(mean=(0.5,), std=(0.5,))]))
def setup(self, stage: Optional[str] = None) -> None:
# load the data
dataset = MNIST(_DATASETS_PATH, transform=T.Compose([T.ToTensor(), T.Normalize(mean=(0.5,), std=(0.5,))]))
dataset = MNIST(DATASETS_PATH, transform=T.Compose([T.ToTensor(), T.Normalize(mean=(0.5,), std=(0.5,))]))
self.train_dataset, self.test_dataset = random_split(dataset, [50000, 10000])
def setup_folds(self, num_folds: int) -> None:
@ -243,11 +245,23 @@ class KFoldLoop(Loop):
self.__dict__.update(state)
class LitImageClassifier(ImageClassifier):
def __init__(self) -> None:
class LitImageClassifier(LightningModule):
def __init__(self, model, lr=1.0, gamma=0.7, batch_size=32):
super().__init__()
self.save_hyperparameters(ignore="model")
self.model = model or Net()
self.test_acc = Accuracy()
self.val_acc = Accuracy()
def forward(self, x):
return self.model(x)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self.forward(x)
loss = F.nll_loss(logits, y.long())
return loss
def validation_step(self, batch: Any, batch_idx: int) -> None:
x, y = batch
logits = self.forward(x)
@ -256,6 +270,18 @@ class LitImageClassifier(ImageClassifier):
self.log("val_acc", self.val_acc)
self.log("val_loss", loss)
def test_step(self, batch, batch_idx):
x, y = batch
logits = self.forward(x)
loss = F.nll_loss(logits, y.long())
self.test_acc(logits, y)
self.log("test_acc", self.test_acc)
self.log("test_loss", loss)
def configure_optimizers(self):
optimizer = torch.optim.Adadelta(self.model.parameters(), lr=self.hparams.lr)
return [optimizer], [torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=self.hparams.gamma)]
#############################################################################################
# Step 5 / 5: Connect the KFoldLoop to the Trainer #

View File

@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from os import path
from typing import Any
import torch
@ -21,12 +22,14 @@ import torchvision.transforms as T
from torch.optim.lr_scheduler import StepLR
from torchmetrics import Accuracy
from pl_examples.basic_examples.mnist_datamodule import MNIST
from pl_examples.basic_examples.mnist_examples.image_classifier_1_pytorch import Net
from pytorch_lightning import seed_everything
from pytorch_lightning.demos.boring_classes import Net
from pytorch_lightning.demos.mnist_datamodule import MNIST
from pytorch_lightning.lite import LightningLite
from pytorch_lightning.loops import Loop
DATASETS_PATH = path.join(path.dirname(__file__), "..", "..", "Datasets")
class TrainLoop(Loop):
def __init__(self, lite, args, model, optimizer, scheduler, dataloader):
@ -139,10 +142,10 @@ class Lite(LightningLite):
def run(self, hparams):
transform = T.Compose([T.ToTensor(), T.Normalize((0.1307,), (0.3081,))])
if self.is_global_zero:
MNIST("./data", download=True)
MNIST(DATASETS_PATH, download=True)
self.barrier()
train_dataset = MNIST("./data", train=True, transform=transform)
test_dataset = MNIST("./data", train=False, transform=transform)
train_dataset = MNIST(DATASETS_PATH, train=True, transform=transform)
test_dataset = MNIST(DATASETS_PATH, train=False, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, hparams.batch_size)
test_loader = torch.utils.data.DataLoader(test_dataset, hparams.test_batch_size)

View File

@ -12,17 +12,23 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
from argparse import ArgumentParser
from functools import partial
from typing import Generator
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from pl_examples.domain_templates.generative_adversarial_net import GAN as GANTemplate
from pl_examples.domain_templates.generative_adversarial_net import MNISTDataModule
from pytorch_lightning import Trainer
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.demos.mnist_datamodule import MNISTDataModule
from pytorch_lightning.loops import OptimizerLoop
from pytorch_lightning.loops.optimization.optimizer_loop import ClosureResult
from pytorch_lightning.utilities.exceptions import MisconfigurationException
from pytorch_lightning.utilities.imports import _TORCHVISION_AVAILABLE
if _TORCHVISION_AVAILABLE:
import torchvision
#############################################################################################
# Yield Loop #
@ -108,7 +114,119 @@ class YieldLoop(OptimizerLoop):
#############################################################################################
class GAN(GANTemplate):
class Generator(nn.Module):
"""
>>> Generator(img_shape=(1, 8, 8)) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Generator(
(model): Sequential(...)
)
"""
def __init__(self, latent_dim: int = 100, img_shape: tuple = (1, 28, 28)):
super().__init__()
self.img_shape = img_shape
def block(in_feat, out_feat, normalize=True):
layers = [nn.Linear(in_feat, out_feat)]
if normalize:
layers.append(nn.BatchNorm1d(out_feat, 0.8))
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
self.model = nn.Sequential(
*block(latent_dim, 128, normalize=False),
*block(128, 256),
*block(256, 512),
*block(512, 1024),
nn.Linear(1024, int(np.prod(img_shape))),
nn.Tanh(),
)
def forward(self, z):
img = self.model(z)
img = img.view(img.size(0), *self.img_shape)
return img
class Discriminator(nn.Module):
"""
>>> Discriminator(img_shape=(1, 28, 28)) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Discriminator(
(model): Sequential(...)
)
"""
def __init__(self, img_shape):
super().__init__()
self.model = nn.Sequential(
nn.Linear(int(np.prod(img_shape)), 512),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(512, 256),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(256, 1),
)
def forward(self, img):
img_flat = img.view(img.size(0), -1)
validity = self.model(img_flat)
return validity
class GAN(LightningModule):
"""
>>> GAN(img_shape=(1, 8, 8)) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
GAN(
(generator): Generator(
(model): Sequential(...)
)
(discriminator): Discriminator(
(model): Sequential(...)
)
)
"""
def __init__(
self,
img_shape: tuple = (1, 28, 28),
lr: float = 0.0002,
b1: float = 0.5,
b2: float = 0.999,
latent_dim: int = 100,
):
super().__init__()
self.save_hyperparameters()
# networks
self.generator = Generator(latent_dim=self.hparams.latent_dim, img_shape=img_shape)
self.discriminator = Discriminator(img_shape=img_shape)
self.validation_z = torch.randn(8, self.hparams.latent_dim)
self.example_input_array = torch.zeros(2, self.hparams.latent_dim)
@staticmethod
def add_argparse_args(parent_parser: ArgumentParser, *, use_argument_group=True):
if use_argument_group:
parser = parent_parser.add_argument_group("GAN")
parser_out = parent_parser
else:
parser = ArgumentParser(parents=[parent_parser], add_help=False)
parser_out = parser
parser.add_argument("--lr", type=float, default=0.0002, help="adam: learning rate")
parser.add_argument("--b1", type=float, default=0.5, help="adam: decay of first order momentum of gradient")
parser.add_argument("--b2", type=float, default=0.999, help="adam: decay of second order momentum of gradient")
parser.add_argument("--latent_dim", type=int, default=100, help="dimensionality of the latent space")
return parser_out
def forward(self, z):
return self.generator(z)
@staticmethod
def adversarial_loss(y_hat, y):
return F.binary_cross_entropy_with_logits(y_hat, y)
# This training_step method is now a Python generator
def training_step(self, batch, batch_idx, optimizer_idx=0) -> Generator:
@ -145,6 +263,24 @@ class GAN(GANTemplate):
yield d_loss
def configure_optimizers(self):
lr = self.hparams.lr
b1 = self.hparams.b1
b2 = self.hparams.b2
opt_g = torch.optim.Adam(self.generator.parameters(), lr=lr, betas=(b1, b2))
opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr, betas=(b1, b2))
return [opt_g, opt_d], []
def on_train_epoch_end(self):
z = self.validation_z.type_as(self.generator.model[0].weight)
# log sampled images
sample_imgs = self(z)
grid = torchvision.utils.make_grid(sample_imgs)
for logger in self.loggers:
logger.experiment.add_image("generated_images", grid, self.current_epoch)
#############################################################################################
# Step 3 / 3: Connect the loop to the Trainer #

View File

@ -0,0 +1,21 @@
#!/bin/bash
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
# test that a user can manually launch individual processes
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
args="--trainer.gpus 2 --trainer.strategy ddp --trainer.max_epochs=1 --trainer.limit_train_batches=1 --trainer.limit_val_batches=1 --trainer.limit_test_batches=1"
MASTER_ADDR="localhost" MASTER_PORT=1234 LOCAL_RANK=1 python convert_from_pt_to_pl/image_classifier_5_lightning_datamodule.py ${args} &
MASTER_ADDR="localhost" MASTER_PORT=1234 LOCAL_RANK=0 python convert_from_pt_to_pl/image_classifier_5_lightning_datamodule.py ${args}

View File

@ -0,0 +1,39 @@
#!/bin/bash
set -ex
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
dir_path=$(dirname "${BASH_SOURCE[0]}")
args="
--data.batch_size=32
--trainer.max_epochs=1
--trainer.limit_train_batches=2
--trainer.limit_val_batches=2
--trainer.limit_test_batches=2
--trainer.limit_predict_batches=2
--optimizer=Adam
"
python "${dir_path}/pl_basics/backbone_image_classifier.py" ${args} "$@"
python "${dir_path}/pl_basics/autoencoder.py" ${args} "$@"
args="--dry-run"
python "${dir_path}/convert_from_pt_to_pl/image_classifier_1_pytorch.py" ${args}
python "${dir_path}/convert_from_pt_to_pl/image_classifier_2_lite.py" ${args}
python "${dir_path}/convert_from_pt_to_pl/image_classifier_3_lite_to_lightning_module.py" ${args}
python "${dir_path}/pl_loops/mnist_lite.py" ${args}
args="
--trainer.max_epochs=1
--trainer.limit_train_batches=2
--trainer.limit_val_batches=2
--trainer.limit_test_batches=2
--trainer.limit_predict_batches=2
--optimizer=Adam
"
python "${dir_path}/convert_from_pt_to_pl/image_classifier_4_lightning_module.py" ${args} "$@"
python "${dir_path}/convert_from_pt_to_pl/image_classifier_5_lightning_datamodule.py" ${args} "$@"
python "${dir_path}/pl_fault_tolerant/automatic.py"

View File

@ -16,8 +16,7 @@ from unittest import mock
import pytest
import torch
from pl_examples import _DALI_AVAILABLE
from pytorch_lightning.utilities.imports import _IS_WINDOWS
from pytorch_lightning.utilities.imports import _DALI_AVAILABLE, _IS_WINDOWS
ARGS_DEFAULT = (
"--trainer.default_root_dir %(tmpdir)s "
@ -36,7 +35,7 @@ ARGS_GPU = ARGS_DEFAULT + "--trainer.accelerator gpu --trainer.devices 1 "
@pytest.mark.skipif(_IS_WINDOWS, reason="Not supported on Windows")
@pytest.mark.parametrize("cli_args", [ARGS_GPU])
def test_examples_mnist_dali(tmpdir, cli_args):
from pl_examples.integration_examples.dali_image_classifier import cli_main
from examples.pl_integrations.dali_image_classifier import cli_main
# update the temp dir
cli_args = cli_args % {"tmpdir": tmpdir}

View File

@ -1,53 +0,0 @@
# Examples
Our most robust examples showing all sorts of implementations
can be found in our sister library [Lightning Bolts](https://pytorch-lightning.readthedocs.io/en/latest/ecosystem/bolts.html).
______________________________________________________________________
## MNIST Examples
5 MNIST examples showing how to gradually convert from pure PyTorch to PyTorch Lightning.
The transition through [LightningLite](https://pytorch-lightning.readthedocs.io/en/latest/starter/lightning_lite.html) from pure PyTorch is optional but it might be helpful to learn about it.
- [MNIST with vanilla PyTorch](./basic_examples/mnist_examples/image_classifier_1_pytorch.py)
- [MNIST with LightningLite](./basic_examples/mnist_examples/image_classifier_2_lite.py)
- [MNIST LightningLite to LightningModule](./basic_examples/mnist_examples/image_classifier_3_lite_to_lightning_module.py)
- [MNIST with LightningModule](./basic_examples/mnist_examples/image_classifier_4_lightning_module.py)
- [MNIST with LightningModule + LightningDataModule](./basic_examples/mnist_examples/image_classifier_5_lightning_datamodule.py)
______________________________________________________________________
## Basic Examples
In this folder, we have 2 simple examples:
- [Image Classifier](./basic_examples/backbone_image_classifier.py) (trains arbitrary datasets with arbitrary backbones).
- [Image Classifier + DALI](./basic_examples/mnist_examples/image_classifier_4_dali.py) (defines the model inside the `LightningModule`).
- [Autoencoder](./basic_examples/autoencoder.py)
______________________________________________________________________
## Domain Examples
This folder contains older examples. You should instead use the examples
in [Lightning Bolts](https://pytorch-lightning.readthedocs.io/en/latest/ecosystem/bolts.html)
for advanced use cases.
______________________________________________________________________
## Basic Examples
In this folder, we have 1 simple example:
- [Image Classifier + DALI](./integration_examples/dali_image_classifier.py) (defines the model inside the `LightningModule`).
______________________________________________________________________
## Loop examples
Contains implementations leveraging [loop customization](https://pytorch-lightning.readthedocs.io/en/latest/extensions/loops.html) to enhance the Trainer with new optimization routines.
- [K-fold Cross Validation Loop](./loop_examples/kfold.py): Implementation of cross validation in a loop and special datamodule.
- [Yield Loop](./loop_examples/yielding_training_step.py): Enables yielding from the training_step like in a Python generator. Useful for automatic optimization with multiple optimizers.

View File

@ -1,52 +0,0 @@
import os
from six.moves import urllib
from pytorch_lightning.utilities import _module_available
# TorchVision hotfix https://github.com/pytorch/vision/issues/1938
opener = urllib.request.build_opener()
opener.addheaders = [("User-agent", "Mozilla/5.0")]
urllib.request.install_opener(opener)
_EXAMPLES_ROOT = os.path.dirname(__file__)
_PACKAGE_ROOT = os.path.dirname(_EXAMPLES_ROOT)
_DATASETS_PATH = os.path.join(_PACKAGE_ROOT, "Datasets")
_DALI_AVAILABLE = _module_available("nvidia.dali")
LIGHTNING_LOGO = """
####
###########
####################
############################
#####################################
##############################################
######################### ###################
####################### ###################
#################### ####################
################## #####################
################ ######################
##################### #################
###################### ###################
##################### #####################
#################### #######################
################### #########################
##############################################
#####################################
############################
####################
##########
####
"""
def nice_print(msg, last=False):
print()
print("\033[0;35m" + msg + "\033[0m")
if last:
print()
def cli_lightning_logo():
nice_print(LIGHTNING_LOGO)

View File

@ -1,158 +0,0 @@
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.nn as nn
import torch.nn.functional as F
class UNet(nn.Module):
"""Architecture based on U-Net: Convolutional Networks for Biomedical Image Segmentation.
Link - https://arxiv.org/abs/1505.04597
>>> UNet(num_classes=2, num_layers=3) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
UNet(
(layers): ModuleList(
(0): DoubleConv(...)
(1): Down(...)
(2): Down(...)
(3): Up(...)
(4): Up(...)
(5): Conv2d(64, 2, kernel_size=(1, 1), stride=(1, 1))
)
)
"""
def __init__(self, num_classes: int = 19, num_layers: int = 5, features_start: int = 64, bilinear: bool = False):
"""
Args:
num_classes: Number of output classes required (default 19 for KITTI dataset)
num_layers: Number of layers in each side of U-net
features_start: Number of features in first layer
bilinear: Whether to use bilinear interpolation or transposed convolutions for upsampling.
"""
super().__init__()
self.num_layers = num_layers
layers = [DoubleConv(3, features_start)]
feats = features_start
for _ in range(num_layers - 1):
layers.append(Down(feats, feats * 2))
feats *= 2
for _ in range(num_layers - 1):
layers.append(Up(feats, feats // 2, bilinear))
feats //= 2
layers.append(nn.Conv2d(feats, num_classes, kernel_size=1))
self.layers = nn.ModuleList(layers)
def forward(self, x):
xi = [self.layers[0](x)]
# Down path
for layer in self.layers[1 : self.num_layers]:
xi.append(layer(xi[-1]))
# Up path
for i, layer in enumerate(self.layers[self.num_layers : -1]):
xi[-1] = layer(xi[-1], xi[-2 - i])
return self.layers[-1](xi[-1])
class DoubleConv(nn.Module):
"""Double Convolution and BN and ReLU (3x3 conv -> BN -> ReLU) ** 2.
>>> DoubleConv(4, 4) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
DoubleConv(
(net): Sequential(...)
)
"""
def __init__(self, in_ch: int, out_ch: int):
super().__init__()
self.net = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
nn.BatchNorm2d(out_ch),
nn.ReLU(inplace=True),
nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
nn.BatchNorm2d(out_ch),
nn.ReLU(inplace=True),
)
def forward(self, x):
return self.net(x)
class Down(nn.Module):
"""Combination of MaxPool2d and DoubleConv in series.
>>> Down(4, 8) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Down(
(net): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(net): Sequential(...)
)
)
)
"""
def __init__(self, in_ch: int, out_ch: int):
super().__init__()
self.net = nn.Sequential(nn.MaxPool2d(kernel_size=2, stride=2), DoubleConv(in_ch, out_ch))
def forward(self, x):
return self.net(x)
class Up(nn.Module):
"""Upsampling (by either bilinear interpolation or transpose convolutions) followed by concatenation of feature
map from contracting path, followed by double 3x3 convolution.
>>> Up(8, 4) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Up(
(upsample): ConvTranspose2d(8, 4, kernel_size=(2, 2), stride=(2, 2))
(conv): DoubleConv(
(net): Sequential(...)
)
)
"""
def __init__(self, in_ch: int, out_ch: int, bilinear: bool = False):
super().__init__()
self.upsample = None
if bilinear:
self.upsample = nn.Sequential(
nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True),
nn.Conv2d(in_ch, in_ch // 2, kernel_size=1),
)
else:
self.upsample = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)
self.conv = DoubleConv(in_ch, out_ch)
def forward(self, x1, x2):
x1 = self.upsample(x1)
# Pad x1 to the size of x2
diff_h = x2.shape[2] - x1.shape[2]
diff_w = x2.shape[3] - x1.shape[3]
x1 = F.pad(x1, [diff_w // 2, diff_w - diff_w // 2, diff_h // 2, diff_h - diff_h // 2])
# Concatenate along the channels axis
x = torch.cat([x2, x1], dim=1)
return self.conv(x)
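A quick shape check can clarify what the module above expects; this is a hypothetical snippet, not part of the example scripts, and it assumes the input spatial size is divisible by `2 ** (num_layers - 1)`.

```python
# Hypothetical smoke test for the UNet above (not shipped with the examples).
import torch

model = UNet(num_classes=2, num_layers=3)
x = torch.rand(1, 3, 64, 64)  # 3-channel input; 64 is divisible by 2 ** (3 - 1)
out = model(x)
assert out.shape == (1, 2, 64, 64)  # padding preserves the spatial size; channels == num_classes
```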

View File

@ -1,39 +0,0 @@
#!/bin/bash
set -ex
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
dir_path=$(dirname "${BASH_SOURCE[0]}")
args="
--data.batch_size=32
--trainer.max_epochs=1
--trainer.limit_train_batches=2
--trainer.limit_val_batches=2
--trainer.limit_test_batches=2
--trainer.limit_predict_batches=2
--optimizer=Adam
"
python "${dir_path}/basic_examples/backbone_image_classifier.py" ${args} "$@"
python "${dir_path}/basic_examples/autoencoder.py" ${args} "$@"
args="--dry-run"
python "${dir_path}/basic_examples/mnist_examples/image_classifier_1_pytorch.py" ${args}
python "${dir_path}/basic_examples/mnist_examples/image_classifier_2_lite.py" ${args}
python "${dir_path}/basic_examples/mnist_examples/image_classifier_3_lite_to_lightning_module.py" ${args}
python "${dir_path}/loop_examples/mnist_lite.py" ${args}
args="
--trainer.max_epochs=1
--trainer.limit_train_batches=2
--trainer.limit_val_batches=2
--trainer.limit_test_batches=2
--trainer.limit_predict_batches=2
--optimizer=Adam
"
python "${dir_path}/basic_examples/mnist_examples/image_classifier_4_lightning_module.py" ${args} "$@"
python "${dir_path}/basic_examples/mnist_examples/image_classifier_5_lightning_datamodule.py" ${args} "$@"
python "${dir_path}/fault_tolerant/automatic.py"

View File

@ -53,6 +53,7 @@ module = [
"pytorch_lightning.core.mixins.device_dtype_mixin",
"pytorch_lightning.core.saving",
"pytorch_lightning.demos.boring_classes",
"pytorch_lightning.demos.mnist_datamodule",
"pytorch_lightning.distributed.dist",
"pytorch_lightning.loggers.base",
"pytorch_lightning.loggers.logger",

View File

@ -0,0 +1,6 @@
from six.moves import urllib
# TorchVision hotfix https://github.com/pytorch/vision/issues/1938
opener = urllib.request.build_opener()
opener.addheaders = [("User-agent", "Mozilla/5.0")]
urllib.request.install_opener(opener)

View File

@ -36,3 +36,34 @@ __all__ = ["Trainer", "LightningDataModule", "LightningModule", "Callback", "see
# for compatibility with namespace packages
__import__("pkg_resources").declare_namespace(__name__)
LIGHTNING_LOGO: str = """
####
###########
####################
############################
#####################################
##############################################
######################### ###################
####################### ###################
#################### ####################
################## #####################
################ ######################
##################### #################
###################### ###################
##################### #####################
#################### #######################
################### #########################
##############################################
#####################################
############################
####################
##########
####
"""
def cli_lightning_logo() -> None:
print()
print("\033[0;35m" + LIGHTNING_LOGO + "\033[0m")
print()
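Assuming this block lands in the package's root `__init__.py` (an assumption based on the surrounding `__all__`), the helper could be used as:

```python
# Hypothetical usage; the import location is an assumption.
from pytorch_lightning import cli_lightning_logo

cli_lightning_logo()  # prints the ASCII logo wrapped in ANSI purple escape codes
```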

View File

@ -14,6 +14,8 @@
from typing import Optional
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset, IterableDataset, Subset
from pytorch_lightning import LightningDataModule, LightningModule
@ -211,3 +213,29 @@ class DemoModel(LightningModule):
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=self.learning_rate)
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
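As a sanity check (an assumption, not part of the diff), the `Net` above expects MNIST-sized `1x28x28` inputs so that the flattened feature map matches the 9216 input features of `fc1`:

```python
# Hypothetical shape check: two MNIST-sized images in, ten log-probabilities per image out.
import torch

net = Net()
logits = net(torch.rand(2, 1, 28, 28))
assert logits.shape == (2, 10)
```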

View File

@ -17,23 +17,25 @@ import platform
import random
import time
import urllib
from typing import Optional, Tuple
from typing import Any, Callable, Optional, Tuple
from urllib.error import HTTPError
from warnings import warn
import torch
from torch import Tensor
from torch.utils.data import DataLoader, Dataset, random_split
from pl_examples import _DATASETS_PATH
from pytorch_lightning import LightningDataModule
from pytorch_lightning.utilities.imports import _TORCHVISION_AVAILABLE
if _TORCHVISION_AVAILABLE:
from torchvision import transforms as transform_lib
_DATASETS_PATH = "./data"
class _MNIST(Dataset):
"""Carbon copy of ``tests.helpers.datasets.MNIST``.
"""Carbon copy of ``unittests_pl.helpers.datasets.MNIST``.
We cannot import the tests as they are not distributed with the package.
See https://github.com/PyTorchLightning/pytorch-lightning/pull/7614#discussion_r671183652 for more context.
@ -49,8 +51,8 @@ class _MNIST(Dataset):
cache_folder_name = "complete"
def __init__(
self, root: str, train: bool = True, normalize: tuple = (0.1307, 0.3081), download: bool = True, **kwargs
):
self, root: str, train: bool = True, normalize: tuple = (0.1307, 0.3081), download: bool = True, **kwargs: Any
) -> None:
super().__init__()
self.root = root
self.train = train # training set or test set
@ -61,7 +63,7 @@ class _MNIST(Dataset):
data_file = self.TRAIN_FILE_NAME if self.train else self.TEST_FILE_NAME
self.data, self.targets = self._try_load(os.path.join(self.cached_folder_path, data_file))
def __getitem__(self, idx: int) -> Tuple[torch.Tensor, int]:
def __getitem__(self, idx: int) -> Tuple[Tensor, int]:
img = self.data[idx].float().unsqueeze(0)
target = int(self.targets[idx])
@ -83,7 +85,7 @@ class _MNIST(Dataset):
existing = existing and os.path.isfile(os.path.join(data_folder, fname))
return existing
def prepare_data(self, download: bool = True):
def prepare_data(self, download: bool = True) -> None:
if download and not self._check_exists(self.cached_folder_path):
self._download(self.cached_folder_path)
if not self._check_exists(self.cached_folder_path):
@ -97,7 +99,7 @@ class _MNIST(Dataset):
urllib.request.urlretrieve(url, fpath)
@staticmethod
def _try_load(path_data, trials: int = 30, delta: float = 1.0):
def _try_load(path_data: str, trials: int = 30, delta: float = 1.0) -> Tuple[Tensor, Tensor]:
"""Resolving loading from the same time from multiple concurrent processes."""
res, exception = None, None
assert trials, "at least some trial has to be set"
@ -106,8 +108,8 @@ class _MNIST(Dataset):
try:
res = torch.load(path_data)
# todo: specify the possible exception
except Exception as e:
exception = e
except Exception as ex:
exception = ex
time.sleep(delta * random.random())
else:
break
@ -117,13 +119,13 @@ class _MNIST(Dataset):
return res
@staticmethod
def normalize_tensor(tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0) -> torch.Tensor:
def normalize_tensor(tensor: Tensor, mean: float = 0.0, std: float = 1.0) -> Tensor:
mean = torch.as_tensor(mean, dtype=tensor.dtype, device=tensor.device)
std = torch.as_tensor(std, dtype=tensor.dtype, device=tensor.device)
return tensor.sub(mean).div(std)
def MNIST(*args, **kwargs):
def MNIST(*args: Any, **kwargs: Any) -> Dataset:
torchvision_mnist_available = not bool(os.getenv("PL_USE_MOCKED_MNIST", False))
if torchvision_mnist_available:
try:
@ -156,9 +158,9 @@ class MNISTDataModule(LightningDataModule):
normalize: bool = False,
seed: int = 42,
batch_size: int = 32,
*args,
**kwargs,
):
*args: Any,
**kwargs: Any,
) -> None:
"""
Args:
data_dir: where to save/load the data
@ -187,22 +189,22 @@ class MNISTDataModule(LightningDataModule):
self.dataset_val = ...
@property
def num_classes(self):
def num_classes(self) -> int:
return 10
def prepare_data(self):
def prepare_data(self) -> None:
"""Saves MNIST files to `data_dir`"""
MNIST(self.data_dir, train=True, download=True)
MNIST(self.data_dir, train=False, download=True)
def setup(self, stage: Optional[str] = None):
def setup(self, stage: Optional[str] = None) -> None:
"""Split the train and valid dataset."""
extra = dict(transform=self.default_transforms) if self.default_transforms else {}
dataset = MNIST(self.data_dir, train=True, download=False, **extra)
dataset: Dataset = MNIST(self.data_dir, train=True, download=False, **extra)
train_length = len(dataset)
self.dataset_train, self.dataset_val = random_split(dataset, [train_length - self.val_split, self.val_split])
def train_dataloader(self):
def train_dataloader(self) -> DataLoader:
"""MNIST train set removes a subset to use for validation."""
loader = DataLoader(
self.dataset_train,
@ -214,7 +216,7 @@ class MNISTDataModule(LightningDataModule):
)
return loader
def val_dataloader(self):
def val_dataloader(self) -> DataLoader:
"""MNIST val set uses a subset of the training set for validation."""
loader = DataLoader(
self.dataset_val,
@ -226,7 +228,7 @@ class MNISTDataModule(LightningDataModule):
)
return loader
def test_dataloader(self):
def test_dataloader(self) -> DataLoader:
"""MNIST test set uses the test split."""
extra = dict(transform=self.default_transforms) if self.default_transforms else {}
dataset = MNIST(self.data_dir, train=False, download=False, **extra)
@ -241,7 +243,7 @@ class MNISTDataModule(LightningDataModule):
return loader
@property
def default_transforms(self):
def default_transforms(self) -> Optional[Callable]:
if not _TORCHVISION_AVAILABLE:
return None
if self.normalize:

View File

@ -132,6 +132,7 @@ _TORCH_GREATER_EQUAL_1_12 = _compare_version("torch", operator.ge, "1.12.0", use
_APEX_AVAILABLE = _module_available("apex.amp")
_BAGUA_AVAILABLE = _package_available("bagua")
_DALI_AVAILABLE = _module_available("nvidia.dali")
_DEEPSPEED_AVAILABLE = _package_available("deepspeed")
_DEEPSPEED_GREATER_EQUAL_0_5_9 = _DEEPSPEED_AVAILABLE and _compare_version("deepspeed", operator.ge, "0.5.9")
_DEEPSPEED_GREATER_EQUAL_0_6 = _DEEPSPEED_AVAILABLE and _compare_version("deepspeed", operator.ge, "0.6.0")
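The availability flags above are typically consumed as import guards; a hedged sketch (the `nvidia.dali` import here is only illustrative):

```python
# Hypothetical guard pattern around an optional dependency.
from pytorch_lightning.utilities.imports import _DALI_AVAILABLE

if _DALI_AVAILABLE:
    import nvidia.dali  # noqa: F401  # only importable when DALI is installed
else:
    print("nvidia.dali not found; the DALI example will be skipped")
```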

View File

@ -41,7 +41,7 @@ This is useful if you do not test against all required dependency versions.
**Docker:** Another option is to utilize the [pytorch lightning cuda base docker image](https://hub.docker.com/repository/docker/pytorchlightning/pytorch_lightning/tags?page=1&name=cuda). You can then run:
```bash
python -m pytest pytorch_lightning tests pl_examples -v
python -m pytest src/pytorch_lightning tests/unittests_pl -v
```
You can also run a single test as follows: