Update data docs (#16839)

Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Carlos Mocholí 2023-03-14 18:44:49 +01:00 committed by GitHub
parent 67b94ef124
commit 281efc682f
8 changed files with 237 additions and 410 deletions

View File

@@ -0,0 +1,43 @@
:orphan:
Accessing DataLoaders
=====================
If you need access to the :class:`torch.utils.data.DataLoader` or :class:`torch.utils.data.Dataset` objects, the DataLoaders for each step can be accessed
via the trainer properties :meth:`~lightning.pytorch.trainer.trainer.Trainer.train_dataloader`,
:meth:`~lightning.pytorch.trainer.trainer.Trainer.val_dataloaders`,
:meth:`~lightning.pytorch.trainer.trainer.Trainer.test_dataloaders`, and
:meth:`~lightning.pytorch.trainer.trainer.Trainer.predict_dataloaders`.
.. code-block:: python
dataloaders = trainer.train_dataloader
dataloaders = trainer.val_dataloaders
dataloaders = trainer.test_dataloaders
dataloaders = trainer.predict_dataloaders
These properties will match exactly what was returned in your ``*_dataloader`` hooks or passed to the ``Trainer``,
meaning that if you returned a dictionary of dataloaders, these will return a dictionary of dataloaders.
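For example, a minimal sketch (assuming a hypothetical ``model`` whose ``val_dataloader`` hook returns a list of two DataLoaders):
.. code-block:: python
trainer = Trainer()
trainer.fit(model)

# the property mirrors the structure returned by the hook: a list of two loaders
val_loaders = trainer.val_dataloaders
first_dataset = val_loaders[0].dataset  # the underlying Dataset of the first loader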
Replacing DataLoaders
---------------------
If you are using a :class:`~lightning.pytorch.utilities.CombinedLoader`, a flattened list of DataLoaders can be accessed by doing:
.. code-block:: python
from lightning.pytorch.utilities import CombinedLoader
iterables = {"dl1": dl1, "dl2": dl2}
combined_loader = CombinedLoader(iterables)
# access the original iterables
assert combined_loader.iterables is iterables
# the `.flattened` property can be convenient
assert combined_loader.flattened == [dl1, dl2]
# for example, to do a simple loop
updated = []
for dl in combined_loader.flattened:
    new_dl = apply_some_transformation_to(dl)
    updated.append(new_dl)
# it also allows you to easily replace the dataloaders
combined_loader.flattened = updated
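``apply_some_transformation_to`` above is only a placeholder. As an illustration of what such a hypothetical transformation could look like, here is a sketch that rebuilds each DataLoader with a doubled batch size:
.. code-block:: python
from torch.utils.data import DataLoader


def apply_some_transformation_to(dl):
    # hypothetical helper: re-create the DataLoader with a larger batch size
    return DataLoader(dl.dataset, batch_size=dl.batch_size * 2, num_workers=dl.num_workers)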

View File

@@ -1,16 +1,17 @@
:orphan:
.. _dataiters:
##################################
Injecting 3rd Party Data Iterables
##################################
Using 3rd Party Data Iterables
==============================
When training a model on a specific task, data loading and preprocessing might become a bottleneck.
Lightning does not enforce a specific data loading approach nor does it try to control it.
The only assumption Lightning makes is that the data is returned as an iterable of batches.
The only assumption Lightning makes is that a valid iterable is provided.
For PyTorch-based programs, these iterables are typically instances of :class:`~torch.utils.data.DataLoader`.
However, Lightning also supports other data types such as plain list of batches, generators or other custom iterables.
However, Lightning also supports other data types such as a list of batches, generators, or other custom iterables or
collections of the former.
.. code-block:: python
@@ -20,13 +21,24 @@ However, Lightning also supports other data types such as plain list of batches,
trainer = Trainer()
trainer.fit(model, data)
Examples for custom iterables include `NVIDIA DALI <https://github.com/NVIDIA/DALI>`__ or `FFCV <https://github.com/libffcv/ffcv>`__ for computer vision.
Both libraries offer support for custom data loading and preprocessing (also hardware accelerated) and can be used with Lightning.
Below we showcase Lightning examples with packages that compete with the generic PyTorch DataLoader and might be
faster depending on your use case. They might require custom data serialization, loading, and preprocessing that
is often hardware accelerated.
.. TODO(carmocca)
StreamingDataset
^^^^^^^^^^^^^^^^
For example, taking the example from FFCV's readme, we can use it with Lightning by just removing the hardcoded ``ToDevice(0)``
as Lightning takes care of GPU placement. In case you want to use some data transformations on GPUs, change the
``ToDevice(0)`` to ``ToDevice(self.trainer.local_rank)`` to correctly map to the desired GPU in your pipeline.
The `StreamingDataset <https://github.com/mosaicml/streaming>`__
FFCV
^^^^
Taking the example from the `FFCV <https://github.com/libffcv/ffcv>`__ readme, we can use it with Lightning
by just removing the hardcoded ``ToDevice(0)`` as Lightning takes care of GPU placement. In case you want to use some
data transformations on GPUs, change the ``ToDevice(0)`` to ``ToDevice(self.trainer.local_rank)`` to correctly map to
the desired GPU in your pipeline. When moving data to a specific device, you can always refer to
``self.trainer.local_rank`` to get the accelerator used by the current process.
.. code-block:: python
@@ -54,8 +66,15 @@ as Lightning takes care of GPU placement. In case you want to use some data tran
return loader
When moving data to a specific device, you can always refer to ``self.trainer.local_rank`` to get the accelerator
used by the current process.
.. TODO(carmocca)
WebDataset
^^^^^^^^^^
The `WebDataset <https://webdataset.github.io/webdataset>`__
NVIDIA DALI
^^^^^^^^^^^
By just changing ``device_id=0`` to ``device_id=self.trainer.local_rank`` we can also leverage DALI's GPU decoding:
@@ -107,8 +126,8 @@ Lightning works with all kinds of custom data iterables as shown above. There ar
be supported this way. These restrictions come from the fact that for their support,
Lightning needs to know a lot about the internals of these iterables.
- In a distributed multi-GPU setting (ddp),
Lightning automatically replaces the DataLoader's sampler with its distributed counterpart.
This makes sure that each GPU sees a different part of the dataset.
As sampling can be implemented in arbitrary ways with custom iterables,
there is no way for Lightning to know, how to replace the sampler.
- In a distributed multi-GPU setting (ddp), Lightning wraps the DataLoader's sampler with a wrapper for distributed
support. This makes sure that each GPU sees a different part of the dataset. As sampling can be implemented in
arbitrary ways with custom iterables, Lightning might not be able to do this for you. If this is the case, you can use
the :paramref:`~lightning.pytorch.trainer.trainer.Trainer.use_distributed_sampler` argument to disable this logic and
set the distributed sampler yourself, as sketched below.
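A minimal sketch of that escape hatch (this is an illustration, not the only way to do it; ``self.train_dataset`` is an assumed attribute holding a map-style dataset, and the batch size is arbitrary):
.. code-block:: python
from torch.utils.data import DataLoader, DistributedSampler


class LitModel(LightningModule):
    def train_dataloader(self):
        # Lightning will not replace this sampler because use_distributed_sampler=False
        sampler = DistributedSampler(
            self.train_dataset,
            num_replicas=self.trainer.world_size,
            rank=self.trainer.global_rank,
            shuffle=True,
        )
        return DataLoader(self.train_dataset, batch_size=32, sampler=sampler)


# define training_step/configure_optimizers as usual, then:
trainer = Trainer(accelerator="gpu", devices=2, use_distributed_sampler=False)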

View File

@@ -0,0 +1,54 @@
.. _data:
Complex data uses
=================
.. raw:: html
<div class="display-card-container">
<div class="row">
.. displayitem::
:header: LightningDataModules
:description: Introduction to the LightningDataModule
:col_css: col-md-4
:button_link: datamodule.html
:height: 150
:tag: basic
.. displayitem::
:header: Iterables
:description: What is an iterable? How do I use them?
:col_css: col-md-4
:button_link: iterables.html
:height: 150
:tag: basic
.. displayitem::
:header: Access your data
:description: How to access your dataloaders
:col_css: col-md-4
:button_link: access.html
:height: 150
:tag: basic
.. displayitem::
:header: Streaming datasets
:description: Using iterable-style datasets with Lightning
:col_css: col-md-4
:button_link: streaming.html
:height: 150
:tag: intermediate
.. displayitem::
:header: Faster DataLoaders
:description: How alternative dataloader projects can be used with Lightning
:col_css: col-md-4
:button_link: alternatives.html
:height: 150
:tag: advanced
.. raw:: html
</div>
</div>

View File

@@ -25,8 +25,6 @@ This class can then be shared and used anywhere:
.. code-block:: python
from pl_bolts.datamodules import CIFAR10DataModule, ImagenetDataModule
model = LitClassifier()
trainer = Trainer()
@@ -56,8 +54,11 @@ Datamodules are for you if you ever asked the questions:
*********************
What is a DataModule?
*********************
A DataModule is simply a collection of a train_dataloader(s), val_dataloader(s), test_dataloader(s) and
predict_dataloader(s) along with the matching transforms and data processing/downloads steps required.
The :class:`~lightning.pytorch.core.datamodule.LightningDataModule` is a convenient way to manage data in PyTorch Lightning.
It encapsulates training, validation, testing, and prediction dataloaders, as well as any necessary steps for data processing,
downloads, and transformations. By using a :class:`~lightning.pytorch.core.datamodule.LightningDataModule`, you can
easily develop dataset-agnostic models, hot-swap different datasets, and share data splits and transformations across projects.
Here's a simple PyTorch example:
@@ -411,7 +412,10 @@ the method runs on the correct devices).
trainer.test(datamodule=dm)
You can access the current used datamodule of a trainer via ``trainer.datamodule`` and the current used
dataloaders via ``trainer.train_dataloader``, ``trainer.val_dataloaders`` and ``trainer.test_dataloaders``.
dataloaders via the trainer properties :meth:`~lightning.pytorch.trainer.trainer.Trainer.train_dataloader`,
:meth:`~lightning.pytorch.trainer.trainer.Trainer.val_dataloaders`,
:meth:`~lightning.pytorch.trainer.trainer.Trainer.test_dataloaders`, and
:meth:`~lightning.pytorch.trainer.trainer.Trainer.predict_dataloaders`.
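A short sketch of that access pattern (``MNISTDataModule`` and ``Model`` stand in for any ``LightningDataModule`` and ``LightningModule``):
.. code-block:: python
dm = MNISTDataModule()
model = Model()
trainer = Trainer()
trainer.fit(model, datamodule=dm)

# after fit, the datamodule and the resolved dataloaders are available on the trainer
assert trainer.datamodule is dm
train_loader = trainer.train_dataloader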
----------------

View File

@@ -0,0 +1,93 @@
:orphan:
Arbitrary iterable support
==========================
Python iterables are objects that can be iterated or looped over. Examples of iterables in Python include lists and dictionaries.
In PyTorch, a :class:`torch.utils.data.DataLoader` is also an iterable which typically retrieves data from a :class:`torch.utils.data.Dataset` or :class:`torch.utils.data.IterableDataset`.
The :class:`~lightning.pytorch.trainer.trainer.Trainer` works with arbitrary iterables, but most people will use a :class:`torch.utils.data.DataLoader` as the iterable to feed data to the model.
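As a minimal sketch of that flexibility (it assumes a ``LightningModule`` whose ``training_step`` can consume these raw tensor batches):
.. code-block:: python
import torch
from lightning.pytorch import Trainer

# a plain Python list of tensors is itself a valid iterable of batches
data = [torch.randn(32, 10) for _ in range(100)]
model = ...  # assumed: a LightningModule that knows how to handle these batches
trainer = Trainer(max_epochs=1)
trainer.fit(model, data)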
.. _multiple-dataloaders:
Multiple Iterables
------------------
In addition to supporting arbitrary iterables, the ``Trainer`` also supports arbitrary collections of iterables. Some examples of this are:
.. code-block:: python
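# any of the following can be returned from a `*_dataloader` hook or passed directly to the Trainer

# pass a single iterable, e.g. a DataLoader or a plain list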
return DataLoader(...)
return list(range(1000))
# pass loaders as a dict. This will create batches like this:
# {'a': batch_from_loader_a, 'b': batch_from_loader_b}
return {"a": DataLoader(...), "b": DataLoader(...)}
# pass loaders as list. This will create batches like this:
# [batch_from_dl_1, batch_from_dl_2]
return [DataLoader(...), DataLoader(...)]
# {'a': [batch_from_dl_1, batch_from_dl_2], 'b': [batch_from_dl_3, batch_from_dl_4]}
return {"a": [dl1, dl2], "b": [dl3, dl4]}
Lightning automatically collates the batches from multiple iterables based on a "mode". This is done with our
:class:`~lightning.pytorch.utilities.combined_loader.CombinedLoader` class.
The list of modes available can be found by looking at the :paramref:`~lightning.pytorch.utilities.combined_loader.CombinedLoader.mode` documentation.
By default, the ``"max_size_cycle"`` mode is used during training and the ``"sequential"`` mode is used during validation, testing, and prediction.
To choose a different mode, you can use the :class:`~lightning.pytorch.utilities.combined_loader.CombinedLoader` class directly with your mode of choice:
.. code-block:: python
from lightning.pytorch.utilities import CombinedLoader
iterables = {"a": DataLoader(), "b": DataLoader()}
combined_loader = CombinedLoader(iterables, mode="min_size")
model = ...
trainer = Trainer()
trainer.fit(model, combined_loader)
Currently, the ``trainer.validate``, ``trainer.test``, and ``trainer.predict`` methods only support the ``"sequential"`` mode, while the ``trainer.fit`` method does not support it.
Support for this feature is tracked in this `issue <https://github.com/Lightning-AI/lightning/issues/16830>`__.
Note that when using the ``"sequential"`` mode, you need to add an additional argument ``dataloader_idx`` to some specific hooks.
Lightning will `raise an error <https://github.com/Lightning-AI/lightning/pull/16837>`__ informing you of this requirement.
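For instance, a sketch of what such a hook signature looks like when multiple validation iterables are used:
.. code-block:: python
class LitModel(LightningModule):
    def validation_step(self, batch, batch_idx, dataloader_idx=0):
        # `dataloader_idx` identifies which of the validation iterables produced this batch
        ...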
Using LightningDataModule
-------------------------
You can set more than one :class:`~torch.utils.data.DataLoader` in your :class:`~lightning.pytorch.core.datamodule.LightningDataModule` using its DataLoader hooks
and Lightning will use the correct one.
.. testcode::
class DataModule(LightningDataModule):
    def train_dataloader(self):
        # any iterable or collection of iterables
        return DataLoader(self.train_dataset)

    def val_dataloader(self):
        # any iterable or collection of iterables
        return [DataLoader(self.val_dataset_1), DataLoader(self.val_dataset_2)]

    def test_dataloader(self):
        # any iterable or collection of iterables
        return DataLoader(self.test_dataset)

    def predict_dataloader(self):
        # any iterable or collection of iterables
        return DataLoader(self.predict_dataset)
Using LightningModule Hooks
---------------------------
The exact same code as above works when these hooks are overridden in your :class:`~lightning.pytorch.core.module.LightningModule`.
Passing the iterables to the Trainer
------------------------------------
The same support for arbitrary iterables, or collections of iterables, applies to the dataloader arguments of
:meth:`~lightning.pytorch.trainer.trainer.Trainer.fit`, :meth:`~lightning.pytorch.trainer.trainer.Trainer.validate`,
:meth:`~lightning.pytorch.trainer.trainer.Trainer.test`, and :meth:`~lightning.pytorch.trainer.trainer.Trainer.predict`.
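A minimal sketch of passing such collections directly (it assumes ``train_dl_a``, ``train_dl_b``, ``val_dl_1``, and ``val_dl_2`` are existing DataLoaders):
.. code-block:: python
model = ...
trainer = Trainer()

# a dict of iterables passed directly to fit
trainer.fit(model, train_dataloaders={"a": train_dl_a, "b": train_dl_b})

# a list of iterables passed to validate; they are consumed sequentially by default
trainer.validate(model, dataloaders=[val_dl_1, val_dl_2])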

View File

@@ -1,385 +0,0 @@
:orphan:
.. _data:
#############
Managing Data
#############
****************************
Data Containers in Lightning
****************************
There are a few different data containers used in Lightning:
.. list-table:: Data objects
:widths: 20 80
:header-rows: 1
* - Object
- Definition
* - :class:`~torch.utils.data.Dataset`
- The PyTorch :class:`~torch.utils.data.Dataset` represents a map from keys to data samples.
* - :class:`~torch.utils.data.IterableDataset`
- The PyTorch :class:`~torch.utils.data.IterableDataset` represents a stream of data.
* - :class:`~torch.utils.data.DataLoader`
- The PyTorch :class:`~torch.utils.data.DataLoader` represents a Python iterable over a Dataset.
* - :class:`~lightning.pytorch.core.datamodule.LightningDataModule`
- A :class:`~lightning.pytorch.core.datamodule.LightningDataModule` is simply a collection of: training DataLoader(s), validation DataLoader(s), test DataLoader(s) and predict DataLoader(s), along with the matching transforms and data processing/downloads steps required.
Why Use LightningDataModule?
============================
The :class:`~lightning.pytorch.core.datamodule.LightningDataModule` was designed as a way of decoupling data-related hooks from the :class:`~lightning.pytorch.core.module.LightningModule` so you can develop dataset agnostic models. The :class:`~lightning.pytorch.core.datamodule.LightningDataModule` makes it easy to hot swap different Datasets with your model, so you can test it and benchmark it across domains. It also makes sharing and reusing the exact data splits and transforms across projects possible.
Read :ref:`this <datamodules>` for more details on LightningDataModule.
---------
.. _multiple-dataloaders:
*****************
Multiple Datasets
*****************
There are a few ways to pass multiple Datasets to Lightning:
1. Create a DataLoader that iterates over multiple Datasets under the hood.
2. In the training loop, you can pass multiple DataLoaders as a dict or list/tuple, and Lightning will
automatically combine the batches from different DataLoaders.
3. During validation, testing, or prediction, you can return multiple DataLoaders as a list/tuple, which Lightning will call sequentially,
   or combine the DataLoaders using :class:`~lightning.pytorch.utilities.CombinedLoader`, which is what Lightning uses
   under the hood.
Using LightningDataModule
=========================
You can set more than one :class:`~torch.utils.data.DataLoader` in your :class:`~lightning.pytorch.core.datamodule.LightningDataModule` using its DataLoader hooks
and Lightning will use the correct one.
.. testcode::
class DataModule(LightningDataModule):
    ...

    def train_dataloader(self):
        return DataLoader(self.train_dataset)

    def val_dataloader(self):
        return [DataLoader(self.val_dataset_1), DataLoader(self.val_dataset_2)]

    def test_dataloader(self):
        return DataLoader(self.test_dataset)

    def predict_dataloader(self):
        return DataLoader(self.predict_dataset)
Using LightningModule Hooks
===========================
Concatenated Dataset
--------------------
For training with multiple Datasets, you can create a :class:`~torch.utils.data.DataLoader` class
which wraps your multiple Datasets using :class:`~torch.utils.data.ConcatDataset`. This, of course,
also works for testing, validation, and prediction Datasets.
.. testcode::
from torch.utils.data import ConcatDataset


class LitModel(LightningModule):
    def train_dataloader(self):
        concat_dataset = ConcatDataset([datasets.ImageFolder(traindir_A), datasets.ImageFolder(traindir_B)])
        loader = DataLoader(
            concat_dataset, batch_size=args.batch_size, shuffle=True, num_workers=args.workers, pin_memory=True
        )
        return loader

    def val_dataloader(self):
        # SAME
        ...

    def test_dataloader(self):
        # SAME
        ...
Return Multiple DataLoaders
---------------------------
You can set multiple DataLoaders in your :class:`~lightning.pytorch.core.module.LightningModule`, and Lightning will take care of batch combination.
.. testcode::
class LitModel(LightningModule):
    def train_dataloader(self):
        loader_a = DataLoader(range(6), batch_size=4)
        loader_b = DataLoader(range(15), batch_size=5)

        # pass loaders as a dict. This will create batches like this:
        # {'a': batch from loader_a, 'b': batch from loader_b}
        loaders = {"a": loader_a, "b": loader_b}

        # OR:
        # pass loaders as sequence. This will create batches like this:
        # [batch from loader_a, batch from loader_b]
        loaders = [loader_a, loader_b]
        return loaders
Furthermore, Lightning also supports nested lists and dicts (or a combination).
.. testcode::
class LitModel(LightningModule):
    def train_dataloader(self):
        loader_a = DataLoader(range(8), batch_size=4)
        loader_b = DataLoader(range(16), batch_size=2)
        return {"a": loader_a, "b": loader_b}

    def training_step(self, batch, batch_idx):
        # access a dictionary with a batch from each DataLoader
        batch_a = batch["a"]
        batch_b = batch["b"]
.. testcode::
class LitModel(LightningModule):
    def train_dataloader(self):
        loader_a = DataLoader(range(8), batch_size=4)
        loader_b = DataLoader(range(16), batch_size=4)
        loader_c = DataLoader(range(32), batch_size=4)
        loader_d = DataLoader(range(64), batch_size=4)

        # pass loaders as a nested dict. This will create batches like this:
        # {'loaders_a_b': [batch from loader_a, batch from loader_b],
        #  'loaders_c_d': {'c': batch from loader_c, 'd': batch from loader_d}}
        loaders = {"loaders_a_b": [loader_a, loader_b], "loaders_c_d": {"c": loader_c, "d": loader_d}}
        return loaders

    def training_step(self, batch, batch_idx):
        # access the data
        batch_a_b = batch["loaders_a_b"]
        batch_c_d = batch["loaders_c_d"]

        batch_a = batch_a_b[0]
        batch_b = batch_a_b[1]
        batch_c = batch_c_d["c"]
        batch_d = batch_c_d["d"]
Alternatively, you can also pass in a :class:`~lightning.pytorch.utilities.CombinedLoader` containing multiple DataLoaders.
.. testcode::
from lightning.pytorch.utilities import CombinedLoader


def train_dataloader(self):
    loader_a = DataLoader()
    loader_b = DataLoader()
    loaders = {"a": loader_a, "b": loader_b}
    combined_loader = CombinedLoader(loaders, mode="max_size_cycle")
    return combined_loader


def training_step(self, batch, batch_idx):
    batch_a = batch["a"]
    batch_b = batch["b"]
Multiple Validation/Test/Predict DataLoaders
============================================
For validation, test and predict DataLoaders, you can pass a single DataLoader or a list of them. This optional named
parameter can be used in conjunction with any of the above use cases. You can choose to pass
the batches sequentially or simultaneously, as is done for the training step.
The default mode for these DataLoaders is sequential. Note that when using a sequence of DataLoaders you need
to add an additional argument ``dataloader_idx`` in their corresponding step specific hook. The corresponding loop will process
the DataLoaders in sequential order; that is, the first DataLoader will be processed completely, then the second one, and so on.
Refer to the following for more details for the default sequential option:
- :meth:`~lightning.pytorch.core.hooks.DataHooks.val_dataloader`
- :meth:`~lightning.pytorch.core.hooks.DataHooks.test_dataloader`
- :meth:`~lightning.pytorch.core.hooks.DataHooks.predict_dataloader`
.. testcode::
def val_dataloader(self):
    loader_1 = DataLoader()
    loader_2 = DataLoader()
    return [loader_1, loader_2]


def validation_step(self, batch, batch_idx, dataloader_idx):
    ...
Evaluation DataLoaders are iterated over sequentially. The above is equivalent to:
.. testcode::
from lightning.pytorch.utilities import CombinedLoader


def val_dataloader(self):
    loader_a = DataLoader()
    loader_b = DataLoader()
    loaders = {"a": loader_a, "b": loader_b}
    combined_loaders = CombinedLoader(loaders, mode="sequential")
    return combined_loaders


def validation_step(self, batch, batch_idx, dataloader_idx):
    # in "sequential" mode, each batch comes from a single DataLoader at a time,
    # so the hook receives the additional ``dataloader_idx`` argument
    ...
Evaluate with Additional DataLoaders
====================================
You can evaluate your models using additional DataLoaders even if the DataLoader specific hooks haven't been defined within your
:class:`~lightning.pytorch.core.module.LightningModule`. For example, this would be the case if your test data
set is not available at the time your model was declared. Simply pass the test set to the :meth:`~lightning.pytorch.trainer.trainer.Trainer.test` method:
.. code-block:: python
# setup your DataLoader
test = DataLoader(...)
# test (pass in the loader)
trainer.test(dataloaders=test)
--------------
********************************************
Accessing DataLoaders within LightningModule
********************************************
In the case that you require access to the DataLoader or Dataset objects, DataLoaders for each step can be accessed using the ``Trainer`` object:
.. testcode::
from lightning.pytorch import LightningModule


class Model(LightningModule):
    def test_step(self, batch, batch_idx, dataloader_idx):
        test_dl = self.trainer.test_dataloaders[dataloader_idx]
        test_dataset = test_dl.dataset
        test_sampler = test_dl.sampler
        ...

        # extract metadata, etc. from the dataset:
        ...
If you are using a :class:`~lightning.pytorch.utilities.CombinedLoader` object, which allows you to fetch batches from a collection of DataLoaders
simultaneously and supports collections such as lists, tuples, or dictionaries, the DataLoaders can be accessed using the same collection structure:
.. code-block:: python
from lightning.pytorch.utilities import CombinedLoader
test_dl1 = ...
test_dl2 = ...
# If you provided a list of DataLoaders:
combined_loader = CombinedLoader([test_dl1, test_dl2])
list_of_loaders = combined_loader.iterables
test_dl1 = list_of_loaders[0]
# If you provided dictionary of DataLoaders:
combined_loader = CombinedLoader({"dl1": test_dl1, "dl2": test_dl2})
dictionary_of_loaders = combined_loader.iterables
test_dl1 = dictionary_of_loaders["dl1"]
--------------
.. _sequential-data:
***************
Sequential Data
***************
Lightning has built-in support for dealing with sequential data.
Packed Sequences as Inputs
==========================
When using :class:`~torch.nn.utils.rnn.PackedSequence`, do two things:
1. Return either a padded tensor in dataset or a list of variable length tensors in the DataLoader's `collate_fn <https://pytorch.org/docs/stable/data.html#dataloader-collate-fn>`_ (example shows the list implementation).
2. Pack the sequence in forward or training and validation steps depending on use case.
|
.. testcode::
# For use in DataLoader
def collate_fn(batch):
    x = [item[0] for item in batch]
    y = [item[1] for item in batch]
    return x, y


# In LightningModule
def training_step(self, batch, batch_idx):
    x = rnn.pack_sequence(batch[0], enforce_sorted=False)
    y = rnn.pack_sequence(batch[1], enforce_sorted=False)
Iterable Datasets
=================
Lightning supports using :class:`~torch.utils.data.IterableDataset` as well as map-style Datasets. IterableDatasets provide a more natural
option when using sequential data.
.. note:: When using an :class:`~torch.utils.data.IterableDataset` you must set the ``val_check_interval`` to 1.0 (the default) or an int
(specifying the number of training batches to run before each validation loop) when initializing the Trainer. This is
because the IterableDataset does not have a ``__len__`` and Lightning requires this to calculate the validation
interval when ``val_check_interval`` is less than one. Similarly, you can set ``limit_{mode}_batches`` to a float or
an int. If it is set to 0.0 or 0, it will set ``num_{mode}_batches`` to 0, if it is an int, it will set ``num_{mode}_batches``
to ``limit_{mode}_batches``, if it is set to 1.0 it will run for the whole dataset, otherwise it will throw an exception.
Here ``mode`` can be train/val/test/predict.
When iterable datasets are used, Lightning will pre-fetch 1 batch (in addition to the current batch) so it can detect
when the training will stop and run validation if necessary.
.. testcode::
# IterableDataset
class CustomDataset(IterableDataset):
    def __init__(self, data):
        self.data_source = data

    def __iter__(self):
        return iter(self.data_source)


# Setup DataLoader
def train_dataloader(self):
    seq_data = ["A", "long", "time", "ago", "in", "a", "galaxy", "far", "far", "away"]
    iterable_dataset = CustomDataset(seq_data)
    dataloader = DataLoader(dataset=iterable_dataset, batch_size=5)
    return dataloader
.. testcode::
# Set val_check_interval
trainer = Trainer(val_check_interval=100)
# Set limit_val_batches to 0.0 or 0
trainer = Trainer(limit_val_batches=0.0)
# Set limit_val_batches as an int
trainer = Trainer(limit_val_batches=100)

View File

@@ -206,7 +206,7 @@ Current Lightning Users
Train on single or multiple TPUs <accelerators/tpu>
Train on MPS <accelerators/mps>
Use a pretrained model <advanced/pretrained>
Inject Custom Data Iterables <data/custom_data_iterables>
data/data
model/own_your_loop
.. toctree::
@@ -233,7 +233,6 @@ Current Lightning Users
Lightning CLI <cli/lightning_cli>
LightningDataModule <data/datamodule>
LightningModule <common/lightning_module>
Lightning Transformers <https://pytorch-lightning.readthedocs.io/en/stable/ecosystem/transformers.html>
Log <visualize/loggers>
TPU <accelerators/tpu>
Metrics <https://torchmetrics.readthedocs.io/en/stable/>